Commit cb90e933 authored by Simon Ciranka's avatar Simon Ciranka

I implemented MLE and Wouter's version of the Bayesian updating

parent 002ea0ed
@@ -296,14 +296,14 @@ library(ggplot2)
learningPlot<-ggplot(,aes(x=1,y=unique(data$learningRateLinear)))+geom_violin(color="red",fill="red",alpha=0.1)+geom_jitter(width = 0.2)+geom_boxplot(width=0.1, alpha=0.5)+
ggtitle("LearningRate Distribution Marbles")+
ylab("Learning Rate Estimate: Sequential")+
ylab("Learning Rate Estimate: Linear")+
xlab("Data Density")+geom_hline(yintercept = 1) +
annotate("text", x=0.65, y=1, vjust = -1, label = "Ideal Observer")+
my_theme
learningDiscounting<-ggplot(,aes(x=1,y=unique(data$learningRateExp)))+geom_violin(color="blue",fill="blue",alpha=0.1)+geom_jitter(width = 0.2)+geom_boxplot(width=0.1, alpha=0.5)+
ggtitle("LearningRate Distribution Marbles")+
ylab("Learning Rate Estimate: Sequential")+
ylab("Learning Rate Estimate: Exponential")+
xlab("Data Density")+geom_hline(yintercept = 1) +
annotate("text", x=0.65, y=1, vjust = -1, label = "Ideal Observer")+
my_theme
@@ -14,16 +14,15 @@ when you talk about the arguments that you pass to the cumulative
density function which you use to get a log likelihood, you say sigma
is sigma. In the example you sent to me, you do not use sigma as the
variance estimate of the beta updating before. You pass sigma to the
-function one level higher that you then minimize. **Is sigma as you use it
-a free parameter of the model as well?** So you don't have one but two free
+function one level higher that you then minimize. Is sigma as you use it
+a free parameter of the model as well? So you don't have one but two free
parameters?
discretenormalMarble005(posResp, model_weight, sigma); % THIS IS THE CUSTOM FUNCTION WE BUILT FOR THIS STEP, SEE BELOW (posResp is the list made above, I guess of all possible responses a subject can make).. sigma is sigma..
Make the cumulative Density
============================
```r
posResp = seq(0,1,0.01);
posResp = round(posResp,2);
#sigma=0.1
@@ -33,7 +32,6 @@ Make the cumulative Density
lower = min(possibleResp);
upper = max(possibleResp);
top = seq((lower+0.005),(upper+0.005),0.01);
bottom = seq((lower-0.005),(upper-0.005),0.01);
@@ -43,7 +41,6 @@ Make the cumulative Density
m = p*possibleResp;
return(list(p,m))
}
```
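The collapsed hunks hide most of the function body. As a point of
reference, here is a minimal self-contained sketch of what such a
discretized normal over the response grid could look like, assuming the
hidden middle computes bin masses with `pnorm` over the ±0.005 bin edges
built above; the name and signature follow the
`discretenormalMarble005(posResp, model_weight, sigma)` call quoted
earlier, so treat this as an assumed reconstruction, not the
repository's actual code.

```r
# Sketch (assumed reconstruction): probability mass of a normal(mu, sigma)
# in each 0.01-wide bin centred on the possible responses.
discretenormalMarble005 <- function(possibleResp, mu, sigma){
  lower  = min(possibleResp);
  upper  = max(possibleResp);
  top    = seq((lower+0.005),(upper+0.005),0.01);  # upper bin edges
  bottom = seq((lower-0.005),(upper-0.005),0.01);  # lower bin edges
  p = pnorm(top, mean = mu, sd = sigma) - pnorm(bottom, mean = mu, sd = sigma);
  m = p*possibleResp;  # per-bin contribution to the mean
  return(list(p,m))
}

# usage: mass the model puts on each possible response, given mean 0.7
pm <- discretenormalMarble005(posResp, 0.7, 0.1)
sum(pm[[1]])  # close to 1 when sigma is small relative to the grid span
```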
Sequential Updating
-------------------
@@ -79,8 +76,6 @@ than 1 and overweighting larger than 1.
The mean of this posterior is then compared to the participants' actual
probability estimates to compute the log-likelihoods.
```r
linearWeights <- function(v){
sigma=1;
lr<-v[1]
@@ -138,7 +133,6 @@ estimate of the participants to create logliks.
G2 <- -2*G2
G2
}
```
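Only the head and tail of `linearWeights` survive the collapsed diff.
Going by the description above (marble counts scaled by a single
learning rate before a Beta update, the posterior mean scored against
the reported estimate through the discretized normal, and the summed
log-likelihood returned as G²), a sketch of the full objective might
look like the following; the `estimate` column and the space-separated
sequence encoding are assumptions, not the repository's actual format.

```r
# Sketch of the linear-weight objective (assumed reconstruction).
# Relies on subjectLevel, posResp and discretenormalMarble005 from above;
# the "estimate" column and the sequence encoding are assumptions.
linearWeights <- function(v){
  sigma <- 1
  lr <- v[1]
  G2 <- 0
  for (t in 1:nrow(subjectLevel)){
    blue <- sum(as.numeric(strsplit(subjectLevel$sequence.marbles.color1[t], " ")[[1]]))
    red  <- sum(as.numeric(strsplit(subjectLevel$sequence.marbles.color2[t], " ")[[1]]))
    a <- 1 + lr*blue        # Beta(1,1) prior, counts scaled by the learning rate
    b <- 1 + lr*red
    modelMean <- a/(a + b)  # posterior mean of the weighted Beta update
    p <- discretenormalMarble005(posResp, modelMean, sigma)[[1]]
    # log-mass of the bin that contains the participant's estimate
    G2 <- G2 + log(p[which.min(abs(posResp - subjectLevel$estimate[t]))])
  }
  G2 <- -2*G2
  G2
}
```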
Exponential Discount Factor.
----------------------------
@@ -151,8 +145,6 @@ it to the power of some discount factor *δ*
a good heuristic given that the stimuli are presented for such a short
amount of time.
```r
exponentialWeights <- function(v){
sigma=1;
lr<-v[1]
@@ -209,7 +201,6 @@ amount of time.
G2 <- -2*G2
G2
}
```
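`exponentialWeights` is collapsed in the same way. Here is a sketch
mirroring the linear version, with the marble counts raised to the power
of the discount factor *δ* instead of multiplied by a learning rate;
exactly which term is exponentiated is hidden in the diff, so this
placement is an assumption.

```r
# Sketch of the exponential-discount objective (assumed reconstruction);
# identical to the linearWeights sketch except for the two update lines.
exponentialWeights <- function(v){
  sigma <- 1
  delta <- v[1]
  G2 <- 0
  for (t in 1:nrow(subjectLevel)){
    blue <- sum(as.numeric(strsplit(subjectLevel$sequence.marbles.color1[t], " ")[[1]]))
    red  <- sum(as.numeric(strsplit(subjectLevel$sequence.marbles.color2[t], " ")[[1]]))
    a <- 1 + blue^delta     # counts raised to the power of delta
    b <- 1 + red^delta
    modelMean <- a/(a + b)
    p <- discretenormalMarble005(posResp, modelMean, sigma)[[1]]
    G2 <- G2 + log(p[which.min(abs(posResp - subjectLevel$estimate[t]))])
  }
  -2*G2
}
```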
Data Loading
------------
@@ -218,20 +209,16 @@ In this chunk of code I load the data, which I made by first loading
the raw data in MATLAB, squeezing the struct into two-dimensional
data, and running the script [01\_makeDataFrame.R](01_makeDataFrame.R)
```r
load("RobertsMarbleDf.RData")
data$sub.id<-as.numeric(data$sub.id)
Subs<-unique(data$sub.id)
data$sequence.marbles.color1<-as.character(data$sequence.marbles.color1) #blue
data$sequence.marbles.color2<-as.character(data$sequence.marbles.color2) #red
sub.list<-list()
```
Here I Fit the Simple Learning Rate Model.
-----------------------------------------
```r
for (i in 1:length(Subs)){
subjectLevel<-data[data$sub.id==Subs[i],]
output<-optim(c(1), fn = linearWeights, method = c("Brent"),upper = 5,lower = 0)
@@ -257,12 +244,10 @@ Here I Fit the Simple Learning Rate Model.
data$learningRateLinear[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$lr
data$LLLinear[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$LL_win
}
```
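`optim()` with `method = "Brent"` is for one-dimensional problems only
(internally it uses `optimize()`) and, unlike the other methods, it
requires finite `lower` and `upper` bounds; the `c(1)` starting value is
essentially a placeholder here. Note also that `linearWeights` reads
`subjectLevel` from the enclosing environment, so the assignment inside
the loop is what feeds it each subject's data. A tiny self-contained
illustration of the interface:

```r
# One free parameter, finite box constraints required by method = "Brent".
sse <- function(v) (v[1] - 0.8)^2   # toy 1-D objective
fit <- optim(par = 1, fn = sse, method = "Brent", lower = 0, upper = 5)
fit$par     # ~0.8, the minimizer
fit$value   # objective value at the minimum
```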
Here I Fit the Discount Learning Rate Model.
-------------------------------------------
```r
for (i in 1:length(Subs)){
subjectLevel<-data[data$sub.id==Subs[i],]
output<-optim(c(1), fn = exponentialWeights, method = c("Brent"),upper = 10,lower = 0)
@@ -288,7 +273,6 @@ Here I Fit the Discount Learning Rate Model.
data$learningRateExp[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$delta
data$LLExp[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$LL_win
}
```
Model Comparison
----------------
@@ -298,7 +282,6 @@ Model”, the “sequential Updating” and the “exponential discounting”
model. The discounting model and the HGF-like model are quite close.
Sequential Updating is bad.
```r
data %>% gather( key = ModelLik, value = GSquared, LLLinear, LLExp) %>%
distinct(GSquared,ModelLik) %>%
ggplot(aes(x=as.factor(ModelLik),y=GSquared,color=as.factor(ModelLik)))+
@@ -310,7 +293,6 @@ Sequential Updating is bad.
breaks = c("LLLinear", "LLExp"),
labels = c("Linear Weight Beta Update", "Exponential Weight Beta Update"))+
my_theme
```
![](LiklelihoodUpdate_files/figure-markdown_strict/unnamed-chunk-1-1.png)
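Since both fitted models have a single free parameter, their G² values
can be compared directly, without a complexity penalty. Below is a small
sketch of a per-subject tally, assuming the `LLLinear` and `LLExp`
columns created above.

```r
library(dplyr)

# Per-subject winner: lower G squared means a better fit; both models
# have one free parameter, so no penalty term is needed.
data %>%
  distinct(sub.id, LLLinear, LLExp) %>%
  mutate(winner = ifelse(LLLinear < LLExp, "linear", "exponential")) %>%
  count(winner)
```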