Commit a1fc53f0 authored by Simon Ciranka

the Markdown file is always messed up

parent f5275bbe
modify here to only have two levels. I will also use delta-like
learning rules, where uncertainty about the true outcome distribution can
be interpreted as a learning rate.
```r
HGF_Like<- function(v){
rho<-v[1]
  # ... (remaining lines of the function elided in this diff view)
#Upper bounds - I also had beta max 1 min -1 as in your script but I wonder could be even more ambig averse so I think larger range is also fine...
G2
}
```
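The delta-like update described above can be sketched as a small standalone function. This is an illustrative assumption, not the exact `HGF_Like` implementation (part of which is elided above): `delta_update`, the initial belief, and the precision term `rho` are hypothetical names chosen to show how uncertainty can act as a learning rate.

```r
# Illustrative delta-rule update: the belief about p(blue) moves toward each
# binary outcome, with a step size that grows with the current uncertainty.
delta_update <- function(outcomes, rho) {
  mu     <- 0.5   # initial belief about p(blue)
  sigma2 <- 0.25  # initial uncertainty (variance)
  for (o in outcomes) {
    lr     <- sigma2 / (sigma2 + rho)  # uncertainty-scaled learning rate
    mu     <- mu + lr * (o - mu)       # prediction-error update
    sigma2 <- (1 - lr) * sigma2        # uncertainty shrinks as evidence accrues
  }
  mu
}

delta_update(c(1, 1, 0, 1), rho = 0.5)
```

With this parameterisation the learning rate declines over trials, so early marbles move the belief more than later ones.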
Sequential Updating
-------------------
EACH piece of new information. By this we allow the participants'
estimate to deviate from Bayes optimal. A Bayes-optimal observer would have an
*ω* of 1; underweighting of new information corresponds to a value smaller
than 1, and overweighting to a value larger than 1.
*Posterior*<sub>*i*</sub><sup>*p*(*blue*)</sup> ∼ *Beta*(*α* + *ω* ⋅ *k*, *β* + *ω* ⋅ (*N* − *k*))
The mean of this posterior is then compared to the actual probability
estimate of the participants to create logliks.
```r
simpleLearningRate<- function(v){
lr<-v[1]
for (i in 1:nrow(subjectLevel)){
  # ... (remaining lines of the function elided in this diff view)
#Upper bounds - I also had beta max 1 min -1 as in your script but I wonder could be even more ambig averse so I think larger range is also fine...
G2
}
```
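The weighted Beta update from the equation above can be sketched as a small self-contained function. This is an illustrative assumption, not the elided `simpleLearningRate` code: `weighted_beta_mean` and the uniform *Beta*(1, 1) prior are hypothetical choices made only to show the arithmetic.

```r
# Sketch of the weighted Beta update: after seeing k blue marbles out of N,
# the Beta(alpha, beta) prior is updated with evidence weighted by omega.
# omega = 1 is Bayes optimal; omega < 1 underweights new information.
weighted_beta_mean <- function(k, N, omega, alpha = 1, beta = 1) {
  a <- alpha + omega * k
  b <- beta + omega * (N - k)
  a / (a + b)  # posterior mean estimate of p(blue)
}

weighted_beta_mean(k = 7, N = 10, omega = 1)    # Bayes-optimal posterior mean
weighted_beta_mean(k = 7, N = 10, omega = 0.5)  # underweighting pulls toward the prior
```

The posterior mean computed this way is what gets compared to the participants' probability estimates when building the likelihood.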
Exponential Discount Factor.
----------------------------
put it to the power of some discount factor *δ*, a good heuristic given
that the stimuli are presented for such a short amount of time.
```r
holisticDiscount<- function(v){
delta<-v[1]
for (i in 1:nrow(subjectLevel)){
  # ... (remaining lines of the function elided in this diff view)
#Upper bounds - I also had beta max 1 min -1 as in your script but I wonder could be even more ambig averse so I think larger range is also fine...
G2
}
```
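The discounting heuristic described above can be sketched in a few lines. This is an illustrative assumption, not the elided `holisticDiscount` code: `discounted_estimate` is a hypothetical name, and the sketch assumes the observed proportion of blue marbles is what gets raised to the power *δ*.

```r
# Sketch of the exponential discount heuristic: the observed proportion of
# blue marbles is raised to the power of a discount factor delta.
# delta = 1 returns the raw proportion; delta < 1 inflates estimates
# (for proportions below 1), delta > 1 deflates them.
discounted_estimate <- function(k, N, delta) {
  (k / N)^delta
}

discounted_estimate(k = 6, N = 10, delta = 1)    # raw proportion, 0.6
discounted_estimate(k = 6, N = 10, delta = 0.5)  # sqrt(0.6), pushed toward 1
```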
Data Loading
------------
In this chunk of code I load the data, which I made by first loading
the raw data in MATLAB, squeezing the struct into two-dimensional
data, and running the script [01\_makeDataFrame.R](01_makeDataFrame.R).
```r
load("RobertsMarbleDf.RData")
data$sub.id<-as.numeric(data$sub.id)
Subs<-unique(data$sub.id)
data$sequence.marbles.color1<-as.character(data$sequence.marbles.color1) #blue
data$sequence.marbles.color2<-as.character(data$sequence.marbles.color2) #red
sub.list<-list()
```
Model Fitting
-------------
In the following I fit the model with R's optim function and store the
fitted parameters in the same data frame.
```r
for (i in 1:length(Subs)){
subjectLevel<-data[data$sub.id==Subs[i],]
output<-optim(c(1), fn = HGF_Like, method = c("Brent"),upper = 10,lower = 0)
  # ... (remaining lines elided in this diff view)
data$LLHGF[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$LL_win
}
```
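The fitting pattern above, a one-parameter model minimised with optim's Brent method, can be illustrated on a toy objective. `toy_objective` and its bounds are hypothetical, chosen only to show the call signature; Brent's method requires finite `lower`/`upper` bounds and is intended for one-dimensional problems like these single-parameter models.

```r
# Toy illustration of the optim call pattern used for each model above.
toy_objective <- function(v) (v[1] - 2)^2 + 1  # minimum at v = 2, value 1

fit <- optim(c(1), fn = toy_objective, method = "Brent", lower = 0, upper = 10)
fit$par    # minimising parameter value, 2
fit$value  # objective at the minimum, 1
```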
Here I Fit the Simple Learning Rate Model
-----------------------------------------
```r
for (i in 1:length(Subs)){
subjectLevel<-data[data$sub.id==Subs[i],]
output<-optim(c(1), fn = simpleLearningRate, method = c("Brent"),upper = 5,lower = 0)
  # ... (remaining lines elided in this diff view)
data$learningRateSimple[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$lr
data$LLSimple[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$LL_win
}
```
Here I Fit the Discount Learning Rate Model
-------------------------------------------
```r
for (i in 1:length(Subs)){
subjectLevel<-data[data$sub.id==Subs[i],]
  # ... (remaining lines elided in this diff view)
data$LLDisc[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$LL_win
}
```
Model Comparison
----------------
Model”, the “sequential updating” and the “exponential discounting”
model. The discounting model and the HGF-like model are quite close;
sequential updating performs worst.
```r
data %>% gather( key = ModelLik, value = GSquared,LLSimple, LLHGF, LLDisc) %>%
distinct(GSquared,ModelLik) %>%
ggplot(aes(x=as.factor(ModelLik),y=GSquared,color=as.factor(ModelLik)))+
  # ... (remaining lines elided in this diff view)
breaks = c("LLDisc", "LLHGF", "LLSimple"),
labels = c("Exponential Discount", "HGF Like Belief Update", "Weighted Beta Update"))+
my_theme
```
![](HalfHGF_files/figure-markdown_strict/unnamed-chunk-1-1.png)
So now let's look at the learning rates.