Commit 56c2814c authored by Simon Ciranka

Update LiklelihoodUpdateBeta.md

parent daf1a314
Make the Cumulative Density
===========================
```r
posResp = seq(0,1,0.01);
posResp = round(posResp,2);
#sigma=0.1
m = p*possibleResp;
return(list(p,m))
}
```
Sequential Updating
-------------------
EACH piece of new information. By this we can allow the participant's
estimate to deviate from Bayes-optimal. A Bayes-optimal observer would have
an *ω* of 1; underweighting of new information corresponds to a value
smaller than 1 and overweighting to a value larger than 1.
*Posterior*<sub>*i*</sub><sup>*p*(*blue*)</sup> ∼ *Beta*(*α* + *ω* ⋅ *k*, *β* + *ω* ⋅ (*N* − *k*))
The mean of this posterior is then compared to the participants' actual
probability estimates to compute log-likelihoods.
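The weighted update rule above can be sketched in a few lines. The prior shape parameters `alpha0`, `beta0` and the marble counts here are illustrative values for demonstration, not taken from the fitting code below:

```r
# Weighted beta update: Posterior ~ Beta(alpha + w*k, beta + w*(N - k))
# alpha0/beta0 are illustrative prior shape parameters (assumed, not fitted).
weightedBetaMean <- function(alpha0, beta0, k, N, w) {
  a <- alpha0 + w * k       # evidence for blue, scaled by the weight w
  b <- beta0 + w * (N - k)  # evidence for red, scaled by the weight w
  a / (a + b)               # mean of the Beta posterior
}

# With w = 1 (Bayes-optimal) after seeing 7 blue out of 10 marbles:
weightedBetaMean(1, 1, 7, 10, 1)    # (1 + 7) / (2 + 10) = 0.667
# Underweighting (w = 0.5) pulls the estimate back toward the prior mean 0.5:
weightedBetaMean(1, 1, 7, 10, 0.5)  # 4.5 / 7 = 0.643
```

This is the mean that gets compared against the participants' probability estimates.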
```r
linearWeights<- function(v){
sigma=1;
lr<-v[1]
G2 <- -2*G2
G2
}
```
Exponential Discount Factor.
----------------------------
This is the model as I understood it from your mail. Instead of a
linear update as before, I sum over all bits of information and raise
the summed counts to the power of a discount factor *δ*:
*Posterior*<sub>*p*(*blue*)</sub> ∼ *Beta*(*α* + *k*<sup>*δ*</sup>, *β* + (*N* − *k*)<sup>*δ*</sup>)
This assumes more holistic processing, but I think it may actually be
a good heuristic, given that the stimuli are presented for such a short
amount of time.
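The discounted update can likewise be sketched directly; again the prior parameters and counts are illustrative values, not the fitted ones:

```r
# Discounted beta update: Posterior ~ Beta(alpha + k^delta, beta + (N - k)^delta)
# alpha0/beta0 and the counts are illustrative (assumed) values.
discountedBetaMean <- function(alpha0, beta0, k, N, delta) {
  a <- alpha0 + k^delta        # discounted evidence for blue
  b <- beta0 + (N - k)^delta   # discounted evidence for red
  a / (a + b)
}

# delta = 1 recovers the undiscounted update (8/12 after 7 blue of 10):
discountedBetaMean(1, 1, 7, 10, 1)
# delta < 1 compresses the counts, giving a less extreme estimate:
discountedBetaMean(1, 1, 7, 10, 0.5)
```

With *δ* < 1 both counts shrink toward 1, so the posterior mean moves toward the prior; *δ* > 1 exaggerates the evidence instead.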
```r
exponentialWeights<- function(v){
sigma=1;
lr<-v[1]
G2 <- -2*G2
G2
}
```
Data Loading
------------
In this chunk of code I load the data, which I made by first loading
the raw data in Matlab, squeezing the struct into two-dimensional
data, and running the script [01\_makeDataFrame.R](01_makeDataFrame.R).
```r
load("RobertsMarbleDf.RData")
data$sub.id<-as.numeric(data$sub.id)
Subs<-unique(data$sub.id)
data$sequence.marbles.color1<-as.character(data$sequence.marbles.color1) #blue
data$sequence.marbles.color2<-as.character(data$sequence.marbles.color2) #red
sub.list<-list()
```
Here I Fit the Simple Learning-Rate Model
-----------------------------------------
```r
for (i in 1:length(Subs)){
subjectLevel<-data[data$sub.id==Subs[i],]
output<-optim(c(1), fn = linearWeights, method = c("Brent"),upper = 5,lower = 0)
data$learningRateLinear[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$lr
data$LLLinear[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$LL_win
}
```
Here I Fit the Discount Learning-Rate Model
-------------------------------------------
```r
for (i in 1:length(Subs)){
subjectLevel<-data[data$sub.id==Subs[i],]
output<-optim(c(1), fn = exponentialWeights, method = c("Brent"),upper = 10,lower = 0)
data$learningRateExp[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$delta
data$LLExp[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$LL_win
}
```
Model Comparison
----------------
Model”, the “sequential Updating” and the “exponential discounting”
model. The discounting model and the HGF-like model are quite close.
Sequential updating is bad.
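Since G² = −2 · log-likelihood, smaller values mean a better fit, so a quick per-subject tally of which model wins could look like this (the G² vectors are made-up illustrative numbers, not the fitted results):

```r
# Illustrative G^2 values for five subjects (assumed numbers, not the real fits).
G2_linear <- c(212.4, 198.1, 240.3, 175.9, 220.0)
G2_exp    <- c(205.7, 201.5, 231.8, 176.2, 214.9)

# G^2 = -2 * log-likelihood, so the smaller value marks the better-fitting model.
winner <- ifelse(G2_exp < G2_linear, "exponential", "linear")
table(winner)
```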
```r
data %>% gather( key = ModelLik, value = GSquared, LLLinear, LLExp) %>%
distinct(GSquared,ModelLik) %>%
ggplot(aes(x=as.factor(ModelLik),y=GSquared,color=as.factor(ModelLik)))+
breaks = c("LLLinear", "LLExp"),
labels = c("Linear Weight Beta Update", "Exponential Weight Beta Update"))+
my_theme
```
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/unnamed-chunk-1-1.png)
This plot shows how the learning rates are distributed in all subjects.
We can see that most of the subjects seem to overweight new pieces of
information relative to an ideal observer. The pattern which we had
before remains. If I use the linear weighting, it looks like new
information is **underweighted**.
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/ShowPlot-1.png)
### Marble Estimate Distribution
What I get now if I look at the parameter estimate of the exponential
model is again an **underweighting** of all information considered.
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/Show%20second%20Plot-1.png)
Marble data and get the alpha and beta parameters that I need to
calculate all the other fancy things. It works, but I'm breaking my head
over the visualization. How can I visualize it so that it makes sense?
```r
GetShapeParamsExponentialWeights<- function(){
for (i in 1:nrow(subjectLevel)){
#reset shape paramters
AllParams<-rbind(AllParams,data2)
}
```
Even though I still don't know how to visualize this in a meaningful
way, you can look at the distributions for each trial per subject. For
this I build a new data frame that contains the probability densities
which result from the fitted parameters obtained above, in long
format. This helps me at least to identify bad model fits and subjects.
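The long-format construction can be sketched in base R; the `params` data frame here uses illustrative (assumed) shape parameters, whereas in the real pipeline they come from the fitting above:

```r
# Build a long-format data frame of beta densities from fitted shape parameters.
# The alpha/beta values are illustrative (assumed), not the fitted ones.
params <- data.frame(trial = 1:3, alpha = c(2, 5, 9), beta = c(8, 5, 3))
x <- seq(0, 1, 0.01)  # grid of possible probability estimates

# One row per (trial, x) pair: evaluate each trial's beta density on the grid.
densLong <- do.call(rbind, lapply(seq_len(nrow(params)), function(i) {
  data.frame(trial   = params$trial[i],
             x       = x,
             density = dbeta(x, params$alpha[i], params$beta[i]))
}))

head(densLong)  # ready for ggplot facetting by trial (and subject)
```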
```r
library(ggfortify)
library(tidyverse)
x<-seq(0,1,0.01)
my_theme
print(p)
}
```
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-1.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-2.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-3.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-4.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-5.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-6.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-7.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-8.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-9.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-10.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-11.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-12.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-13.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-14.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-15.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-16.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-17.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-18.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-19.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-20.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-21.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-22.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-23.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-24.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-25.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-26.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-27.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-28.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-29.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-30.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-31.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-32.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-33.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-34.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-35.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-36.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-37.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-38.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-39.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-40.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-41.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-42.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-43.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-44.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-45.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-46.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-47.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-48.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-49.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-50.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-51.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-52.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/L-53.png)