Commit 2ae4e77c authored by Simon Ciranka

aha

parent cb90e933
@@ -2,7 +2,7 @@
title: "Bayesian Learning Marbles"
author: "Simon"
date: "9/5/2018"
output: md_document
output: html_document
---
```{r setup, include=FALSE}
@@ -40,14 +40,14 @@ posResp = round(posResp,2);
#sigma=0.1
#here I define a function that gives us the probability density of all possible values given the mean and the standard deviation
#of the posterior beta distribution that results from binomial updating.
discretenormalMarble005<-function (possibleResp, mean, stdev){
discretenormalMarble005<-function (possibleResp, a, b){
lower = min(possibleResp);
upper = max(possibleResp);
top = seq((lower+0.005),(upper+0.005),0.01);
bottom = seq((lower-0.005),(upper-0.005),0.01);
unp = pnorm(top,mean,stdev) - pnorm(bottom,mean,stdev);
unp = pbeta(top,a,b) - pbeta(bottom,a,b);
unp = 0.9998*unp+0.0001;
p = unp/(sum(unp));
m = p*possibleResp;
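# usage sketch (an assumption, not part of the diff): posResp is taken to be
# the 0.01-spaced response grid built earlier, and the truncated body is
# assumed to return pr as its first list element.
posResp <- round(seq(0, 1, 0.01), 2)
out <- discretenormalMarble005(posResp, 3, 2)  # posterior after 2 successes, 1 failure on a Beta(1,1) prior
pr  <- out[[1]]                                # discrete probability of each possible response
pr[which(posResp == 0.6)]                      # likelihood of a participant who estimates 0.6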
@@ -75,8 +75,8 @@ linearWeights<- function(v){
lr<-v[1]
for (i in 1:nrow(subjectLevel)){
#reset shape parameters
aBeta<-0 # number of successes
bBeta<-0 # number of losses
aBeta<-1 # number of successes
bBeta<-1 # number of losses
#preallocate internal variables
mu<-rep(0,nrow(subjectLevel))
sigma<-rep(0,nrow(subjectLevel))
@@ -107,8 +107,8 @@ linearWeights<- function(v){
# get pr and m. I need to dig into what this means exactly.
# I use the estimated mean and sigma to be able to get discrete log likelihoods.
pr = discretenormalMarble005(posResp,mu, 0.1)[[1]];
m = discretenormalMarble005(posResp,mu, 0.1)[[2]];
pr = discretenormalMarble005(posResp,aBeta, bBeta)[[1]];
m = discretenormalMarble005(posResp,aBeta, bBeta)[[2]];
#check where the possible responses equal the exact response of the subject. Now we have the response likelihood.
probEstimate = pr[which(posResp==subEstimate)]; # SEE HERE YOU FIND THE PROB EST THAT FITS THE GIVEN SUBJECTIVE VALUE
#zero doesn't work.
@@ -173,8 +173,8 @@ exponentialWeights<- function(v){
# get pr and m. I need to dig into what this means exactly.
# I use the estimated mean and sigma to be able to get discrete log likelihoods.
pr = discretenormalMarble005(posResp,mu, 0.1)[[1]];
m = discretenormalMarble005(posResp,mu, 0.1)[[2]];
pr = discretenormalMarble005(posResp,aBeta, bBeta)[[1]];
m = discretenormalMarble005(posResp,aBeta, bBeta)[[2]];
#check where the possible responses equal the exact response of the subject. Now we have the response likelihood.
probEstimate = pr[which(posResp==subEstimate)]; # SEE HERE YOU FIND THE PROB EST THAT FITS THE GIVEN SUBJECTIVE VALUE
#zero doesn't work.
@@ -284,7 +284,7 @@ data %>% gather( key = ModelLik, value = GSquared, LLLinear, LLExp) %>%
scale_color_discrete(
name="Models",
breaks = c("LLLinear", "LLExp"),
labels = c("Exponential Weight Beta Update", " Weight Beta Update"))+
labels = c("Linear Weight Beta Update", " Exponential Weight Beta Update"))+
my_theme
```
@@ -294,14 +294,14 @@ data %>% gather( key = ModelLik, value = GSquared, LLLinear, LLExp) %>%
library(ggplot2)
learningPlot<-ggplot(,aes(x=1,y=unique(data$learningRateLinear)))+geom_violin(color="red",fill="red",alpha=0.1)+geom_jitter(width = 0.2)+geom_boxplot(width=0.1, alpha=0.5)+
learningPlot<-ggplot(,aes(x=1,y=unique(data$learningRateLinear)))+geom_violin(color="blue",fill="blue",alpha=0.1)+geom_jitter(width = 0.2)+geom_boxplot(width=0.1, alpha=0.5)+
ggtitle("LearningRate Distribution Marbles")+
ylab("Learning Rate Estimate: Linear")+
xlab("Data Density")+geom_hline(yintercept = 1) +
annotate("text", x=0.65, y=1, vjust = -1, label = "Ideal Observer")+
my_theme
learningDiscounting<-ggplot(,aes(x=1,y=unique(data$learningRateExp)))+geom_violin(color="blue",fill="blue",alpha=0.1)+geom_jitter(width = 0.2)+geom_boxplot(width=0.1, alpha=0.5)+
learningDiscounting<-ggplot(,aes(x=1,y=unique(data$learningRateExp)))+geom_violin(color="red",fill="red",alpha=0.1)+geom_jitter(width = 0.2)+geom_boxplot(width=0.1, alpha=0.5)+
ggtitle("LearningRate Distribution Marbles")+
ylab("Learning Rate Estimate: Exponential")+
xlab("Data Density")+geom_hline(yintercept = 1) +
@@ -4,7 +4,7 @@ Implementing Binomial Updating in Wouter's way.
These code chunks and results implement the models from Wouter's
mail of 11.9.2018. Unlike before, we now calculate the log
likelihood of our participants' estimates by assuming that the range of
possible estimates is normally distributed. The probability density
possible estimates is *beta* distributed. The probability density
under the specific probability estimate of the participant is now the
likelihood. For this I need a function that returns discrete values for
the probability density in question.
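Concretely, each possible response on the 0.01 grid gets the probability
mass of its bin under the Beta(a, b) posterior,

$$P(\hat{p}) \propto F_{a,b}(\hat{p} + 0.005) - F_{a,b}(\hat{p} - 0.005),$$

where $F_{a,b}$ is the Beta(a, b) CDF; the `0.9998*unp+0.0001` step in the
code below only keeps any bin from getting exactly zero mass.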
@@ -14,11 +14,7 @@ when you talk about the arguments that you pass to the cumulative
density function which you use to get a log likelihood, you say sigma
is sigma. In the example you sent to me, you do not use sigma as the
variance estimate of the beta updating before. You pass sigma with the
function one higher level that you then minimize. **Is sigma as you use it
a free parameter of the model as well?** So you dont have one but two free
parameters?
discretenormalMarble005(posResp,model_weight, sigma); % THIS IS THE CUSTOM FUNCTION WE BUILT FOR THIS STEP SEE BELOW (posResp is list made above. I guess of all possible resonses a subject can make).. sigma is sigma..
function one higher level that you then minimize.
Make the cumulative Density
===========================
@@ -29,15 +25,14 @@ Make the cumulative Density
#sigma=0.1
#here I define a function that gives us the probability density of all possible values given the mean and the standard deviation
#of the posterior beta distribution that results from binomial updating.
discretenormalMarble005<-function (possibleResp, mean, stdev){
discretenormalMarble005<-function (possibleResp, a, b){
lower = min(possibleResp);
upper = max(possibleResp);
top = seq((lower+0.005),(upper+0.005),0.01);
bottom = seq((lower-0.005),(upper-0.005),0.01);
unp = pnorm(top,mean,stdev) - pnorm(bottom,mean,stdev);
unp = pbeta(top,a,b) - pbeta(bottom,a,b);
unp = 0.9998*unp+0.0001;
p = unp/(sum(unp));
m = p*possibleResp;
@@ -80,14 +75,13 @@ than 1 and overweighting larger than 1.
estimate of the participants to create logliks.
```r
linearWeights<- function(v){
sigma=1;
lr<-v[1]
for (i in 1:nrow(subjectLevel)){
#reset shape parameters
aBeta<-0 # number of successes
bBeta<-0 # number of losses
aBeta<-1 # number of successes
bBeta<-1 # number of losses
#preallocate internal variables
mu<-rep(0,nrow(subjectLevel))
sigma<-rep(0,nrow(subjectLevel))
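# hedged sketch (not shown by the diff): the trial loop that follows is cut
# off by the hunk. With linear weighting, each outcome is added to the shape
# parameters scaled by the learning rate, so lr = 1 recovers the ideal
# observer. nSuccess/nFailure are illustrative column names, not the real
# variables in subjectLevel.
for (t in 1:nrow(subjectLevel)) {
  aBeta <- aBeta + lr * subjectLevel$nSuccess[t]  # evidence scaled by learning rate
  bBeta <- bBeta + lr * subjectLevel$nFailure[t]
  mu[t] <- aBeta / (aBeta + bBeta)                # posterior mean after trial t
}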
@@ -118,8 +112,8 @@ estimate of the participants to create logliks.
# get pr and m. I need to dig into what this means exactly.
# I use the estimated mean and sigma to be able to get discrete log likelihoods.
pr = discretenormalMarble005(posResp,mu, 0.1)[[1]];
m = discretenormalMarble005(posResp,mu, 0.1)[[2]];
pr = discretenormalMarble005(posResp,aBeta, bBeta)[[1]];
m = discretenormalMarble005(posResp,aBeta, bBeta)[[2]];
#check where the possible responses equal the exact response of the subject. Now we have the response likelihood.
probEstimate = pr[which(posResp==subEstimate)]; # SEE HERE YOU FIND THE PROB EST THAT FITS THE GIVEN SUBJECTIVE VALUE
#zero doesn't work.
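# on what pr and m mean (a sketch, judging only from the visible function
# body): pr is the discretised posterior pmf over the response grid, and
# m = p*possibleResp element-wise, so sum(pr) is exactly 1 after the
# normalisation step and sum(m) approximates the Beta posterior mean
# aBeta/(aBeta + bBeta).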
@@ -152,7 +146,6 @@ a good heuristic given that the stimuli are presented for such a short
amount of time.
```r
exponentialWeights<- function(v){
sigma=1;
lr<-v[1]
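# hypothetical sketch (the real loop is outside this hunk): exponential
# weighting plausibly decays the old evidence by lr on every trial, so
# recent draws dominate the posterior; nSuccess/nFailure are again
# illustrative names, not the actual variables.
for (t in 1:nrow(subjectLevel)) {
  aBeta <- lr * aBeta + subjectLevel$nSuccess[t]  # old successes decay by lr
  bBeta <- lr * bBeta + subjectLevel$nFailure[t]  # old failures decay by lr
}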
@@ -189,8 +182,8 @@ amount of time.
# get pr and m. I need to dig into what this means exactly.
# I use the estimated mean and sigma to be able to get discrete log likelihoods.
pr = discretenormalMarble005(posResp,mu, 0.1)[[1]];
m = discretenormalMarble005(posResp,mu, 0.1)[[2]];
pr = discretenormalMarble005(posResp,aBeta, bBeta)[[1]];
m = discretenormalMarble005(posResp,aBeta, bBeta)[[2]];
#check where the possible responses equal the exact response of the subject. Now we have the response likelihood.
probEstimate = pr[which(posResp==subEstimate)]; # SEE HERE YOU FIND THE PROB EST THAT FITS THE GIVEN SUBJECTIVE VALUE
#zero doesn't work.
@@ -257,12 +250,13 @@ Here I Fit the Simple LearningRate Model.
data$learningRateLinear[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$lr
data$LLLinear[i] = toMerge[toMerge$PPN == data$sub.id[i], ]$LL_win
}
```
Here I Fit the Discount LearningRate Model.
-------------------------------------------
``` r
```r
for (i in 1:length(Subs)){
subjectLevel<-data[data$sub.id==Subs[i],]
output<-optim(c(1), fn = exponentialWeights, method = c("Brent"),upper = 10,lower = 0)
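# note: "Brent" is optim's one-dimensional method and needs the finite
# lower/upper bounds given above; the fit comes back in optim's standard fields:
# output$par   -- best-fitting learning rate for this subject
# output$value -- minimised objective, presumably what gets stored as LL_win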
@@ -308,28 +302,22 @@ Sequential Updating is bad.
scale_color_discrete(
name="Models",
breaks = c("LLLinear", "LLExp"),
labels = c("Exponential Weight Beta Update", " Weight Beta Update"))+
labels = c("Linear Weight Beta Update", " Exponential Weight Beta Update"))+
my_theme
```
![](LiklelihoodUpdate_files/figure-markdown_strict/unnamed-chunk-1-1.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/unnamed-chunk-1-1.png)
So now let's look at the learning rates.
----------------------------------------
### Marble Estimate Distribution
This plot shows how the learning rates are distributed in all subjects.
We can see that most of the subjects seem to overweight new pieces of
information relative to an Ideal observer. The pattern which we had
before remains. If i use the linear weighting, it looks like new
information is overweighted.
Now that we use proper beta updating it actually does fit better. Both
parameter estimates hint at an *underweighting* of new pieces of information.
![](LiklelihoodUpdate_files/figure-markdown_strict/ShowPlot-1.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/ShowPlot-1.png)
### Marble Estimate Distribution
What I get now if I look at the parameter estimate of the exponential
model is an **underweighting** of all information considered.
![](LiklelihoodUpdate_files/figure-markdown_strict/Show%20second%20Plot-1.png)
![](LiklelihoodUpdateBeta_files/figure-markdown_strict/Show%20second%20Plot-1.png)