Commit 1e6050b8 authored by Simon Ciranka

the Markdown file is always messed up

parent 7a6a0985
# Marbles in Roberts data
Here I introduce the three possible learning models that I implemented. I start with reinforcement-learning-like belief updating with a delta rule. Then I implement a sequential Beta updating model, and later the Exponential Discount factor model which we discussed in today's mails.
## HGF-like updating with only one free parameter
I did this before our mails, so I thought I would just leave it here. I am trying something new: the probability of the Binomial outcome distribution is not assumed to be Beta, but normally distributed. The good thing here is that we don't rely on the assumption that at some point in the information processing the bits are made discrete and then somehow updated into a probability distribution. We have a continuous representation and reinforcement-learning-like update rules.

Mathys proposed the HGF as a learning model under uncertainty, which I slightly modify here to only have two levels and no coupling parameter. I will also use delta-like learning rules, where the uncertainty about the true outcome distribution can be interpreted as an adaptive learning rate.
```r
priorMu = 0.5
priorSig = 1
obs = NA # make an array
red <- strsplit(subjectLevel$sequence.marbles.color2[i], "")
red <- as.numeric(unlist(red)) # prepare the array
# ...
}
```
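The uncertainty-as-learning-rate idea above can be sketched as a two-level, precision-weighted delta rule. This is an illustrative reconstruction, not the original script: `update_belief`, `obs_noise`, and the example outcomes are my own names and numbers.

```r
# Illustrative two-level, precision-weighted delta rule (my own names,
# not the original script). The belief about the reward probability is
# Normal(mu, sig); sig acts as an adaptive learning rate.
update_belief <- function(mu, sig, outcome, obs_noise = 1) {
  lr    <- sig / (sig + obs_noise)  # uncertainty-weighted learning rate
  delta <- outcome - mu             # prediction error
  list(mu  = mu + lr * delta,
       sig = sig * (1 - lr))        # belief sharpens after each outcome
}

mu  <- 0.5                          # priorMu, as in the chunk above
sig <- 1                            # priorSig
outcomes <- c(1, 1, 0, 1)           # example marble draws (1 = red)
for (o in outcomes) {
  b   <- update_belief(mu, sig, o)
  mu  <- b$mu
  sig <- b$sig
}
```

With these four draws `mu` ends at 0.7 while `sig` shrinks from 1 to 0.2: early surprises move the belief a lot, later ones less and less.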
## Sequential Updating
In this model each piece of evidence is weighted sequentially in the …

… estimate of the participants to create logliks.

```r
# ...
}
```
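A minimal sketch of sequential Beta updating, assuming each marble simply increments the Beta pseudo-counts (function and variable names are illustrative, not the fitted model):

```r
# Illustrative sequential Beta updating (names are mine): each marble
# updates the Beta pseudo-counts, and the posterior mean is the model's
# probability estimate after every single draw.
beta_update <- function(outcomes, a = 1, b = 1) {
  est <- numeric(length(outcomes))
  for (t in seq_along(outcomes)) {
    a <- a + outcomes[t]        # a red marble (1) adds to alpha
    b <- b + (1 - outcomes[t])  # a blue marble (0) adds to beta
    est[t] <- a / (a + b)       # posterior mean after draw t
  }
  est
}

beta_update(c(1, 1, 0, 1))  # posterior means: 2/3, 3/4, 3/5, 2/3
```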
## Exponential Discount Factor
This is the model as I understood it from your mail. Instead of having a …

… amount of time.

```r
# ...
```
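My reading of the discount-factor idea, sketched with illustrative names: a marble seen `age` draws ago is down-weighted by `discount^age`, so with `discount = 1` the model reduces to the plain running mean.

```r
# Illustrative sketch of the exponential discount idea (my reading, my
# names): a marble seen `age` draws ago gets weight discount^age, so
# old evidence counts less than recent evidence.
discount_estimate <- function(outcomes, discount = 0.9) {
  n   <- length(outcomes)
  age <- (n - 1):0            # the most recent draw has age 0
  w   <- discount^age         # exponentially decaying weights
  sum(w * outcomes) / sum(w)  # weighted probability estimate
}

discount_estimate(c(1, 1, 0, 1), discount = 0.9)
discount_estimate(c(1, 1, 0, 1), discount = 1)  # plain mean, 0.75
```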
Ok, so far so good. In the following I am going to fit these models to the behavioral data of the "Entscheidungs" experiment.
### Data Loading
In this chunk of code I load the data, which I made by first loading the data and running the script [01\_makeDataFrame.R](01_makeDataFrame.R).

```r
# ...
```
### Model Fitting
In the following I fit the model with R's `optim` function and store the fitted parameters in the same dataFrame.

```r
# ...
```
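A minimal sketch of such an `optim` fit, assuming a delta-rule model whose learning rate is squashed into (0, 1) with `plogis`; the data, the Gaussian response noise of 0.1, and all names are placeholders, not the actual dataFrame:

```r
# Illustrative optim fit (data and names are placeholders, not the real
# dataFrame): minimise the negative log-likelihood of the participant's
# probability estimates under a simple delta-rule model.
neg_loglik <- function(par, outcomes, estimates) {
  lr  <- plogis(par)  # squash the learning rate into (0, 1)
  mu  <- 0.5
  nll <- 0
  for (t in seq_along(outcomes)) {
    mu  <- mu + lr * (outcomes[t] - mu)  # delta-rule belief update
    nll <- nll - dnorm(estimates[t], mean = mu, sd = 0.1, log = TRUE)
  }
  nll
}

outcomes  <- c(1, 1, 0, 1)          # example marble draws
estimates <- c(0.6, 0.7, 0.6, 0.7)  # made-up participant estimates
fit <- optim(par = 0, fn = neg_loglik, outcomes = outcomes,
             estimates = estimates, method = "BFGS")
plogis(fit$par)  # fitted learning rate on the (0, 1) scale
```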
#### Here I Fit the Simple LearningRate Model
```r
# ...
```
#### Here I Fit the Discount LearningRate Model
```r
# ...
```
#### Model Comparison
Here I judge via G² which model is the best. I compare the “HGF Like …

… Sequential Updating is bad.

```r
# ...
```
![](HalfHGF_files/figure-markdown_strict/unnamed-chunk-1-1.png) ![](HalfHGF_files/figure-markdown_strict/unnamed-chunk-1-1.png)
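One common reading of the G² comparison, assuming G² here means the deviance 2 · (negative log-likelihood), so lower is better; the likelihood values below are invented for illustration, not the fitted ones:

```r
# Illustrative G^2 comparison (the likelihood values are invented, not
# the fitted ones): G^2 = 2 * negative log-likelihood, so smaller is
# better; AIC additionally penalises free parameters.
g2  <- function(nll) 2 * nll
aic <- function(nll, k) 2 * nll + 2 * k

nll <- c(HGF = 120.3, SeqBeta = 150.9, Discount = 118.7)  # made-up fits
k   <- c(HGF = 1,     SeqBeta = 1,     Discount = 2)      # free parameters

g2(nll)                        # deviance of every model
names(which.min(aic(nll, k)))  # best model once parameters are penalised
```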
# So now let's look at the learning rates.
### Marble Estimate Distribution