diff --git a/News_For_Wouter/Sep_2018/README.md b/News_For_Wouter/Sep_2018/README.md
index 299f39414e3428dd9ec216e3b96abe90d2738e96..2e54ee8b33e053e2a9efe05b0a9a15821a9ae0d4 100644
--- a/News_For_Wouter/Sep_2018/README.md
+++ b/News_For_Wouter/Sep_2018/README.md
@@ -1,4 +1,4 @@
-Marbles in Roberts data
+# Marbles in Robert's data
-=======================
Here I introduce the three possible learning models that I implemented. I
@@ -7,19 +7,20 @@ Reinforcement learning like belief updating with a delta rule. Then I
implement a sequential Beta updating model and later the Exponential
Discount factor model which we discussed in today's mails.
-HGF like updating with only one free paramter.
+## HGF-like updating with only one free parameter
-==============================================
I did this before our mails, so I thought I'd just leave it here. I am
-trying something new here the probability of the Binomial Outcome
+trying something new. The probability of the Binomial Outcome
distribution is not assumed to be Beta, but normally distributed.
-Binomal and Normals are not Conjugate so Neural population encoding also
-encodes via approximate normals, so if we want to find neural correlates
-of uncertainty maybe such a model would be more adequate. Mathys
-proposed the HGF as a learning model under uncertainty which i slightly
-modify here to only have two levels. I will also use the delta like
+The good thing here is that we don't rely on the assumption that, at some
+point in the information processing, the bits are made discrete and then
+somehow updated into a probability distribution. We have a continuous
+representation and reinforcement-learning-like update rules.
+
+Mathys proposed the HGF as a learning model under uncertainty, which I slightly
+modify here to have only two levels and no coupling parameter. I will also use delta-like
learning rules where uncertainty about the true outcome distribution can
-be interpreted as a learning rate.
+be interpreted as an adaptive learning rate.
```r
@@ -32,8 +33,6 @@ be interpreted as a learning rate.
priorMu=0.5;
priorSig=1;
obs=NA;# make an array
- #hazard=1;
- # here i need to make my outcomes sequential.
red<-strsplit(subjectLevel$sequence.marbles.color2[i],"")
red<-as.numeric(unlist(red))# prepare the array
@@ -75,7 +74,7 @@ be interpreted as a learning rate.
}
```
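The two-level update described above can be sketched as follows. This is a minimal, hypothetical illustration, not the repository's code: the function and variable names, the unit observation precision, and the prior values are all my assumptions.

```r
# Minimal sketch of the two-level, delta-rule update described above.
# The belief about the red-marble probability is a normal with mean `mu`
# and variance `sig`; the variance shrinks with each draw and acts as an
# adaptive learning rate on the prediction error. All names are assumed.
deltaUpdate <- function(outcomes, priorMu = 0.5, priorSig = 1) {
  mu  <- priorMu
  sig <- priorSig
  mus <- numeric(length(outcomes))
  for (t in seq_along(outcomes)) {
    sig <- 1 / (1 / sig + 1)              # precision grows by one unit per draw
    mu  <- mu + sig * (outcomes[t] - mu)  # uncertainty-weighted prediction error
    mus[t] <- mu
  }
  mus
}
```

Early draws move the estimate a lot (high uncertainty, high learning rate); later draws move it less as `sig` shrinks.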
-Sequential Updating
+## Sequential Updating
--------------------
In this model each piece of evidence is weighted sequentially in the
@@ -144,7 +143,7 @@ estimate of the participants to create logliks.
}
```
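A minimal sketch of such a sequential Beta update; the names and the uniform Beta(1,1) prior are my assumptions, not necessarily what the chunk above uses.

```r
# Sequential Beta updating: each draw increments the matching pseudo-count,
# and the running estimate is the posterior mean of the Beta distribution.
betaUpdate <- function(outcomes, a = 1, b = 1) {  # Beta(1,1) = uniform prior
  est <- numeric(length(outcomes))
  for (t in seq_along(outcomes)) {
    a <- a + outcomes[t]        # red draws (coded 1)
    b <- b + (1 - outcomes[t])  # blue draws (coded 0)
    est[t] <- a / (a + b)       # posterior mean
  }
  est
}
```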
-Exponential Discount Factor.
+## Exponential Discount Factor
-----------------------------
This is the model as I understood it from your mail. Instead of having a
@@ -191,7 +190,7 @@ amount of time.
```
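As I understand the model, older draws are down-weighted by a factor gamma per time step. A hypothetical sketch (names and the default gamma are mine):

```r
# Exponential discounting: the estimate at trial t is a weighted mean of all
# draws so far, with weights gamma^(age of the draw). gamma = 1 recovers the
# plain running mean; smaller gamma forgets older evidence faster.
discountEstimate <- function(outcomes, gamma = 0.9) {
  est <- numeric(length(outcomes))
  for (t in seq_along(outcomes)) {
    w <- gamma^((t - 1):0)                 # newest draw gets weight 1
    est[t] <- sum(w * outcomes[1:t]) / sum(w)
  }
  est
}
```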
Ok, so far so good. In the following I am going to fit these models to the behavioral data of the "Entscheidungs" experiment.
-Data Loading
+### Data Loading
-------------
In this chunk of code I load the data, which I made by first loading
@@ -209,7 +208,7 @@ data and run the script [01\_makeDataFrame.R](01_makeDataFrame.R)
```
-Model Fitting
+### Model Fitting
--------------
In the following I fit the model with R's optim function and store the
@@ -245,7 +244,7 @@ fitted Parameters in the same dataFrame
```
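The general shape of such a fit, assuming a Gaussian response model around the model's running belief. The likelihood, the response-noise constant, and the toy data below are all my assumptions, not the script's actual code.

```r
# Hypothetical negative log-likelihood for a single-learning-rate model:
# the participant's estimates are treated as noisy readouts of the model's
# running belief `mu`. `respSd` (response noise) is an assumed constant.
negLogLik <- function(alpha, outcomes, responses, respSd = 0.1) {
  mu  <- 0.5
  nll <- 0
  for (t in seq_along(outcomes)) {
    mu  <- mu + alpha * (outcomes[t] - mu)
    nll <- nll - dnorm(responses[t], mean = mu, sd = respSd, log = TRUE)
  }
  nll
}

# optim with box constraints keeps the learning rate inside (0, 1]
fit <- optim(par = 0.3, fn = negLogLik,
             outcomes  = c(1, 1, 0, 1),             # toy draw sequence
             responses = c(0.60, 0.70, 0.60, 0.70),
             method = "L-BFGS-B", lower = 1e-3, upper = 1)
```

The fitted `fit$par` is the learning rate; the same pattern repeats for the other models with their own parameters.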
-Here i Fit the Simple LearningRate Model.
+#### Here I Fit the Simple Learning-Rate Model
------------------------------------------
```r
@@ -278,7 +277,7 @@ Here i Fit the Simple LearningRate Model.
```
-Here i Fit the Discount LearningRate Model.
+#### Here I Fit the Discount Learning-Rate Model
--------------------------------------------
```r
@@ -310,7 +309,7 @@ Here i Fit the Discount LearningRate Model.
```
-Model Comparison
+#### Model Comparison
-----------------
Here I judge via G^2 which model is best. I compare the “HGF Like
@@ -335,7 +334,7 @@ Sequential Updating is bad.
```
![](HalfHGF_files/figure-markdown_strict/unnamed-chunk-1-1.png)
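For nested models, G^2 is twice the log-likelihood difference, referred to a chi-square with df equal to the number of extra free parameters. A small sketch; the function and argument names are mine:

```r
# G^2 for nested models: twice the log-likelihood gain of the more complex
# model, tested against a chi-square with the extra parameter count as df.
# Both inputs are negative log-likelihoods, as returned by the fits above.
gSquared <- function(nllSimple, nllComplex, dfDiff = 1) {
  g <- 2 * (nllSimple - nllComplex)
  c(Gsq = g, p = pchisq(g, df = dfDiff, lower.tail = FALSE))
}
```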
-So now lets look at the learning rates.
+## So now let's look at the learning rates
---------------------------------------
### Marble Estimate Distribution