---
title: 'Sampling Strategies in Decisions from Experience'
author: "Linus Hof, Thorsten Pachur, Veronika Zilker"
bibliography: sampling-strategies-in-dfe.bib
output:
  html_document:
    code_folding: hide
    toc: yes
    toc_float: yes
    number_sections: no
  pdf_document:
    toc: yes
csl: apa.csl
editor_options: 
  markdown: 
    wrap: sentence
---

```{r}
# load packages
pacman::p_load(repro,
               tidyverse,
               knitr, 
               viridis)
```

# Author Note

This document was created from the commit with the hash `r repro::current_hash()`. 

- Add information on how to reproduce the project.
- Add contact.

# Abstract

A probability-theoretic definition of sampling and a rough stochastic model of the random process underlying decisions from experience are proposed.
It is demonstrated how the stochastic model can be used (a) to explicate assumptions about the sampling and decision strategies that agents may apply and (b) to derive predictions about the resulting decision behavior in terms of functional forms and parameter values.
Synthetic choice data are simulated and modeled with cumulative prospect theory to test these assumptions. 

...

# Introduction

...

## Prospects as Probability Spaces

Let a prospect be a *probability space* $(\Omega, \mathscr{F}, P)$ [@kolmogorovFoundationsTheoryProbability1950; @georgiiStochasticsIntroductionProbability2008, for an accessible introduction].

$\Omega$ is the *sample space* containing an at most countable set of possible outcomes 

$$\begin{equation}
\Omega = \{\omega_1, ..., \omega_n\}
\end{equation}$$ 

$\mathscr{F}$ is a set of subsets of $\Omega$, i.e., the *event space*

$$\begin{equation}
\mathscr{F} = \{A_1, ..., A_n\} = \mathscr{P}(\Omega)
\end{equation}$$

$\mathscr{P}(\Omega)$ denotes the power set of $\Omega$. 

$P$ is a probability measure that maps $\mathscr{F}$ to the set of real numbers in $[0, 1]$ 

$$\begin{equation}
P: \mathscr{F} \mapsto [0,1]
\end{equation}$$

by assigning each $\omega_i \in \Omega$ a probability of $0 \leq p_i \leq 1$ with $P(\Omega) = 1$.

## Random Processes in Sequential Sampling 

In decision research, a standard paradigm is the choice between $n \geq 2$ monetary prospects (hereafter indexed by $j$), where the $\omega_{ij} \in \Omega_j$ are monetary outcomes, i.e., gains and/or losses.
$P_j$ is then the probability measure which assigns each $\omega_{ij}$ the probability with which it occurs. 
In such a choice paradigm, agents are asked to evaluate the prospects and to form a preference for, i.e., to choose, one of them. 
It is common to make a rather crude distinction between two variants of this evaluation process [cf. @hertwigDescriptionExperienceGap2009]. 
For decisions from description (DfD), agents are provided with a full symbolic description of the triples $(\Omega, \mathscr{F}, P)_j$.
For decisions from experience [DfE; e.g., @hertwigDecisionsExperienceEffect2004], the probability triples are not described but must be explored by means of *sampling*. 

To provide a formal definition of sampling in risky or uncertain choice, we make use of the mathematical concept of a random variable. 
Thus, if for each

$$\begin{equation}
\omega_{i} \in \Omega: p(\omega_{i}) \neq 1
\end{equation}$$

we refer to the respective prospect as *risky*. Here, risky describes the fact that if agents choose the prospect, one of the outcomes $\omega_{i}$ must occur, yet none of them occurs with certainty; rather, the outcomes occur according to the probability measure $P$. 
It is acceptable to speak of the occurrence of $\omega_{i}$ as the realization of a random variable iff the following conditions (a) and (b) are met: 

(a) The random variable $X$ is defined as the function 

$$\begin{equation}
X: (\Omega, \mathscr{F})  \mapsto (\Omega', \mathscr{F'})
\end{equation}$$

where the image $\Omega'$ is the set of possible values $X$ can take and $\mathscr{F'}$ is a set of subsets of $\Omega'$.
I.e., $X$ maps events $A_i \in \mathscr{F}$ to subsets $A'_i \in \mathscr{F'}$ such that the preimage of any $A'_i$ is again an event in $\mathscr{F}$ 

$$\begin{equation}
A'_i \in \mathscr{F'} \Rightarrow X^{-1}A'_i \in \mathscr{F}
\end{equation}$$

[cf. @georgiiStochasticsIntroductionProbability2008].

(b) The mapping $X: \Omega \mapsto \Omega'$ must be such that each outcome is mapped to itself, i.e., $x_i = X(\omega_i) = \omega_i$ for all $\omega_i \in \Omega$. 

Given conditions (a) and (b), we denote any realization of a random variable defined on the triple $(\Omega, \mathscr{F}, P)$ as a *single sample* of the respective prospect, and any systematic approach to generating a sequence of single samples from $n \geq 2$ prospects as a sampling strategy [see also @hillsInformationSearchDecisions2010]. 
Because for a sufficiently large number of single samples from a given prospect the relative frequencies of the $\omega_{i}$ approximate their probabilities $p_i$, sampling in principle allows agents to explore a prospect's probability space. 
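
As a minimal illustration of this point (not part of the simulations reported below; the outcomes and probabilities are arbitrary), relative frequencies of single samples from a hypothetical two-outcome prospect approach the generating probabilities:

```{r eval = FALSE}
# illustration only: relative frequencies approximate the outcome probabilities
set.seed(1)
outcomes <- c(2, 14)  # hypothetical risky prospect
probs <- c(.8, .2)
single_samples <- sample(outcomes, size = 1000, replace = TRUE, prob = probs)
prop.table(table(single_samples))  # close to .8 and .2
```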

So far, we have used the probability triple of a prospect and conditions (a) and (b) solely to provide a probability-theoretic definition of a single sample and of sequences thereof.
However, since the decision literature often treats the (stochastic) occurrence of the raw outcomes in $\Omega$ as the event of interest, it seems justified to say that the stochastic model formulated under (a), with the restriction (b), is widely, albeit implicitly, assumed to underlie agents' evaluation processes. 
We do not contend that this model is inadequate; rather, it is empirically warranted and mathematically convenient, not least because of the measurable nature of the monetary outcomes in $\Omega$.
However, in line with the literature that deviates from utility models and their derivatives [@heOntologyDecisionModels2020, for an ontology of decision models], we propose that the above restricted model is not the only one suitable for describing the random processes agents are interested in when building a preference between risky prospects by sampling.

We can construct an alternative stochastic sampling model (hereafter SSM) underlying DfE between risky prospects by starting from the assumption that agents do not make random choices but base their decisions on the information provided by the prospects, which is readily described by their probability triples. 
Thus, we may start rather abstractly by defining a decision variable $D$ 

$$\begin{equation}
D := f((\Omega, \mathscr{F}, P)_j)
\end{equation}$$

first without any further assumptions about which information of the probability triple $f$ utilizes and how.
Although many models for $f$ have been proposed and tested in the decision literature, in DfE we can restrict the SSM to the case where decisions are based on sequences of single samples generated from the prospect triples.
Since we have defined the stochastic mechanism for generating such sequences, we write

$$\begin{equation}
D := f((X: (\Omega, \mathscr{F}) \mapsto (\Omega', \mathscr{F'}))_j)
\end{equation}$$

where $\Omega_j = \Omega'_j$. 

For $n$ prospects, we write 

$$\begin{equation}
D := f(X_1, ..., X_j, ..., X_n)
\end{equation}$$

In summary, we have defined the decision variable $D$ as a function of the random variables associated with the prospects' probability spaces. 
As such, $f$ is allowed to operate on any quantitative measure related to these random variables.
We have already pointed out that decision theories will differ in the form of $f$ and the measures, or moments, it utilizes, and we take the stance that these choices should be informed by the theory and data of the psychological and other sciences. 
What do these choices mean? 
We think they reflect assumptions about the kind of information agents process and the way they process it, notwithstanding the question of whether they are capable of doing so.   
In the following section, we show how different processing assumptions for DfE, outlined by Hills and Hertwig [-@hillsInformationSearchDecisions2010], can be captured by the SSM.   

### Formalizing sampling and decision policies in the SSM

Hills and Hertwig [-@hillsInformationSearchDecisions2010] discussed a potential link between the sampling and decision policies of agents, i.e., a systematic relation between the pattern according to which sequences of single samples are generated and the mechanism of integrating and evaluating these sample sequences to arrive at a decision. 
Specifically, the authors suppose that frequent switching between prospects in the sampling phase translates into a round-wise decision strategy, in which the evaluation process is separated into rounds of ordinal comparisons between single samples (or small chunks thereof), such that the unit of the final evaluation is round wins rather than raw outcomes.   
In contrast, infrequent switching is supposed to translate into a decision strategy in which only a single comparison of the summaries across all single samples of the respective prospects is conducted [see Figure 1, @hillsInformationSearchDecisions2010].
The authors assume that these distinct sampling and decision processes lead to differences in decision behavior and may serve as an additional explanation for the many empirical protocols indicating that DfE differ from DfD [@barronSmallFeedbackbasedDecisions2003; @weberPredictingRiskSensitivity2004; @hertwigDecisionsExperienceEffect2004; @wulffMetaanalyticReviewTwo2018, for a meta-analytic review]. 

In the following, we consider the case of choices between two prospects and show how the assumptions of Hills and Hertwig [-@hillsInformationSearchDecisions2010] on specific sampling and decision policies can be integrated into the SSM.
We will demonstrate how this allows us to formulate testable predictions about the decision behavior that results from these processes, or about the functional forms of algebraic models commonly used to describe it. 
We denote the random variables of the two respective prospects as $X$ and $Y$. 
By definition, we require the decision variable $D$ to be a measure of the evidence for one prospect over the other.
$f$ should thus map the comparisons of $X$ and $Y$ (one for the summary strategy and multiple for the round-wise strategy) to a measure space that enables us to quantify the accumulated evidence for both prospects.  
Since in Hills and Hertwig [-@hillsInformationSearchDecisions2010] accumulated evidence is described in units of won comparisons, the respective measure space contains the numbers $\{0, 1\}$, indicating the possible outcomes of a single comparison.
I.e., $f$ is a function that maps the possible outcomes of a comparison of quantitative measures related to $X$ and $Y$, hereafter the sampling space $S = \mathbb{R}$, to the measure space $S' = \{0,1\}$.
As the authors assume that the comparisons of prospects are based on sample means, we define $S$ as the set

$$\begin{equation}
S = \left\{\frac{\overline{X}} {\overline{Y}}\right\}^{\mathbb{N}}
\end{equation}$$

where $\mathbb{N}$ denotes the number of comparisons between prospects. 
To indicate that the comparison of prospects on the ordinal rather than the metric scale is of primary interest, we define the event space as a set of subsets of $S$ 

$$\begin{equation}
\mathscr{D} = \left\{\frac{\overline{X}}{\overline{Y}} > 1, \frac{\overline{X}}{\overline{Y}} \leq 1 \right\} 
\end{equation}$$

The decision variable $D$ is thus the measurable function 

$$\begin{equation}
D := f: (S, \mathscr{D}) \mapsto (S', \mathscr{D'})
\end{equation}$$

with the concrete mapping

$$\begin{equation}
\forall \, \frac{\overline{X}}{\overline{Y}} \in S : D\left(\frac{\overline{X}}{\overline{Y}}\right) =
  \begin{cases}
            1 & \text{if } \left(\frac{\overline{X}}{\overline{Y}} > 1\right) \in \mathscr{D} \\
            0 & \text{if } \left(\frac{\overline{X}}{\overline{Y}} \leq 1\right) \in \mathscr{D}
  \end{cases}
\end{equation}$$

...

# Method

## Test set

Under each condition, i.e., each strategy-parameter combination, all gambles are played by 100 synthetic agents.
We test a set of gambles in which one prospect contains a safe outcome and the other contains two risky outcomes (*safe-risky gambles*).
To this end, 60 gambles are sampled from an initial set of 10,000.
Both outcomes and probabilities are drawn from uniform distributions, ranging from 0 to 20 for outcomes and from .01 to .99 for the probabilities of the lower risky outcomes $p_{low}$.
The probabilities of the higher risky outcomes are $1-p_{low}$, respectively.
To exclude dominant prospects, safe outcomes fall between the two risky outcomes.
The table below contains the test set of 60 gambles.
Sampling of gambles was stratified, randomly drawing 20 gambles each with no rare outcome, an attractive rare outcome, and an unattractive rare outcome.
Risky outcomes are considered *"rare"* if their probability is $p < .2$ and *"attractive"* (*"unattractive"*) if they are higher (lower) than the safe outcome.
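
The chunk below sketches one way such a stratified set could be constructed; it is a hypothetical illustration (hence `eval = FALSE`), not the code that generated `data/gambles/sr_subset.csv`, and column names such as `o_low`, `o_high`, and `safe` are placeholders.

```{r eval = FALSE}
# hypothetical sketch of constructing a stratified safe-risky test set
set.seed(42)
n_init <- 10000

draw_1 <- runif(n_init, 0, 20)
draw_2 <- runif(n_init, 0, 20)

candidates <- tibble(
  o_low  = pmin(draw_1, draw_2),        # lower risky outcome
  o_high = pmax(draw_1, draw_2),        # higher risky outcome
  p_low  = runif(n_init, .01, .99),     # probability of the lower risky outcome
  p_high = 1 - p_low,                   # probability of the higher risky outcome
  safe   = runif(n_init, o_low, o_high) # safe outcome between the risky outcomes
) %>%
  mutate(rare = case_when(p_high < .2 ~ "attractive",   # rare outcome above the safe outcome
                          p_low  < .2 ~ "unattractive", # rare outcome below the safe outcome
                          TRUE        ~ "none"))

# stratified draw: 20 gambles per rare-outcome category
test_set <- candidates %>%
  group_by(rare) %>%
  slice_sample(n = 20) %>%
  ungroup()
```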

```{r message=FALSE}
gambles <- read_csv("data/gambles/sr_subset.csv")
gambles %>% kable()
```

## Model Parameters

**Switching probability** $s$ is the probability with which agents draw the next single sample from the prospect they did not draw their most recent single sample from.
$s$ is varied from .1 to 1 in increments of .1.

The **boundary type** is either the minimum value any prospect's sample statistic must reach (absolute) or the minimum value for the difference of these statistics (relative).
Sample statistics are sums over outcomes (comprehensive strategy) and sums over wins (piecewise strategy), respectively.

For comprehensive integration, the **boundary value** $a$ is varied from 15 to 75 in increments of 15.
For piecewise integration, $a$ is varied from 1 to 5 in increments of 1.
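
To illustrate how these parameters interact, the following chunk sketches a single simulated trial under the piecewise (round-wise) strategy with a relative boundary.
It is a simplified, hypothetical sketch (hence `eval = FALSE`), not the simulation code that produced the choice data; in particular, the rule for closing a comparison round and the helper names are assumptions.

```{r eval = FALSE}
# hypothetical sketch: one trial, piecewise strategy, relative boundary
draw_outcome <- function(outcomes, probs) {
  outcomes[sample.int(length(outcomes), 1, prob = probs)]
}

simulate_trial_piecewise <- function(out_a, p_a, out_b, p_b, s, a) {
  evidence <- 0                      # round wins for A minus round wins for B
  round_a <- c(); round_b <- c()     # single samples of the current round
  current <- sample(c("A", "B"), 1)  # prospect attended first
  n_sample <- 0
  while (abs(evidence) < a) {
    n_sample <- n_sample + 1
    if (current == "A") {
      round_a <- c(round_a, draw_outcome(out_a, p_a))
    } else {
      round_b <- c(round_b, draw_outcome(out_b, p_b))
    }
    if (runif(1) < s) {              # switch with probability s
      # assumption: a round is closed once both prospects were sampled
      if (length(round_a) > 0 && length(round_b) > 0) {
        evidence <- evidence + sign(mean(round_a) - mean(round_b))
        round_a <- c(); round_b <- c()
      }
      current <- ifelse(current == "A", "B", "A")
    }
  }
  list(choice = ifelse(evidence > 0, "A", "B"), n_sample = n_sample)
}

# e.g., a risky prospect (2 with p = .8, 14 with p = .2) vs. a safe outcome of 6
simulate_trial_piecewise(out_a = c(2, 14), p_a = c(.8, .2),
                         out_b = 6, p_b = 1, s = .5, a = 3)
```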

```{r message=FALSE}
# read choice data 
cols <- list(.default = col_double(),
             strategy = col_factor(),
             boundary = col_factor(),
             gamble = col_factor(),
             rare = col_factor(),
             agent = col_factor(),
             choice = col_factor())
choices <- read_csv("data/choices/choices.csv", col_types = cols)
```

In sum, 2 (strategies) x 60 (gambles) x 100 (agents) x 100 (parameter combinations) = `r nrow(choices)` choices are simulated.

# Results

Because we are not interested in deviations from normative choice due to sampling artifacts (e.g., ceiling effects produced by low boundaries), we remove trials in which only one prospect was attended.
In addition, we use relative frequencies of sampled outcomes rather than 'a priori' probabilities to compare actual against normative choice behavior.

```{r}
# remove choices where prospects were not attended
choices <- choices %>%
  filter(!(is.na(a_ev_exp) | is.na(b_ev_exp)))
```

```{r eval = FALSE}
# remove choices where not all outcomes were sampled
choices <- choices %>% 
  filter(!(is.na(a_ev_exp) | is.na(b_ev_exp) | a_p1_exp == 0 | a_p2_exp == 0))
```

After removing the respective trials, we are left with `r nrow(choices)` choices.

## Sample Size

```{r message=FALSE}
samples <- choices %>% 
  group_by(strategy, s, boundary, a) %>% 
  summarise(n_med = median(n_sample))
samples_piecewise <- samples %>% filter(strategy == "piecewise")
samples_comprehensive <- samples %>% filter(strategy == "comprehensive")
```

The median sample sizes generated by different parameter combinations ranged from `r min(samples_piecewise$n_med)` to `r max(samples_piecewise$n_med)` for piecewise integration and `r min(samples_comprehensive$n_med)` to `r max(samples_comprehensive$n_med)` for comprehensive integration.

### Boundary type and boundary value (a)

As evidence is accumulated sequentially, relative boundaries and large boundary values naturally lead to larger sample sizes, irrespective of the integration strategy.

```{r message=FALSE}
group_med <- samples_piecewise %>%
  group_by(boundary, a) %>% 
  summarise(group_med = median(n_med)) # to get the median across all s values

samples_piecewise %>%
  ggplot(aes(a, n_med, color = a)) + 
  geom_jitter(alpha = .5, size = 2) +
  geom_point(data = group_med, aes(y = group_med), size = 3) +
  facet_wrap(~boundary) + 
  scale_color_viridis() + 
  labs(title = "Piecewise Integration",
       x ="a", 
       y="Sample Size", 
       col="a") + 
  theme_minimal()
```

```{r message=FALSE}
group_med <- samples_comprehensive %>%
  group_by(boundary, a) %>% 
  summarise(group_med = median(n_med)) 

samples_comprehensive %>%
  ggplot(aes(a, n_med, color = a)) + 
  geom_jitter(alpha = .5, size = 2) +
  geom_point(data = group_med, aes(y = group_med), size = 3) +
  facet_wrap(~boundary) + 
  scale_color_viridis() + 
  labs(title = "Comprehensive Integration",
       x ="a", 
       y="Sample Size", 
       col="a") + 
  theme_minimal()
```

### Switching probability (s)

For piecewise integration, there is an inverse relationship between switching probability and sample size.
I.e., the lower $s$, the less frequently prospects are compared and, thus, the larger the sample sizes needed to approach the boundaries.
This effect is particularly pronounced for low switching probabilities, such that the increase in sample size accelerates as switching probability decreases.

```{r message=FALSE}
group_med <- samples_piecewise %>%
  group_by(boundary, s) %>% 
  summarise(group_med = median(n_med)) # to get the median across all a values

samples_piecewise %>%
  ggplot(aes(s, n_med, color = s)) + 
  geom_jitter(alpha = .5, size = 2) +
  geom_point(data = group_med, aes(y = group_med), size = 3) +
  facet_wrap(~boundary) + 
  scale_color_viridis() + 
  labs(title = "Piecewise Integration",
       x ="s", 
       y="Sample Size", 
       col="s") + 
  theme_minimal()
```

For comprehensive integration, boundary types differ in the effects of switching probability.
For absolute boundaries, switching probability has no apparent effect on sample size as the distance of a given prospect to its absolute boundary is not changed by switching to (and sampling from) the other prospect.
For relative boundaries, however, sample sizes increase with switching probability.

```{r message=FALSE}
group_med <- samples_comprehensive %>%
  group_by(boundary, s) %>% 
  summarise(group_med = median(n_med)) # to get the median across all a values

samples_comprehensive %>%
  ggplot(aes(s, n_med, color = s)) + 
  geom_jitter(alpha = .5, size = 2) +
  geom_point(data = group_med, aes(y = group_med), size = 3) +
  facet_wrap(~boundary) + 
  scale_color_viridis() + 
  labs(title = "Comprehensive Integration",
       x ="s",
       y = "Sample Size", 
       col="s") + 
  theme_minimal()
```

## Choice Behavior

Below, extending Hills and Hertwig [-@hillsInformationSearchDecisions2010], we investigate the interplay of integration strategies, gamble features, and model parameters in their effects on choice behavior in general and their contribution to the underweighting of rare events in particular.
We apply two definitions of underweighting of rare events: Considering false response rates, we define underweighting such that the rarity of an attractive (unattractive) outcome leads agents to choose the safe (risky) prospect although the risky (safe) prospect has the higher expected value.

```{r message=FALSE}
fr_rates <- choices %>% 
  mutate(ev_ratio_exp = round(a_ev_exp/b_ev_exp, 2), 
         norm = case_when(ev_ratio_exp > 1 ~ "A", ev_ratio_exp < 1 ~ "B")) %>% 
  filter(!is.na(norm)) %>% # exclude trials with normative indifferent options
  group_by(strategy, s, boundary, a, rare, norm, choice) %>% # group correct and incorrect responses
  summarise(n = n()) %>% # absolute numbers 
  mutate(rate = round(n/sum(n), 2), # response rates 
         type = case_when(norm == "A" & choice == "B" ~ "false safe", norm == "B" & choice == "A" ~ "false risky")) %>% 
  filter(!is.na(type)) # remove correct responses
```

Considering the parameters of Prelec's [-@prelecProbabilityWeightingFunction1998] implementation of the weighting function [CPT; cf. @tverskyAdvancesProspectTheory1992], underweighting is reflected by decision weights estimated to be smaller than the corresponding objective probabilities.
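In the two-parameter form used to compute the weighting curves below, the decision weight of an objective probability $p$ is

$$\begin{equation}
w(p) = e^{-\delta(-\ln p)^\gamma} \quad \text{with} \quad \delta, \gamma > 0 \; ,
\end{equation}$$

such that, for small $p$, estimates with $w(p) < p$ indicate underweighting of rare events.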

### False Response Rates

```{r message=FALSE}
fr_rates_piecewise <- fr_rates %>% filter(strategy == "piecewise")
fr_rates_comprehensive <- fr_rates %>% filter(strategy == "comprehensive")
```

The false response rates generated by different parameter combinations ranged from `r min(fr_rates_piecewise$rate)` to `r max(fr_rates_piecewise$rate)` for piecewise integration and from `r min(fr_rates_comprehensive$rate)` to `r max(fr_rates_comprehensive$rate)` for comprehensive integration.
However, false response rates vary considerably as a function of rare events, indicating that their presence and attractiveness are major determinants of false response rates.

```{r message=FALSE}
fr_rates %>% 
  group_by(strategy, boundary, rare) %>% 
  summarise(min = min(rate),
            max = max(rate)) %>% 
  kable()
```

The heatmaps below show the false response rates for all strategy-parameter combinations.
Consistent with our (somewhat rough) definition of underweighting, the rate of false risky responses is generally higher if the unattractive outcome of the risky prospect is rare (top panel).
Conversely, if the attractive outcome of the risky prospect is rare, the rate of false safe responses is generally higher (bottom panel).
As indicated by the larger range of false response rates, the effects of rare events are considerably larger for piecewise integration.

```{r message=FALSE}
fr_rates %>% 
  filter(strategy == "piecewise", boundary == "absolute") %>% 
  ggplot(aes(a, s, fill = rate)) + 
  facet_grid(type ~ fct_relevel(rare, "attractive", "none", "unattractive"), switch = "y") +
  geom_tile(colour="white", size=0.25) + 
  scale_x_continuous(expand=c(0,0), breaks = seq(1, 5, 1)) +
  scale_y_continuous(expand=c(0,0), breaks = seq(.1, 1, .1)) +
  scale_fill_viridis() + 
  labs(title = "Piecewise Integration | Absolute Boundary",
       x = "a", 
       y= "s", 
       fill = "% False Responses") + 
  theme_minimal() 
```

```{r message=FALSE}
fr_rates %>% 
  filter(strategy == "piecewise", boundary == "relative") %>% 
  ggplot(aes(a, s, fill = rate)) + 
  facet_grid(type ~ fct_relevel(rare, "attractive", "none", "unattractive"), switch = "y") +
  geom_tile(colour="white", size=0.25) + 
  scale_x_continuous(expand=c(0,0), breaks = seq(1, 5, 1)) +
  scale_y_continuous(expand=c(0,0), breaks = seq(.1, 1, .1)) +
  scale_fill_viridis() + 
  labs(title = "Piecewise Integration | Relative Boundary",
       x = "a", 
       y= "s", 
       fill = "% False Responses") + 
  theme_minimal() 
```

```{r message=FALSE}
fr_rates %>% 
  filter(strategy == "comprehensive", boundary == "absolute") %>% 
  ggplot(aes(a, s, fill = rate)) + 
  facet_grid(type ~ fct_relevel(rare, "attractive", "none", "unattractive"), switch = "y") +
  geom_tile(colour="white", size=0.25) + 
  scale_x_continuous(expand=c(0,0), breaks = seq(15, 75, 15)) +
  scale_y_continuous(expand=c(0,0), breaks = seq(.1, 1, .1)) +
  scale_fill_viridis() + 
  labs(title = "Comprehensive Integration | Absolute Boundary",
       x = "a", 
       y= "s", 
       fill = "% False Responses") + 
  theme_minimal() 
```

```{r message=FALSE}
fr_rates %>% 
  filter(strategy == "comprehensive", boundary == "relative") %>% 
  ggplot(aes(a, s, fill = rate)) + 
  facet_grid(type ~ fct_relevel(rare, "attractive", "none", "unattractive"), switch = "y") +
  geom_tile(colour="white", size=0.25) + 
  scale_x_continuous(expand=c(0,0), breaks = seq(15, 75, 15)) +
  scale_y_continuous(expand=c(0,0), breaks = seq(.1, 1, .1)) +
  scale_fill_viridis() + 
  labs(title = "Comprehensive Integration | Relative Boundary",
       x = "a", 
       y= "s", 
       fill = "% False Responses") + 
  theme_minimal() 
```

#### Switching Probability (s) and Boundary Value (a)

Since for both piecewise and comprehensive integration the differences between boundary types are rather minor and a matter of magnitude rather than of qualitative pattern, the remaining analyses of false response rates are summarized across absolute and relative boundaries.

Below, the $s$ and $a$ parameters are considered as additional sources of variation in the false response pattern, above and beyond the interplay of integration strategies and the rarity and attractiveness of outcomes.

```{r message=FALSE}
fr_rates %>% 
  filter(strategy == "piecewise") %>% 
  ggplot(aes(s, rate, color = a)) + 
  facet_grid(type ~ fct_relevel(rare, "attractive", "none", "unattractive"), switch = "y") +
  geom_jitter(size = 2) + 
  scale_x_continuous(breaks = seq(0, 1, .1)) +
  scale_y_continuous(breaks = seq(0, 1, .1)) +
  scale_color_viridis() + 
  labs(title = "Piecewise Integration",
       x = "s", 
       y= "% False Responses", 
       color = "a") + 
  theme_minimal() 
```

```{r message=FALSE}
fr_rates %>% 
  filter(strategy == "comprehensive") %>% 
  ggplot(aes(s, rate, color = a)) + 
  facet_grid(type ~ fct_relevel(rare, "attractive", "none", "unattractive"), switch = "y") +
  geom_jitter(size = 2) + 
  scale_x_continuous(breaks = seq(0, 1, .1)) +
  scale_y_continuous(breaks = seq(0, 1, .1)) +
  scale_color_viridis() + 
  labs(title = "Comprehensive Integration",
       x = "s", 
       y= "% False Responses", 
       color = "a") + 
  theme_minimal() 
```

For piecewise integration, switching probability is naturally related to the size of the samples on which the round-wise comparisons of prospects are based, with low values of $s$ indicating large samples and vice versa.
Accordingly, switching probability is positively related to false response rates.
I.e., the larger the switching probability, the smaller the round-wise sample size and, hence, the probability of experiencing a rare event within a given round.
Because round-wise comparisons are independent of each other and binomial distributions within a given round are skewed for small samples and small outcome probabilities [@kolmogorovFoundationsTheoryProbability1950], increasing boundary values do not reverse but rather amplify this relation.
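To illustrate: assuming a round-wise sample size of 3 per prospect and a rare-outcome probability of $p = .1$, the probability of experiencing the rare outcome at least once within a round is $1 - .9^3 \approx .27$, compared to $1 - .9^{10} \approx .65$ for a round-wise sample size of 10.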

For comprehensive integration, switching probability is negatively related to false response rates, i.e., an increase in $s$ is associated with decreasing false response rates.
This relation, however, may be the result of an artificial interaction between the $s$ and $a$ parameters.
More precisely, in the current algorithmic implementation of sampling with a comprehensive integration mechanism, decreasing switching probabilities cause comparisons of prospects to be based on increasingly unequal sample sizes immediately after switching prospects.
Consequently, reaching (low) boundaries is more a function of switching probability and the associated sample sizes than of actual evidence for one prospect over the other.

### Cumulative Prospect Theory

In the following, we examine the possible relations between the parameters of the *choice-generating* sampling models and the *choice-describing* cumulative prospect theory.

For each distinct strategy-parameter combination, we ran 20 chains of 40,000 iterations each, after a warm-up period of 1000 samples.
To reduce potential autocorrelation during the sampling process, we only kept every 20th sample (thinning).
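Assuming the 1,000 warm-up iterations are included in the 40,000 iterations per chain and are discarded before thinning, each chain contributes (40,000 - 1,000) / 20 = 1,950 retained samples, i.e., 39,000 posterior samples per parameter for each strategy-parameter combination.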

```{r}
# read CPT data
cols <- list(.default = col_double(),
             strategy = col_factor(),
             boundary = col_factor(),
             parameter = col_factor())
estimates <- read_csv("data/estimates/estimates_cpt_pooled.csv", col_types = cols)
```

#### Convergence

```{r}
gel_92 <- max(estimates$Rhat) # get largest scale reduction factor (Gelman & Rubin, 1992) 
```

The potential scale reduction factor was $\hat{R} \leq$ `r round(gel_92, 3)` for all estimates, indicating good convergence.

#### Piecewise Integration

```{r}
# generate subset of all strategy-parameter combinations (rows) and their parameters (columns)
curves_cpt <- estimates %>% 
  select(strategy, s, boundary, a, parameter, mean) %>% 
  pivot_wider(names_from = parameter, values_from = mean)
```

##### Weighting function w(p)

We start by plotting the weighting curves for all parameter combinations under piecewise integration.

```{r}

cpt_curves_piecewise <- curves_cpt %>% 
  filter(strategy == "piecewise") %>% 
  expand_grid(p = seq(0, 1, .1)) %>% # add vector of objective probabilities
  mutate(w = round(exp(-delta*(-log(p))^gamma), 2)) # compute decision weights (cf. Prelec, 1998)

# all strategy-parameter combinations 

cpt_curves_piecewise %>% 
  ggplot(aes(p, w)) + 
  geom_path(size = .5) +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  labs(title = "Piecewise Integration: Weighting functions",
       x = "p", 
       y= "w(p)") + 
  theme_minimal() 
```

```{r}
cpt_curves_piecewise %>% 
  ggplot(aes(p, w)) + 
  geom_path() +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  facet_wrap(~a) + 
  labs(title = "Piecewise Integration: Weighting functions",
       x = "p",
       y = "w(p)") + 
  theme_minimal() 
```

```{r}
cpt_curves_piecewise %>% 
  ggplot(aes(p, w, color = s)) + 
  geom_path() +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  labs(title = "Piecewise Integration: Weighting functions",
       x = "p", 
       y= "w(p)", 
       color = "Switching Probability") + 
  scale_color_viridis() +
  theme_minimal() 
```

```{r}
cpt_curves_piecewise %>% 
  ggplot(aes(p, w, color = s)) + 
  geom_path() +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  facet_wrap(~a) + 
  labs(title = "Piecewise Integration: Weighting functions",
       x = "p",
       y= "w(p)",
       color = "Switching Probability") + 
  scale_color_viridis() +
  theme_minimal() 
```

##### Value function v(x)
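
For the all-gain outcomes of the test set, the value function computed in the chunks below takes the one-parameter power form

$$\begin{equation}
v(x) = x^\alpha \; , \quad x \geq 0 \; ,
\end{equation}$$

where $\alpha < 1$ implies a concave and $\alpha > 1$ a convex valuation of objective outcomes.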

```{r}

cpt_curves_piecewise <- curves_cpt %>% 
  filter(strategy == "piecewise") %>% 
  expand_grid(x = seq(0, 20, 2)) %>% # add vector of objective outcomes
  mutate(v = round(x^alpha, 2)) # compute subjective values (power value function)

# all strategy-parameter combinations 

cpt_curves_piecewise %>% 
  ggplot(aes(x, v)) + 
  geom_path(size = .5) +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  labs(title = "Piecewise Integration: Value functions",
       x = "p", 
       y= "w(p)") + 
  theme_minimal() 
```

```{r}
cpt_curves_piecewise %>% 
  ggplot(aes(x, v, color = s)) + 
  geom_path(size = .5) +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  labs(title = "Piecewise Integration: Value functions",
       x = "p", 
       y= "w(p)") + 
  scale_color_viridis() + 
  theme_minimal() 
```

```{r}
cpt_curves_piecewise %>% 
  ggplot(aes(x, v, color = s)) + 
  geom_path(size = .5) +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  facet_wrap(~a) + 
  labs(title = "Piecewise Integration: Value functions",
       x = "p", 
       y= "w(p)") + 
  scale_color_viridis() + 
  theme_minimal() 
```

649
#### Comprehensive Integration

##### Weighting function w(p)

We start by plotting the weighting curves for all parameter combinations under comprehensive integration.

```{r}

cpt_curves_comprehensive <- curves_cpt %>% 
  filter(strategy == "comprehensive") %>% 
  expand_grid(p = seq(0, 1, .1)) %>% # add vector of objective probabilities
  mutate(w = round(exp(-delta*(-log(p))^gamma), 2)) # compute decision weights (cf. Prelec, 1998)

# all strategy-parameter combinations 

cpt_curves_comprehensive %>% 
  ggplot(aes(p, w)) + 
  geom_path(size = .5) +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  labs(title = "Comprehensive Integration: Weighting functions",
       x = "p", 
       y= "w(p)") + 
  theme_minimal() 
```

```{r}
cpt_curves_comprehensive %>% 
  ggplot(aes(p, w)) + 
  geom_path(size = .5) +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  labs(title = "Comprehensive Integration: Weighting functions",
       x = "p", 
       y = "w(p)") + 
  facet_wrap(~a) + 
  theme_minimal() 
```

```{r}
cpt_curves_comprehensive %>% 
  ggplot(aes(p, w, color = s)) + 
  geom_path() +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  labs(title = "Comprehensive Integration: Weighting functions",
       x = "p", 
       y = "w(p)", 
       color = "Switching Probability") + 
  scale_color_viridis() +
  theme_minimal() 
```

```{r}
cpt_curves_comprehensive %>% 
  ggplot(aes(p, w, color = s)) + 
  geom_path() +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  facet_wrap(~a) + 
  labs(title = "Comprehensive Integration: Weighting functions",
       x = "p",
       y = "w(p)",
       color = "Switching Probability") + 
  scale_color_viridis() +
  theme_minimal() 
```

```{r}
cpt_curves_comprehensive %>% 
  filter(s >= .7) %>% 
  ggplot(aes(p, w, color = s)) + 
  geom_path() +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  facet_wrap(~a) + 
  labs(title = "Comprehensive Integration: Weighting functions",
       x = "p",
       y = "w(p)",
       color = "Switching Probability") + 
  scale_color_viridis() +
  theme_minimal() 
```

##### Value function v(x)

```{r}

cpt_curves_comprehensive <- curves_cpt %>% 
  filter(strategy == "comprehensive") %>% 
  expand_grid(x = seq(0, 20, 2)) %>% # add vector of objective outcomes
  mutate(v = round(x^alpha, 2)) # compute subjective values (power value function)


# all strategy-parameter combinations 

cpt_curves_comprehensive %>% 
  ggplot(aes(x, v)) + 
  geom_path(size = .5) +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  labs(title = "Comprehensive Integration: Value functions",
       x = "p", 
       y= "w(p)") + 
  theme_minimal() 
```

```{r}
cpt_curves_comprehensive %>% 
  ggplot(aes(x, v)) + 
  geom_path(size = .5) +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  facet_wrap(~a) + 
  labs(title = "Comprehensive Integration: Value functions",
       x = "p", 
       y= "w(p)") + 
  theme_minimal() 
```

```{r}
cpt_curves_comprehensive %>% 
  ggplot(aes(x, v, color = s)) + 
  geom_path(size = .5) +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  labs(title = "Comprehensive Integration: Value functions",
       x = "p", 
       y= "w(p)") + 
  scale_color_viridis() + 
  theme_minimal() 
```

```{r}
cpt_curves_comprehensive %>% 
  ggplot(aes(x, v, color = s)) + 
  geom_path(size = .5) +
  geom_abline(intercept = 0, slope = 1, color = "red", size = 1) +
  facet_wrap(~a) + 
  labs(title = "Comprehensive Integration: Value functions",
       x = "p", 
       y= "w(p)") + 
  scale_color_viridis() + 
  theme_minimal() 
```

# Discussion 

# Conclusion

# References