Impaired adaptation of learning to contingency volatility in internalizing psychopathology

Christopher Gagne, Ondrej Zika, Peter Dayan, Sonia J Bishop

Abstract

Using a contingency volatility manipulation, we tested the hypothesis that difficulty adapting probabilistic decision-making to second-order uncertainty might reflect a core deficit that cuts across anxiety and depression and holds regardless of whether outcomes are aversive or involve reward gain or loss. We used bifactor modeling of internalizing symptoms to separate symptom variance common to both anxiety and depression from that unique to each. Across two experiments, we modeled performance on a probabilistic decision-making under volatility task using a hierarchical Bayesian framework. Elevated scores on the common internalizing factor, with high loadings across anxiety and depression items, were linked to impoverished adjustment of learning to volatility regardless of whether outcomes involved reward gain, electrical stimulation, or reward loss. In particular, high common factor scores were linked to dampened learning following better-than-expected outcomes in volatile environments. No such relationships were observed for anxiety- or depression-specific symptom factors.

Keywords: anxiety; computational psychiatry; decision making; depression; human; neuroscience; reinforcement learning; uncertainty.

Conflict of interest statement

CG, OZ, PD, SB: No competing interests declared

© 2020, Gagne et al.

Figures

Figure 1. Bifactor analysis of internalizing symptoms.
(a-b) Bifactor analysis of item-level scores from the STAI, BDI, MASQ, PSWQ, CESD, and EPQ-N (128 items in total) revealed a general ‘negative affect’ factor (x-axis) and two specific factors: one depression-specific (left panel, y-axis) and one anxiety-specific (right panel, y-axis). The initial bifactor analysis was conducted in a sample (n = 86) comprising participants diagnosed with MDD, participants diagnosed with GAD, healthy control participants and unselected community participants. The factor solution showed a good fit in a separate sample of participants (n = 199) recruited and tested online through UC Berkeley’s participant pool (x). Item loadings on a subset of questionnaires were used to calculate factor scores for a third set of participants recruited and tested online through Amazon’s Mechanical Turk (n = 147); see Experiment 2. It can be seen that both online samples show a good range of scores across the general and two specific factors that encompass the scores shown by patients with GAD and MDD. (c) Factor scores were correlated with summary scores for questionnaire scales and subscales to assess the construct validity of the latent factors. This was conducted using a combined dataset comprising data from both the exploratory (n = 86) and confirmatory (n = 199) factor analyses. Scores on the general factor correlated highly with all questionnaire summary scores, scores on the depression-specific factor correlated highly with measures of depression, especially anhedonic depression, and scores on the anxiety-specific factor correlated particularly highly with scores for the PSWQ. MASQAD = Mood and Anxiety Symptoms Questionnaire (anhedonic depression subscale); BDI = Beck Depression Inventory; CESD = Center for Epidemiologic Studies Depression Scale; STAIdep = Spielberger State-Trait Anxiety Inventory (depression subscale); STAIanx = Spielberger State-Trait Anxiety Inventory (anxiety subscale); EPQ-N = Eysenck Personality Questionnaire (Neuroticism subscale); PSWQ = Penn State Worry Questionnaire; MASQAA = Mood and Anxiety Symptoms Questionnaire (anxious arousal subscale); MDD = major depressive disorder; GAD = generalized anxiety disorder.
Figure 1—figure supplement 1. Scree plot for the eigenvalue decomposition of the covariance matrix of individual items from the battery of internalizing symptom measures.
Bifactor analysis was applied to the item-level scores from the STAI, BDI, MASQ, PSWQ, CESD, and EPQ-N (128 items in total) for participants in experiment 1. Prior to the estimation of the bifactor model, eigenvalue decomposition of the covariance matrix of individual items was used to inform the decision of how many factors to include in the model. A ‘scree’ plot for the eigenvalue decomposition of the covariance matrix is shown. The location of the elbow in this plot, along with the parallel analysis, which compares the sequence of eigenvalues from the data to the corresponding eigenvalues from a random normal matrix of equivalent size (Horn, 1965; Humphreys and Montanelli, 1975; Floyd and Widaman, 1995), suggests that there were three dimensions of symptom variation that were distinguishable from noise. STAI = Spielberger State-Trait Anxiety Inventory; BDI = Beck Depression Inventory; MASQ = Mood and Anxiety Symptoms Questionnaire; PSWQ = Penn State Worry Questionnaire; CESD = Center for Epidemiologic Studies Depression Scale; EPQ-N = Eysenck Personality Questionnaire (Neuroticism subscale); MDD = major depressive disorder; GAD = generalized anxiety disorder.
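For readers who want to see the factor-count check in code form, the following is a minimal sketch of Horn-style parallel analysis; it is not the authors' code, all variable names are illustrative, and `items` stands in for the participants-by-items response matrix.

```python
# Minimal sketch of parallel analysis (Horn, 1965): compare observed
# eigenvalues of the item covariance matrix against eigenvalues from random
# normal matrices of equivalent size, and keep only the leading factors that
# exceed the random reference.
import numpy as np

def parallel_analysis(items, n_random=100, percentile=95, seed=0):
    rng = np.random.default_rng(seed)
    n_obs, n_items = items.shape
    # Observed eigenvalues in decreasing order (the scree curve).
    obs = np.sort(np.linalg.eigvalsh(np.cov(items, rowvar=False)))[::-1]
    # Reference eigenvalues from random normal data of the same size.
    rand = np.empty((n_random, n_items))
    for i in range(n_random):
        noise = rng.standard_normal((n_obs, n_items))
        rand[i] = np.sort(np.linalg.eigvalsh(np.cov(noise, rowvar=False)))[::-1]
    threshold = np.percentile(rand, percentile, axis=0)
    # Count leading observed eigenvalues that exceed the random reference.
    exceeds = obs > threshold
    n_factors = int(np.argmin(exceeds)) if not exceeds.all() else n_items
    return obs, threshold, n_factors

# Toy demonstration with simulated data (86 "participants", 128 "items").
# Note: rank-matched parallel analysis can suggest one or two extra minor
# factors when strong factors are present, which is why the scree elbow is
# inspected alongside it (as in the caption above).
rng = np.random.default_rng(1)
latent = rng.standard_normal((86, 3))
items = latent @ rng.standard_normal((3, 128)) + rng.standard_normal((86, 128))
_, _, k = parallel_analysis(items)
print("number of factors suggested by parallel analysis:", k)
```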
Figure 1—figure supplement 2. Correlation matrices for internalizing questionnaire scales and latent factors from the bifactor analysis.
(a) We calculated summary scores for each participant for each of the standardized questionnaires we administered. All participants who completed the full set of questionnaires were included: this comprised both participants in experiment 1 (n = 86) and participants whose data was used for the confirmatory factor analysis (n = 199). Full scale scores were used for the Beck Depression Inventory (BDI), the Center for Epidemiologic Studies Depression Scale (CESD), and the Penn State Worry Questionnaire (PSWQ), and subscale scores were used for the Spielberger Trait Anxiety Inventory (anxiety and depression subscales: STAIanx, STAIdep), the Eysenck Personality Questionnaire (neuroticism subscale: EPQ-N) and the Mood and Anxiety Symptoms Questionnaire (anhedonic depression and anxious arousal subscales: MASQ-AD, MASQ-AA). Participants’ scores for these measures were correlated with their scores for the three latent factors (G = general factor; F1 = depression-specific factor; F2 = anxiety-specific factor). The resultant correlations are given in panel (a) (see Materials and methods for further details). (b) Here, we regressed variance explainable by general factor scores out of each (sub)scale; participants’ residual scores were then correlated with their latent factor scores as well as with their original scores on the questionnaire scales (i.e. without variance explained by the general factor removed). The magnitude of the correlation between participants’ residual scores for a given scale and their original, non-residualized, scores for that scale reveals how much variance in scores for that measure cannot be explained by the general factor. These correlations were highest for the MASQ-anhedonia subscale (MASQ-AD; r = 0.74) and the Penn State Worry Questionnaire (PSWQ; r = 0.78), with over 50% of score variance (the square of the correlation) not being captured by the general factor. Nearly all of this residual variance could be explained by scores on the specific factors. MASQ-AD residual scores (MASQAD_residG) and scores on the depression-specific factor (F1) showed a correlation of r = 0.95; PSWQ residual scores (PSWQ_residG) and scores on the anxiety-specific factor (F2) showed a correlation of r = 0.96. These correlational results speak to the content validity of the three latent factors, with the general factor explaining variance across both measures of anxiety and depression, the anxiety-specific factor capturing additional variance unique to anxiety measures and the depression-specific factor capturing additional variance unique to depression measures.
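The residualization step described above amounts to an ordinary least-squares regression. The sketch below uses hypothetical arrays (`scale`, `g`, `f_specific`) in place of the study data.

```python
# Sketch of the residualization described above: remove variance in a
# questionnaire (sub)scale that is explainable by general factor scores,
# then correlate the residuals with a specific-factor score.
import numpy as np

def residualize(scale, g):
    """Return the part of `scale` not linearly explained by `g`."""
    X = np.column_stack([np.ones_like(g), g])        # intercept + general factor
    beta, *_ = np.linalg.lstsq(X, scale, rcond=None)
    return scale - X @ beta                           # residual scores

rng = np.random.default_rng(0)
g = rng.standard_normal(285)                          # general factor scores
f_specific = rng.standard_normal(285)                 # e.g. anxiety-specific factor
scale = 0.7 * g + 0.6 * f_specific + 0.3 * rng.standard_normal(285)  # e.g. a worry scale

resid = residualize(scale, g)
print("r(residual, specific factor):", np.corrcoef(resid, f_specific)[0, 1])
print("r(residual, original scale):", np.corrcoef(resid, scale)[0, 1])
```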
Figure 2. Task.
(a) On each trial, participants chose between two shapes. One of the two shapes led to receipt of shock or reward on each trial, the nature of the outcome depending on the version of the task. The magnitude of the potential outcome was shown as a number inside each shape and corresponded to the size of the reward in the reward version of the task or the intensity of the electric shock in the aversive version of the task. (b) Within each task, trials were organized into two 90-trial blocks. During the stable block, one shape had a 75% probability of resulting in reward or shock receipt; the other shape resulted in shock or reward receipt on the remaining trials. During the volatile block, the shape with the higher probability (80%) of resulting in shock or reward receipt switched every 20 trials. Participants were instructed to consider the magnitude of the potential outcome, shown as a number inside each shape, as well as the probability that the outcome would occur if the shape was chosen.
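A schedule with the structure described here can be generated in a few lines of code. The sketch below is illustrative only: the probabilities and reversal spacing follow the caption, but the outcome magnitudes are drawn at random and do not reproduce the paper's actual magnitude schedule.

```python
# Illustrative generator for the contingency schedule described above:
# a 90-trial stable block (one shape receives the outcome with p = 0.75) and
# a 90-trial volatile block (p = 0.80, with the high-probability shape
# switching every 20 trials). Magnitudes are random placeholders.
import numpy as np

def make_block(n_trials, p_high, switch_every=None, seed=0):
    rng = np.random.default_rng(seed)
    outcome_on_shape1 = np.zeros(n_trials, dtype=int)
    high_shape = 0                                   # shape currently holding p_high
    for t in range(n_trials):
        if switch_every and t > 0 and t % switch_every == 0:
            high_shape = 1 - high_shape              # reversal in the volatile block
        p_shape1 = p_high if high_shape == 0 else 1 - p_high
        outcome_on_shape1[t] = rng.random() < p_shape1
    magnitudes = rng.integers(1, 100, size=(n_trials, 2))  # numbers shown inside shapes
    return outcome_on_shape1, magnitudes

stable_outcomes, stable_mags = make_block(90, p_high=0.75)
volatile_outcomes, volatile_mags = make_block(90, p_high=0.80, switch_every=20, seed=1)
print("stable block: shape 1 received the outcome on a proportion of",
      stable_outcomes.mean().round(2), "of trials")
```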
Figure 3. Cross-group results from experiment 1 for effects of block type (volatile, stable), task version (reward, aversive), and relative outcome value (good, bad) on learning rate (n = 86).
(a) This panel shows the posterior means along with the 95% highest posterior density intervals (HDI) for the group means (μs) for each learning rate component (i.e. for baseline learning rate and the change in learning rate as a function of each within-subject factor and their two-way interactions). The 95% posterior intervals excluded zero for the effect of block type upon learning rate (i.e. the difference in learning rate for the volatile versus stable task blocks, αvolatile−stable). This was also true for the effect of task version, that is, whether outcomes entailed reward gain or electrical stimulation (αreward−aversive), and for the effect of relative outcome value, that is, whether learning followed a relatively good (reward or no stimulation) or relatively bad (stimulation or no reward) outcome (αgood−bad). Participants showed higher learning rates during the volatile block than the stable block, during the aversive task than the reward task, and on trials following good versus bad outcomes. None of the two-way interactions were statistically credible, that is, the 95% posterior interval included zero. (b) In this panel, the learning rate components are combined to illustrate how learning rates changed across conditions. The posterior mean learning rate for individual participants (small dots) and the group posterior mean learning rate (large dots, error bars represent the associated posterior standard deviation) are given for each of the eight conditions; these values were calculated from the posterior distributions of the learning rate components (αbaseline, αvolatile−stable, etc.) and the group means (μs).
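To make the notation concrete, the sketch below implements a delta-rule learner whose learning rate is assembled from a baseline plus condition components (αvolatile−stable, αreward−aversive, αgood−bad). The additive composition with ±1 contrast codes, the 0.5 scaling of each difference component, and the logistic link are assumptions of this sketch, not a verbatim reproduction of the paper's parameterization.

```python
# Minimal sketch of a composite learning rate feeding a delta-rule update of
# the estimated outcome probability. Component values are illustrative.
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def composite_learning_rate(c, volatile, reward, good):
    """c: dict of unconstrained components; volatile/reward/good are +/-1
    contrast codes for block type, task version and relative outcome value."""
    x = (c["baseline"]
         + 0.5 * c["volatile_stable"] * volatile
         + 0.5 * c["reward_aversive"] * reward
         + 0.5 * c["good_bad"] * good)
    return logistic(x)

def delta_rule_update(p_est, outcome, alpha):
    """Update the estimated probability that the chosen shape yields the outcome."""
    return p_est + alpha * (outcome - p_est)

# Example: a volatile-block trial in the aversive task, following a
# relatively good outcome (no shock).
c = {"baseline": -1.0, "volatile_stable": 0.4, "reward_aversive": -0.3, "good_bad": 0.5}
alpha = composite_learning_rate(c, volatile=+1, reward=-1, good=+1)
p_new = delta_rule_update(p_est=0.5, outcome=1, alpha=alpha)
print(f"learning rate = {alpha:.3f}, updated probability estimate = {p_new:.3f}")
```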
Figure 4. Experiment 1: Effect of general factor scores on learning rate (in-lab sample, n = 86).
Panel (a) shows posterior means and 95% highest posterior density intervals (HDI) for the effect of general factor scores (βg) on each of the learning rate components. General factor scores credibly modulated the extent to which learning rate varied between the stable and volatile task blocks (αvolatile−stable; βg = −0.18, 95%-HDI = [−0.32, −0.05]), the effect of relative outcome value on learning rate (αgood−bad; βg = −0.21, 95%-HDI = [−0.37, −0.04]) and the interaction of these factors upon learning rate (α(good−bad)x(volatile−stable); βg = −0.19, 95%-HDI = [−0.3, −0.1]). In each case, the 95% HDI did not include 0. (b) Here, we illustrate learning rate as a function of each within-subject factor and high versus low scores on the general factor of internalizing symptoms. To do this, we calculated the expected learning rate for each within-subject condition associated with scores one standard deviation above (‘high’, shown in red) or below (‘low’, shown in blue) the mean on the general factor. It can be seen that the largest difference in learning rates for participants with high versus low general factor scores is on trials following good outcomes during volatile task blocks. This effect is observed across both reward and aversive task versions. Small data points represent posterior mean parameter estimates for individual participants. Large points represent the posterior mean learning rates expected for participants with scores one standard deviation above or below the mean on the general factor. Error bars represent the posterior standard deviation for these expected learning rates.
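The high-versus-low comparison in panel (b) can be sketched by shifting each learning rate component by βg times the factor score and recombining. In the sketch below, the βg values are those reported in this caption; the group means, the contrast scaling and the logistic link are illustrative assumptions carried over from the earlier sketch.

```python
# Sketch of how general-factor effects (beta_g) translate into expected
# learning rates for participants scoring +1 SD versus -1 SD on the general
# factor. Group means (mu) and the 0.5 / 0.25 contrast scaling are assumed.
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def expected_alpha(mu, beta_g, g_score, volatile, good):
    """Expected learning rate for a participant with general factor score
    `g_score`, combining group means and general-factor effects per component."""
    comp = {k: mu[k] + beta_g.get(k, 0.0) * g_score for k in mu}
    x = (comp["baseline"]
         + 0.5 * comp["volatile_stable"] * volatile
         + 0.5 * comp["good_bad"] * good
         + 0.25 * comp["good_bad_x_volatile_stable"] * good * volatile)
    return logistic(x)

mu = {"baseline": -1.0, "volatile_stable": 0.4, "good_bad": 0.3,
      "good_bad_x_volatile_stable": 0.3}               # illustrative group means
beta_g = {"volatile_stable": -0.18, "good_bad": -0.21,
          "good_bad_x_volatile_stable": -0.19}         # signs as reported above

for g in (+1.0, -1.0):                                  # +1 SD ("high") vs -1 SD ("low")
    a = expected_alpha(mu, beta_g, g, volatile=+1, good=+1)
    print(f"general factor {g:+.0f} SD, volatile block, good outcomes: alpha = {a:.3f}")
```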
Figure 4—figure supplement 1. Effect of depression-specific and anxiety-specific factors on learning rate and its components (data from experiment 1).
Panel (a) and panel (b) show the effects of depression-specific factor scores and anxiety-specific factor scores, respectively, on each of the learning rate components (e.g. αvolatile−stable). The 95% HDIs for the population-level parameters βd and βa included zero for all parameter components. This indicates that neither depression-specific nor anxiety-specific factor scores credibly modulated learning rate as a function of block type (volatile versus stable), task version (reward versus aversive), or relative outcome value (good versus bad), or as a function of any of the two-way interactions of these experimental factors.
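As a rough illustration of what the population-level coefficients (βg, βd, βa) estimate, the sketch below fits a simplified regression of one learning rate component on the three factor scores using PyMC3 (the probabilistic programming package cited in the references). Unlike the paper's model, which estimates these effects jointly with the trial-level choice data, this sketch treats synthetic per-participant component values as observed data.

```python
# Drastically simplified sketch of the population-level regression layer:
# a learning-rate component (here, the volatile-minus-stable adjustment) is
# modeled as a group mean plus linear effects of the general (g),
# depression-specific (d) and anxiety-specific (a) factor scores.
import numpy as np
import pymc3 as pm

rng = np.random.default_rng(0)
n = 86
g, d, a = rng.standard_normal((3, n))                    # z-scored factor scores
alpha_vol_minus_stable = 0.3 - 0.18 * g + 0.2 * rng.standard_normal(n)  # synthetic data

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)              # group mean of the component
    beta_g = pm.Normal("beta_g", mu=0.0, sigma=1.0)      # general factor effect
    beta_d = pm.Normal("beta_d", mu=0.0, sigma=1.0)      # depression-specific effect
    beta_a = pm.Normal("beta_a", mu=0.0, sigma=1.0)      # anxiety-specific effect
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("obs", mu=mu + beta_g * g + beta_d * d + beta_a * a,
              sigma=sigma, observed=alpha_vol_minus_stable)
    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(pm.summary(trace))
```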
Figure 5. Experiment 2: Effect of general factor scores on learning rate (online sample, n = 147).
Panel (a) shows posterior means and 95% highest posterior density intervals (HDI) for the effect of general factor scores (βg) on each of the learning rate components. Replicating findings from experiment 1, general factor scores credibly modulated the extent to which learning rate varied between the stable and volatile task blocks (αvolatile−stable; βg = −0.16, 95%-HDI = [−0.32, −0.01]) and the extent to which this in turn varied as a function of relative outcome value (α(good−bad)x(volatile−stable); βg = −0.14, 95%-HDI = [−0.27, −0.02]). (b) Here, we illustrate learning rate as a function of each within-subject condition and high (+1 standard deviation, shown in red) versus low (−1 standard deviation, shown in blue) scores on the general factor. As in experiment 1, participants with low general factor scores showed a boost in learning under volatile conditions following receipt of outcomes of good relative value (reward gain or no reward loss). Once again, this boost is not evident in participants with high general factor scores. Small data points represent posterior mean parameter estimates for individual participants. Large points represent the posterior mean learning rates expected for participants with scores one standard deviation above or below the mean on the general factor. Error bars represent the posterior standard deviation for these expected learning rates. As in experiment 1, there was no cross-group effect of task version (gain - loss) and no effect of general factor scores, or of anxiety- or depression-specific factor scores, on learning rate components involving task version (gain - loss). As in experiment 1, baseline rates of learning were highly variable. Results for anxiety- and depression-specific factor scores are shown in Figure 5—figure supplement 2.
Figure 5—figure supplement 1. Cross-group results from experiment 2 for effects of block type (volatile, stable), task version (reward, aversive), and relative outcome value (good, bad) on learning rate.
(a) This panel shows the posterior means along with the 95% highest posterior density intervals (HDI) for the group means (μs) for each learning rate component (i.e. for baseline learning rate and the change in learning rate as a function of each factor and their two-way interactions). The 95% posterior interval excluded zero for the effect of relative outcome value (good - bad) upon learning rate. In contrast to experiment 1, there was no credible effect of block type upon learning rate (αvolatile−stable) or of task version (αreward−aversive). None of the two-way interactions were credible. (b) In this panel, the learning rate components are combined to illustrate how learning rates changed across conditions. The posterior mean learning rate for individual participants (small dots) and the group posterior mean learning rate (large dots, error bars represent the associated posterior standard deviation) are given for each of the eight conditions; these values were calculated from the posterior distributions of the learning rate components (αbaseline, αvolatile−stable, etc.) and the group means (μs).
Figure 5—figure supplement 2. Effect of depression-specific and anxiety-specific factors on learning rate and its components (data from experiment 2).
Panel (a) and panel (b) show the effects of depression-specific factor scores and anxiety-specific factor scores, respectively, on each of the learning rate components (e.g. αvolatile−stable). The 95% HDIs for the population-level parameters βd and βa included zero for all parameter components. This indicates that, as observed in experiment 1, neither depression-specific nor anxiety-specific factor scores credibly modulated learning rate as a function of block type (volatile versus stable), task version (reward versus aversive), or relative outcome value (good versus bad), or as a function of any of the two-way interactions of these experimental factors.
Appendix 4—figure 1. Recovery of individual-level learning rate parameters.
Posterior mean parameter estimates for participants from experiment 1 were used to simulate new choice data from the winning model (#11). The model was then re-fit to each of these simulated datasets. The original parameters estimated from the actual dataset (referred to as ‘ground truth’ parameters) were correlated with the newly estimated parameters (referred to as ‘recovered’ parameters) for each simulated dataset. An example dataset is shown here. Each panel shows the ground truth and recovered posterior means for a separate component of the composite learning rate parameter. The x-axis corresponds to original ‘ground truth’ parameter values and the y-axis corresponds to the recovered parameter values; each datapoint represents an individual participant. The average correlation between ground truth and recovered parameter values for learning rate components, across 10 simulated datasets, was r = 0.88 (std = 0.13).
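The simulate-refit-correlate logic of this parameter recovery analysis can be illustrated with a deliberately simplified, non-hierarchical model: a two-option delta-rule learner with a single learning rate and inverse temperature, fit by maximum likelihood. This is a toy stand-in for the procedure, not the paper's model #11.

```python
# Toy parameter-recovery sketch: simulate choices at known ("ground truth")
# parameter values, re-fit each simulated participant by maximum likelihood,
# and correlate true with recovered learning rates.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def choice_prob(p_est, beta):
    """Probability of choosing option 0 given its estimated outcome probability."""
    return np.clip(expit(beta * (2.0 * p_est - 1.0)), 1e-6, 1 - 1e-6)

def simulate(alpha, beta, outcomes, rng):
    """outcomes[t] = 1 if option 0 yields the outcome on trial t, else 0."""
    p_est, choices = 0.5, np.zeros(len(outcomes), dtype=int)
    for t, o in enumerate(outcomes):
        choices[t] = 0 if rng.random() < choice_prob(p_est, beta) else 1
        p_est += alpha * (o - p_est)                 # simplified full-feedback update
    return choices

def neg_log_lik(params, choices, outcomes):
    alpha, beta = expit(params[0]), np.exp(params[1])  # map to valid ranges
    p_est, nll = 0.5, 0.0
    for c, o in zip(choices, outcomes):
        p0 = choice_prob(p_est, beta)
        nll -= np.log(p0 if c == 0 else 1.0 - p0)
        p_est += alpha * (o - p_est)
    return nll

# Contingency schedule loosely mirroring the task: a stable block followed by
# a volatile block with reversals every 20 trials.
rng = np.random.default_rng(0)
p_block = np.concatenate([np.full(90, 0.75),
                          np.repeat([0.8, 0.2, 0.8, 0.2, 0.8], [20, 20, 20, 20, 10])])

true_alpha = rng.uniform(0.05, 0.6, size=30)            # 30 simulated participants
true_beta = rng.uniform(2.0, 10.0, size=30)
recovered_alpha = np.zeros_like(true_alpha)
for i in range(30):
    outcomes = (rng.random(len(p_block)) < p_block).astype(float)
    choices = simulate(true_alpha[i], true_beta[i], outcomes, rng)
    fit = minimize(neg_log_lik, x0=[0.0, 1.0], args=(choices, outcomes))
    recovered_alpha[i] = expit(fit.x[0])

print("r(ground truth alpha, recovered alpha) =",
      np.round(np.corrcoef(true_alpha, recovered_alpha)[0, 1], 2))
```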
Appendix 4—figure 2. Recovery of other model parameters.
Posterior mean parameter estimates for participants from experiment 1 were used to simulate new choice data from the winning model (#11). As in Appendix 4—figure 1, we show the results of parameter recovery for one example simulated dataset; here, we present data for parameters other than learning rate. Each panel shows the ground truth and recovered posterior means for a separate model parameter component. The x-axis corresponds to original ‘ground truth’ parameter values and the y-axis corresponds to the recovered parameter values; each datapoint represents an individual participant. The average correlation between ground truth and recovered parameter values across the 10 datasets for other (non-learning rate) parameters was r = 0.76 (std = 0.15).
Appendix 4—figure 3. Variability of population-level learning rate parameters across simulated datasets.
The robustness of the estimates for the population-level parameters (μ, βg, βa, βd) was explored by examining the variability in parameter values across the 10 simulated datasets (blue data points). Population-level parameters corresponding to learning rate components are shown in this figure. The simulated datasets used for this analysis were the same as those used for Appendix 4—figure 1 and Appendix 4—figure 2. Since each of the 10 simulated datasets uses the same ground-truth parameters for generating data (black data points), differences across these datasets reflect an estimate for the amount of noise in participants’ choices; this choice noisiness is captured by the two inverse temperatures in the model. Consistency across datasets and proximity to the ground-truth parameters indicate robustness to this type of noise.
Appendix 4—figure 4. Variability of population-level parameters across simulated datasets for other parameters.
The robustness of the estimates for the population-level parameters (μ, βg, βa, βd) was explored by examining the variability in parameter values across the 10 simulated datasets (blue data points). Population-level parameters corresponding to all other parameter components aside from those for learning rate are shown in this figure. The simulated datasets used for this analysis were the same as those used for Appendix 4—figure 1 and Appendix 4—figure 2. Since each of the 10 simulated datasets uses the same ground-truth parameters for generating data (black data points), differences across these datasets reflect an estimate for the amount of noise in participants’ choices; this choice noisiness is captured by the two inverse temperatures in the model. Consistency across datasets and proximity to the ground-truth parameters indicate robustness to this type of noise. Apart from the baseline component of each parameter, simulated parameter component ranges are relatively narrow and predominantly encompass the parameter values estimated from the actual dataset (black data points).
Appendix 4—figure 5. Recovery of individual-level learning rate parameters in the three-way interaction model.
In experiment 1, we additionally fit a model that included the three-way interaction of block type (volatile, stable), relative outcome value (good, bad), and task version (reward, aversive) for learning rate. This model was identical to the winning model (#11) except for the inclusion of the three-way interaction and was used to confirm that the relationship between general factor scores and the interaction of block type by relative outcome value on learning rate did not vary as a function of task version. Posterior means for each participant’s model parameters were used to simulate new choice data from the model. The model was then re-fit to each of these simulated datasets. The original parameters estimated from the actual dataset (referred to as ‘ground truth’ parameters) were correlated with the newly estimated parameters (referred to as ‘recovered’ parameters) for each simulated dataset. An example dataset is shown here. Each panel shows the ground truth and recovered posterior means for a separate component of the composite learning rate parameter. The x-axis corresponds to original ground truth parameter values and the y-axis corresponds to the recovered parameter values; each datapoint represents an individual participant. The correlation between the ground truth and recovered parameters was high, even for the three-way interaction (bottom right panel), indicating good parameter recoverability. The average correlation between ground truth and recovered parameter values for this three-way interaction, across 10 simulated datasets, was r = 0.86 (std = 0.10).
Appendix 5—figure 1. Comparison of task performance between experiment 1 and experiment 2.
The four panels depict the performance of participants in each block (stable, left column; volatile, right column) and in each task (reward, top row; punishment, bottom row). Data from experiment 1 is shown in blue; data from experiment 2 is shown in orange. To assess performance, the magnitudes of outcomes received were averaged across trials. Higher average magnitudes for the reward condition indicate better performance. Higher average magnitudes for the loss and shock outcomes indicate worse performance.
Appendix 6—figure 1. Learning rate parameters for experiment 1 data as estimated using alternate population-level parameters for specific effects of anxiety and depression.
Two alternative models were fit to the behavioral data from experiment 1, in addition to the main bifactor model. For the first alternative model, the population-level parameters entered into the model comprised scores on the general factor and residual scores on the MASQ anhedonia subscale and the Penn State Worry Questionnaire (PSWQ). These residual scores were created by removing variance explainable by general factor scores from the MASQ and PSWQ scores; as such, these scores provide alternative depression-specific and anxiety-specific symptom measures. This model is abbreviated as ‘general + MASQADrG + PSWQrG’. For the second alternative model, residual PSWQ scores were replaced by residual scores for the MASQ anxious arousal subscale. This enabled us to investigate whether anxiety-related symptoms uniquely captured by the MASQ-AA influence learning rate. This model is abbreviated as ‘general + MASQADrG + MASQAArG’. The main model is labeled simply as ‘bifactor model’. Both alternative models yielded general factor learning rate effects that were consistent with the main model (panel b). No additional effects were observed for the depression or anxiety subscales (panels c-d).
Appendix 6—figure 2. Learning rate parameters for experiment 2 data as estimated using alternate population-level parameters for specific effects of anxiety and depression.
In addition to the main bifactor model, an alternative model was fit to the behavioral data from experiment 2. In this model (the second alternative model described in Appendix 6—figure 1), the population-level parameters entered into the model comprised scores on the general factor and residual scores on the MASQ anhedonia subscale and the MASQ anxious arousal subscale (having regressed out variance explainable by general factor scores). This model is abbreviated as ‘general + MASQADrG + MASQAArG’. As in Appendix 6—figure 1, the main model is labeled simply as ‘bifactor model’. The alternative model yielded general factor learning rate effects that were consistent with the main model (panel b). No additional effects were observed for residual scores on the MASQ-AD subscale (panel c). Elevated residual scores on the MASQ-AA subscale were linked to increased learning after outcomes of relative positive value (good - bad) but did not modulate adaptation of learning rate to volatility or the interaction of volatility and relative outcome value (good - bad). We note that no equivalent findings were observed for MASQ-AA in experiment 1 (see Appendix 6—figure 1). We could not fit the ‘general + MASQADrG + PSWQrG’ model described in Appendix 6—figure 1 to this dataset, as these participants were not administered the PSWQ.
Appendix 7—figure 1. Comparison of actual and model generated numbers of switch trials in experiment 1.
For each participant, we calculated the number of trials on which they switched choice of shape. As described under parameter recovery, each participant’s posterior means for each of model #11’s parameters were used together with model #11 to simulate 10 new datasets. For each of these simulated datasets, the number of switch trials was computed and correlated with the actual number of switch trials for the corresponding participant. This is shown here for an example dataset, with switch trials for each combination of task version (reward, aversive) and block type (volatile, stable) shown in a separate panel. Mean correlations between actual and generated switch trials were high (rs > 0.88 across the 4 conditions and 10 datasets), demonstrating that the model can reproduce a basic qualitative feature of participants’ choice behavior.
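Counting switch trials and comparing actual with model-generated counts is a simple computation; the sketch below uses hypothetical choice arrays in place of the real and model #11-simulated data.

```python
# Sketch of the posterior predictive check described above: count the trials
# on which each participant switched their chosen shape, do the same for
# model-generated choices, and correlate the two counts across participants.
# `actual_choices` and `simulated_choices` are hypothetical (n_participants,
# n_trials) arrays of 0/1 shape choices for one condition.
import numpy as np

def n_switches(choices):
    """Number of trials on which the choice differs from the previous trial,
    computed per participant (row)."""
    return (np.diff(choices, axis=1) != 0).sum(axis=1)

rng = np.random.default_rng(0)
n_participants, n_trials = 86, 90
actual_choices = rng.integers(0, 2, size=(n_participants, n_trials))
# For illustration, build "simulated" choices that share some structure with
# the actual ones (in the paper these would come from model #11).
simulated_choices = np.where(rng.random((n_participants, n_trials)) < 0.8,
                             actual_choices, 1 - actual_choices)

r = np.corrcoef(n_switches(actual_choices), n_switches(simulated_choices))[0, 1]
print("r(actual switches, simulated switches) =", r.round(2))
```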
Appendix 7—figure 2. Comparison of actual and model generated numbers of switch trials in experiment 2.
For each participant, we calculated the number of trials on which they switched choice of shape. As described under parameter recovery, each participant’s posterior means for each of model #11’s parameters were used together with model #11 to simulate 10 new datasets. For each of these simulated datasets, the number of switch trials was computed and correlated with the actual number of switch trials for the corresponding participant. This is shown here for an example dataset, with switch trials for each combination of task version (reward, aversive) and block type (volatile, stable) shown in a separate panel. Mean correlations between actual and generated switch trials were high (rs > 0.80 across the 4 conditions and 10 datasets), demonstrating that the model can reproduce a basic qualitative feature of participants’ choice behavior.
Author response image 1. Information captured by top five PCs: We conducted PCA on the item-level questionnaire responses from experiment 1 (n=86).
Scores on the first PC correlated highly with general factor scores (r=0.9). Scores on the second PC correlated strongly positively with PSWQ (r=0.59) and moderately negatively with MASQAD (r=-0.3). The third PC correlated most strongly with MASQAA (r=0.5), but also correlated moderately with the BDI and CESD (r=0.19, r=0.18). The fourth PC correlated moderately with BDI (r=-0.25) but not with CESD (r=-0.03), perhaps capturing something specific to the BDI. The fifth PC did not correlate strongly with any subscale. Correlations are shown for the experiment 1 dataset only, on which the PCA and the correlated-factors model were estimated.
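A minimal sketch of this PCA-plus-correlation analysis is given below, using a hypothetical item-response matrix and an illustrative subscale score in place of the experiment 1 data.

```python
# Minimal sketch: run PCA on item-level questionnaire responses and correlate
# component scores with a subscale summary score. `item_responses` and the
# subscale are placeholders for the experiment 1 data (n = 86, 128 items).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
item_responses = rng.integers(0, 4, size=(86, 128)).astype(float)  # Likert-style items
pswq_scores = item_responses[:, :16].sum(axis=1)                   # illustrative subscale

pcs = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(item_responses))
for k in range(5):
    r = np.corrcoef(pcs[:, k], pswq_scores)[0, 1]
    print(f"r(PC{k + 1}, PSWQ) = {r:+.2f}")
```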
Author response image 2.

References

    1. Akaishi R, Umeda K, Nagase A, Sakai K. Autonomous mechanism of internal choice estimate underlies decision inertia. Neuron. 2014;81:195–206. doi: 10.1016/j.neuron.2013.10.018.
    1. Anderson TW, Rubin H. Statistical inference in factor analysis. Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability; 1956. pp. 111–150.
    1. Aylward J, Valton V, Ahn WY, Bond RL, Dayan P, Roiser JP, Robinson OJ. Altered learning under uncertainty in unmedicated mood and anxiety disorders. Nature Human Behaviour. 2019;3:1116–1123. doi: 10.1038/s41562-019-0628-0.
    1. Bach DR, Hulme O, Penny WD, Dolan RJ. The Known Unknowns: Neural Representation of Second-Order Uncertainty, and Ambiguity. Journal of Neuroscience. 2011;31:4811–4820. doi: 10.1523/JNEUROSCI.1452-10.2011.
    1. Beck AT, Ward C, Mendelson M, Mock J, Erbaugh J. Beck depression inventory (BDI) Archives of General Psychiatry. 1961;4:561–571.
    1. Behrens TE, Woolrich MW, Walton ME, Rushworth MF. Learning the value of information in an uncertain world. Nature Neuroscience. 2007;10:1214–1221. doi: 10.1038/nn1954.
    1. Berns GS, Bell E. Striatal topography of probability and magnitude information for decisions under uncertainty. NeuroImage. 2012;59:3166–3172. doi: 10.1016/j.neuroimage.2011.11.008.
    1. Boswell JF, Thompson-Hollands J, Farchione TJ, Barlow DH. Intolerance of uncertainty: a common factor in the treatment of emotional disorders. Journal of Clinical Psychology. 2013;69:630–645. doi: 10.1002/jclp.21965.
    1. Boureau YL, Dayan P. Opponency revisited: competition and cooperation between dopamine and serotonin. Neuropsychopharmacology. 2011;36:74–97. doi: 10.1038/npp.2010.151.
    1. Brodbeck J, Abbott RA, Goodyer IM, Croudace TJ. General and specific components of depression and anxiety in an adolescent population. BMC Psychiatry. 2011;11:191. doi: 10.1186/1471-244X-11-191.
    1. Browning M, Behrens TE, Jocham G, O'Reilly JX, Bishop SJ. Anxious individuals have difficulty learning the causal statistics of aversive environments. Nature Neuroscience. 2015;18:590–596. doi: 10.1038/nn.3961.
    1. Carleton RN, Mulvogue MK, Thibodeau MA, McCabe RE, Antony MM, Asmundson GJ. Increasingly certain about uncertainty: intolerance of uncertainty across anxiety and depression. Journal of Anxiety Disorders. 2012;26:468–479. doi: 10.1016/j.janxdis.2012.01.011.
    1. Clark DA, Steer RA, Beck AT. Common and specific dimensions of self-reported anxiety and depression: implications for the cognitive and tripartite models. Journal of Abnormal Psychology. 1994;103:645–654. doi: 10.1037/0021-843X.103.4.645.
    1. Clark LA, Watson D. Tripartite model of anxiety and depression: Psychometric evidence and taxonomic implications. Journal of Abnormal Psychology. 1991;100:316–336. doi: 10.1037/0021-843X.100.3.316.
    1. Clark LA, Watson D. The Mini Mood and Anxiety Symptom Questionnaire (Mini-MASQ) Department of Psychology, University of Iowa; 1995.
    1. Cox SM, Frank MJ, Larcher K, Fellows LK, Clark CA, Leyton M, Dagher A. Striatal D1 and D2 signaling differentially predict learning from positive and negative outcomes. NeuroImage. 2015;109:95–101. doi: 10.1016/j.neuroimage.2014.12.070.
    1. Donahue CH, Lee D. Dynamic routing of task-relevant signals for decision making in dorsolateral prefrontal cortex. Nature Neuroscience. 2015;18:295–301. doi: 10.1038/nn.3918.
    1. Dugas MJ, Gagnon F, Ladouceur R, Freeston MH. Generalized anxiety disorder: a preliminary test of a conceptual model. Behaviour Research and Therapy. 1998;36:215–226. doi: 10.1016/S0005-7967(97)00070-3.
    1. Dugas MJ, Gosselin P, Ladouceur R. Intolerance of uncertainty and worry: investigating specificity in a nonclinical sample. Cognitive Therapy and Research. 2001;25:551–558. doi: 10.1023/A:1005553414688.
    1. Eldar E, Hauser TU, Dayan P, Dolan RJ. Striatal structure and function predict individual biases in learning to avoid pain. PNAS. 2016;113:4812–4817. doi: 10.1073/pnas.1519829113.
    1. Elliott R, Sahakian BJ, Herrod JJ, Robbins TW, Paykel ES. Abnormal response to negative feedback in unipolar depression: evidence for a diagnosis specific impairment. Journal of Neurology, Neurosurgery & Psychiatry. 1997;63:74–82. doi: 10.1136/jnnp.63.1.74.
    1. Eysenck HJ, Eysenck SBG. Manual of the Eysenck Personality Questionnaire (Junior and Adult) Hodder and Stoughton; 1975.
    1. Floyd FJ, Widaman KF. Factor analysis in the development and refinement of clinical assessment instruments. Psychological Assessment. 1995;7:286–299. doi: 10.1037/1040-3590.7.3.286.
    1. Frank MJ, Moustafa AA, Haughey HM, Curran T, Hutchison KE. Genetic triple dissociation reveals multiple roles for dopamine in reinforcement learning. PNAS. 2007;104:16311–16316. doi: 10.1073/pnas.0706111104.
    1. Freeston MH, Rhéaume J, Letarte H, Dugas MJ, Ladouceur R. Why do people worry? Personality and Individual Differences. 1994;17:791–802. doi: 10.1016/0191-8869(94)90048-5.
    1. Gelman A, Carlin JB, Stern HS, Dunson DB, Vehtari A, Rubin DB. Bayesian Data Analysis. CRC press; 2013.
    1. Gelman A, Rubin DB. Inference from iterative simulation using multiple sequences. Statistical Science. 1992;7:457–472. doi: 10.1214/ss/1177011136.
    1. Gentes EL, Ruscio AM. A meta-analysis of the relation of intolerance of uncertainty to symptoms of generalized anxiety disorder, major depressive disorder, and obsessive-compulsive disorder. Clinical Psychology Review. 2011;31:923–933. doi: 10.1016/j.cpr.2011.05.001.
    1. Horn JL. A rationale and test for the number of factors in factor analysis. Psychometrika. 1965;30:179–185. doi: 10.1007/BF02289447.
    1. Humphreys LG, Montanelli RG. An investigation of the parallel analysis criterion for determining the number of common factors. Multivariate Behavioral Research. 1975;10:193–205. doi: 10.1207/s15327906mbr1002_5.
    1. Huys QJ, Daw ND, Dayan P. Depression: a decision-theoretic analysis. Annual Review of Neuroscience. 2015;38:1–23. doi: 10.1146/annurev-neuro-071714-033928.
    1. Ito M, Doya K. Validation of decision-making models and analysis of decision variables in the rat basal ganglia. Journal of Neuroscience. 2009;29:9861–9874. doi: 10.1523/JNEUROSCI.6157-08.2009.
    1. Jennrich RI, Bentler PM. Exploratory Bi-factor analysis. Psychometrika. 2011;76:537–549. doi: 10.1007/s11336-011-9218-4.
    1. Jöreskog KG. On the estimation of polychoric correlations and their asymptotic covariance matrix. Psychometrika. 1994;59:381–389. doi: 10.1007/BF02296131.
    1. Lau B, Glimcher PW. Dynamic response-by-response models of matching behavior in rhesus monkeys. Journal of the Experimental Analysis of Behavior. 2005;84:555–579. doi: 10.1901/jeab.2005.110-04.
    1. Li CH. Confirmatory factor analysis with ordinal data: comparing robust maximum likelihood and diagonally weighted least squares. Behavior Research Methods. 2016;48:936–949. doi: 10.3758/s13428-015-0619-7.
    1. Li J, Daw ND. Signals in human striatum are appropriate for policy update rather than value prediction. Journal of Neuroscience. 2011;31:5504–5511. doi: 10.1523/JNEUROSCI.6316-10.2011.
    1. Lissek S, Powers AS, McClure EB, Phelps EA, Woldehawariat G, Grillon C, Pine DS. Classical fear conditioning in the anxiety disorders: a meta-analysis. Behaviour Research and Therapy. 2005;43:1391–1424. doi: 10.1016/j.brat.2004.10.007.
    1. Meyer TJ, Miller ML, Metzger RL, Borkovec TD. Development and validation of the penn state worry questionnaire. Behaviour Research and Therapy. 1990;28:487–495. doi: 10.1016/0005-7967(90)90135-6.
    1. Mkrtchian A, Aylward J, Dayan P, Roiser JP, Robinson OJ. Modeling avoidance in mood and anxiety disorders using reinforcement learning. Biological Psychiatry. 2017;82:532–539. doi: 10.1016/j.biopsych.2017.01.017.
    1. Nassar MR, Rumsey KM, Wilson RC, Parikh K, Heasly B, Gold JI. Rational regulation of learning dynamics by pupil-linked arousal systems. Nature Neuroscience. 2012;15:1040–1046. doi: 10.1038/nn.3130.
    1. Palminteri S, Pessiglione M. Decision Neuroscience. Academic Press; 2017. Opponent brain systems for reward and punishment learning: causal evidence from drug and lesion studies in humans; pp. 291–303.
    1. Payzan-LeNestour E, Dunne S, Bossaerts P, O'Doherty JP. The neural representation of unexpected uncertainty during value-based decision making. Neuron. 2013;79:191–201. doi: 10.1016/j.neuron.2013.04.037.
    1. Pulcu E, Browning M. Affective Bias as a rational response to the statistics of rewards and punishments. eLife. 2017;6:e27879. doi: 10.7554/eLife.27879.
    1. Qualtrics L. 0.1Qualtrics. 2014
    1. Radloff LS. The CES-D scale: a self-report depression scale for research in the general population. Applied Psychological Measurement. 1977;1:385–401. doi: 10.1177/014662167700100306.
    1. Reise SP. Invited paper: the rediscovery of bifactor measurement models. Multivariate Behavioral Research. 2012;47:667–696. doi: 10.1080/00273171.2012.715555.
    1. Robinson OJ, Chase HW. Learning and choice in mood disorders: searching for the computational parameters of anhedonia. Computational Psychiatry. 2017;1:208–233. doi: 10.1162/CPSY_a_00009.
    1. Salvatier J, Wiecki TV, Fonnesbeck C. Probabilistic programming in Python using PyMC3. PeerJ Computer Science. 2016;2:e55. doi: 10.7717/peerj-cs.55.
    1. Schmid J, Leiman JM. The development of hierarchical factor solutions. Psychometrika. 1957;22:53–61. doi: 10.1007/BF02289209.
    1. Simms LJ, Grös DF, Watson D, O'Hara MW. Parsing the general and specific components of depression and anxiety with bifactor modeling. Depression and Anxiety. 2008;25:E34–E46. doi: 10.1002/da.20432.
    1. Spielberger CD, Gorsuch RL, Lushene R, Vagg PR, Jacobs GA. Manual for the State-Trait Anxiety Inventory. Consulting Psychologists Press; 1983.
    1. Steele JD, Kumar P, Ebmeier KP. Blunted response to feedback information in depressive illness. Brain. 2007;130:2367–2374. doi: 10.1093/brain/awm150.
    1. Steer RA, Clark DA, Beck AT, Ranieri WF. Common and specific dimensions of self-reported anxiety and depression: a replication. Journal of Abnormal Psychology. 1995;104:542–545. doi: 10.1037/0021-843X.104.3.542.
    1. Steer RA, Clark DA, Beck AT, Ranieri WF. Common and specific dimensions of self-reported anxiety and depression: the BDI-II versus the BDI-IA. Behaviour Research and Therapy. 1999;37:183–190. doi: 10.1016/S0005-7967(98)00087-4.
    1. Steer RA, Clark DA, Kumar G, Beck AT. Common and specific dimensions of Self-Reported anxiety and depression in adolescent outpatients. Journal of Psychopathology and Behavioral Assessment. 2008;30:163–170. doi: 10.1007/s10862-007-9060-2.
    1. Vehtari A, Gelman A, Gabry J. Practical bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing. 2017;27:1413–1432. doi: 10.1007/s11222-016-9696-4.
    1. Watanabe S, Opper M. Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research. 2010;11:3571–3594.
    1. Watson D, Clark LA. The Mood and Anxiety Symptom Questionnaire (MASQ) Department of Psychology, University of Iowa; 1991.
    1. Yu AJ, Dayan P. Uncertainty, neuromodulation, and attention. Neuron. 2005;46:681–692. doi: 10.1016/j.neuron.2005.04.026.
    1. Zinbarg RE, Barlow DH. Structure of anxiety and the anxiety disorders: a hierarchical model. Journal of Abnormal Psychology. 1996;105:181–193. doi: 10.1037/0021-843X.105.2.181.

Source: PubMed
