Listeners' perceptions of the certainty and honesty of a speaker are associated with a common prosodic signature

Louise Goupil, Emmanuel Ponsot, Daniel Richardson, Gabriel Reyes, Jean-Julien Aucouturier

Abstract

The success of human cooperation crucially depends on mechanisms enabling individuals to detect unreliability in their conspecifics. Yet, how such epistemic vigilance is achieved from naturalistic sensory inputs remains unclear. Here we show that listeners' perceptions of the certainty and honesty of other speakers from their speech are based on a common prosodic signature. Using a data-driven method, we separately decode the prosodic features driving listeners' perceptions of a speaker's certainty and honesty across pitch, duration and loudness. We find that these two kinds of judgments rely on a common prosodic signature that is perceived independently from individuals' conceptual knowledge and native language. Finally, we show that listeners extract this prosodic signature automatically, and that this impacts the way they memorize spoken words. These findings shed light on a unique auditory adaptation that enables human listeners to quickly detect and react to unreliability during linguistic interactions.

Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1. Reverse correlation results (study 1).
a Dynamic prosodic representations. Normalized kernels derived from the reverse correlation analyses in both tasks (top: certainty, blue; bottom: honesty, green) across the three acoustic dimensions (pitch, loudness, and duration). Filter amplitudes (a.u., arbitrary units) correspond to the values obtained for each participant, task, acoustic dimension, and segment by subtracting the average (pitch, loudness, and duration) values obtained for the stimuli judged as certain/honest from the values averaged for the unchosen stimuli, and then normalizing these values for each participant by dividing them by the sum of their absolute values. Data show group averages, with shaded areas showing the SEM. Significant deviations from zero (one-sample two-sided t tests) are indicated at the corresponding segment positions by circles, with increasing sizes corresponding to p < 0.1, p < 0.05, p < 0.01, and p < 0.001; certainty task (p values per segment for pitch: 0.86, 0.69, 0.91, 0.64, 0.77, 0.49, 0.11, 0.11, 0.13, 0.14, 0.01, 0.004; loudness: 0.0005, 0.51, 0.37, 0.007, 0.38, 0.18, 0.0001, 0.0001, 0.44, 0.22, 0.12, 0.16; duration: 0.6, 0.03, 0.07, 0.04, 0.94); honesty task (pitch: 0.33, 0.29, 0.44, 0.34, 0.14, 0.09, 0.03, 0.06, 0.08, 0.30, 0.62, 0.30; loudness: 0.29, 0.24, 0.07, 0.01, 0.002, 0.53, 0.96, 0.17, 0.42, 0.5, 0.098, 0.88; duration: 0.98, 0.24, 0.30, 0.048, 0.94). Kernels were computed at 5 time points for duration (corresponding to the initial values of the audio transformations) and at 12 time points for pitch and loudness (corresponding to the post-transformation acoustic analysis of the stimuli; see “Methods”). Individual raw (i.e., non-normalized) kernels are shown in Fig. SII.a.
b Sensitivity to mean features. To assess the extent to which mean pitch, loudness, and duration affected participants’ judgments at a static level, we constructed, for each participant and task, psychometric functions relating sensory evidence (computed for each trial as the area under the curve corresponding to the difference between the dynamic profiles of the first minus the second stimulus) to participants’ choices (i.e., the probability of choosing the first stimulus). Bar plots show the slopes averaged over the group separately for each task, with error bars showing the SEM. Dots show individual data. The white asterisk shows the result of a one-sample Wilcoxon signed-rank test with *p < 0.05; pitch (0.33/0.19), loudness (0.4/0.84), duration (0.012/0.053).
c Sensitivity to feature variability. For each trial, the standard deviations of pitch, loudness, and duration for the stimuli judged as more reliable (honest, certain) were subtracted from those of the stimuli judged as less reliable (lying, doubtful; Δ: difference). Bar plots show the slopes averaged over the group separately for each task, with error bars showing the SEM. Dots show individual data. White asterisks show the results of one-sample t tests against chance with *p < 0.05; **p < 0.01; ***p < 0.001; pitch (certainty p = 0.017/honesty p = 0.0002); loudness variability (0.7/0.4); duration variability (0.009/0.0007). Source data are provided as a Source data file.
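For readers who want to see the kernel computation from panel a in concrete form, here is a minimal Python sketch. The variable names (pitch_chosen, pitch_unchosen) and the simulated data are illustrative assumptions, not the authors' analysis code; only the sign convention of the subtraction may differ from the caption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_segments = 200, 12        # 12 pitch/loudness segments per stimulus

# Hypothetical per-trial pitch profiles (e.g., pitch shift in cents) of the
# stimuli judged as certain/honest (chosen) and of the unchosen stimuli.
pitch_chosen = rng.normal(0, 70, size=(n_trials, n_segments))
pitch_unchosen = rng.normal(0, 70, size=(n_trials, n_segments))

# First-order kernel: difference between the mean feature profile of the
# chosen stimuli and that of the unchosen stimuli, one value per segment.
raw_kernel = pitch_chosen.mean(axis=0) - pitch_unchosen.mean(axis=0)

# Per-participant normalization: divide by the sum of absolute kernel values.
norm_kernel = raw_kernel / np.abs(raw_kernel).sum()
```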
Fig. 2. Stability and precision of the perceptual decisions made in the two tasks.
a Top: percentage of agreement across the two tasks (computed as the percentage of trials in which stimuli were classified similarly: voices classified as certain and honest versus doubting and lying correspond to an agreement). White asterisks show the significance of the two-sided t test comparing the percentage of agreement between tasks with chance level (50%), as reported in the main text, with *** corresponding to p < 0.001. Bottom: normalized (z-scored) confidence ratings averaged separately for agreements and disagreements. The black asterisk shows the result of the two-sided t test comparing confidence for agreements versus disagreements, as reported in the main text, with *** corresponding to p < 0.001. Data are presented as mean values with error bars showing the 95% confidence interval. Dots show individual data.
b Top: percentage of agreement within each task, computed as the percentage of double-pass trials in which stimuli were classified similarly. White asterisks show the significance of the two-sided t test comparing the percentage of agreement within each task with chance level (50%), as reported in the main text, with *** corresponding to p < 0.001. The black asterisk shows the result of the two-sided t test comparing the two tasks, as reported in the main text; *p = 0.02. Bottom: confidence ratings depending on agreement in the honesty (green) and certainty (blue) tasks. Green (honesty task) and blue (certainty task) asterisks show the results of the two-sided t tests comparing confidence for agreements versus disagreements within each task, with *** corresponding to p < 0.001. Data are presented as mean values with error bars showing the 95% confidence interval. Dots show individual data.
c Probability of responding that the first voice (p(choose S1)) sounds more certain (left, blue) or honest (right, green) as a function of the area under the curve computed by subtracting sensory evidence for the first minus the second stimulus, summed over the three acoustic dimensions. Darker lines correspond to high-confidence trials (above the median) and lighter lines to low-confidence trials (below the median). Circles show mean values and error bars the 95% confidence interval.
d Average confidence, sensitivity, metacognitive sensitivity, and metacognitive efficiency in the honesty and certainty tasks. Data represent mean values with error bars showing the 95% confidence interval, and dots show individual data; black asterisks show the results of the two-sided tests comparing the two tasks, and white asterisks show the results of the two-sided tests against chance level; t tests were used for confidence (normally distributed data), and Wilcoxon signed-rank tests for sensitivity, metacognitive sensitivity, and efficiency (non-normal data); *p < 0.05; **p < 0.01; ***p < 0.001; confidence: comparison between tasks, p = 0.037; sensitivity: tests against chance level, certainty p = 0.0011/honesty p = 0.012; comparison between tasks, p = 0.01; metacognitive sensitivity (0.0004/0.034/0.026); metacognitive efficiency (0.0004/0.01/0.72). Source data are provided as a Source data file.
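As a rough illustration of the double-pass agreement measure in panel b, the sketch below computes the proportion of repeated trials classified identically and then tests the group mean against the 50% chance level, mirroring the two-sided t test mentioned in the caption. All variable names and simulated values are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical binary choices (1 = "first stimulus chosen") on the trials
# presented twice (double-pass trials) for one participant.
n_double_pass = 60
resp_pass1 = rng.integers(0, 2, n_double_pass)
resp_pass2 = rng.integers(0, 2, n_double_pass)

# Within-task agreement: proportion of repeated trials classified identically.
agreement = np.mean(resp_pass1 == resp_pass2)

# Group-level comparison against the 50% chance level (one agreement value
# per participant; values below are simulated for illustration).
group_agreement = rng.normal(0.62, 0.06, size=20)
t_stat, p_val = stats.ttest_1samp(group_agreement, 0.5)
```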
Fig. 3. Relationship between perceptual and conceptual knowledge (study 2B).
a Normalized (z-scored) ratings in the certainty (top, blue; N = 20) and honesty (bottom, green; N = 20) tasks for each participant and prosody type (shown by different hues). Bar plots represent individual participants’ mean normalized ratings for each prosodic archetype, with error bars showing the 95% confidence interval. Data were sorted by effect magnitude. Squared markers below the plot show the listener’s gender (black: female; gray: male). Asterisks show the results of two-sided paired-sample t tests comparing reliable versus unreliable prosodies for each individual listener, with *p < 0.05; **p < 0.01; ***p < 0.001 (individual p values are reported in the Source data file). At the group level, in the certainty task, both honest and certain prosodies were judged as more certain than doubtful (honest: p < 0.001, Bonferroni-corrected post hoc Tukey HSD, d = 3.72; certain: p < 0.001, d = 4.14) and lying (honest: p < 0.001, d = 3.23; certain: p < 0.001, d = 3.72) prosodies. In the honesty task, greater inter-individual differences were observed (see the detailed report in the main text).
b Normalized ratings were split depending on participants’ responses to the explicit questions assessing their conceptual knowledge about epistemic prosody, which revealed that the relationship between prosody type and ratings did not vary with participants’ conceptual knowledge about certainty and honesty in general, with the exception of concepts about speed in the honesty task (shown by the green asterisk that represents the significant interaction between concepts about speed and prosody type on ratings of honesty). Data are presented as mean values with error bars showing the 95% confidence interval. Triple asterisks (***) show the significant result of the rmANOVA testing the interaction between concepts about speed and prosody in the honesty task, with normalized ratings as the dependent variable, p = 0.0007 (all other interactions were not significant). Source data and exact individual p values for a are provided as a Source data file.
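To make the per-listener analysis in panel a concrete, the following sketch z-scores one listener's ratings and compares reliable (certain/honest) with unreliable (doubt/lie) archetypes using a two-sided paired t test. The variable names, the simulated ratings, and the pairing of trials by utterance are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical raw ratings (e.g., on a 1-7 scale) of one listener for
# utterances carrying reliable vs. unreliable prosodic archetypes,
# paired by utterance.
ratings_reliable = rng.normal(4.5, 1.0, size=40)
ratings_unreliable = rng.normal(3.6, 1.0, size=40)

# z-score all ratings within the listener, matching the normalized ratings shown.
all_ratings = np.concatenate([ratings_reliable, ratings_unreliable])
z = (all_ratings - all_ratings.mean()) / all_ratings.std(ddof=1)
z_reliable, z_unreliable = z[:40], z[40:]

# Two-sided paired t test comparing the two archetype classes.
t_stat, p_val = stats.ttest_rel(z_reliable, z_unreliable)
```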
Fig. 4. Cross-linguistic validation for the certainty task including the group of French speakers (N = 20), a group of native English speakers (N = 22), and a group of Spanish speakers (N = 21).
Normalized ratings (z-scored) were averaged separately for each prosodic archetype and language group. Data are presented as mean values with error bars showing the 95% confidence interval. Crosses represent individual data for each prosodic archetype and native language. As was the case for the group of French speakers, Spanish and English speakers perceived certain/honest archetypes as more certain than doubt/lie archetypes (see main text for details). They also judged certain prosodies to be more certain than honest prosodies (p < 0.005; N = 21 Spanish speakers: d = 0.4; N = 22 English speakers: d = 0.7) and lying prosodies to be more certain than doubtful prosodies (p < 0.001; Spanish: d = 0.7; English: d = 0.8), showing the same sensitivity to small variations in the gain of the archetypes. Source data are provided as a Source data file.
Fig. 5. Automatic impact of the common prosodic signature on verbal working memory (study 3).
a Design of the memorization task. Participants heard six spoken pseudo-words before having to recognize a target pseudo-word presented along with two distractors. Unbeknownst to the participants, the spoken targets were pronounced with the prosodic archetypes derived from study 1 and were either reliable (certain or honest) or unreliable (lie or doubt), while the prosody of the five spoken distractors was randomly picked from the same pool of stimuli, ensuring equal saliency of the target and distractors.
b Main results of the memory task. Differences (Δ) between d’ (left), response times (middle), and confidence (right) for reliable minus unreliable prosodic archetypes. Data are presented as mean values with error bars showing the 95% confidence interval. Dots show individual data. Unreliable prosodies were memorized better and faster than reliable prosodies and were associated with more confident ratings. Black asterisks show the results of the two-sided paired t tests comparing reliable and unreliable prosodies, with *p < 0.05; **p = 0.01; d’: p = 0.01; response times: p = 0.026; confidence: p = 0.035.
c Recency effect. Top: accuracy (top left) and confidence (top right) for reliable (light gray) and unreliable (dark gray) prosodic archetypes as a function of the position of the target within the audio stream. Bottom: differences between reliable minus unreliable prosodic archetypes (black). There was no interaction between position and prosody for accuracy, but the impact of prosody on confidence judgments interacted with target position such that recent unreliable targets led to increased confidence. Data are presented as mean values with error bars showing the 95% confidence interval; *p < 0.05; **p = 0.01. Source data are provided as a Source data file.
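For reference, one standard way to obtain the sensitivity index d’ plotted in panel b is z(hit rate) − z(false-alarm rate). The sketch below uses that textbook formula with a log-linear correction and illustrative counts; the exact procedure used by the authors for this recognition task may differ.

```python
from scipy.stats import norm

# Hypothetical trial counts for one participant and one prosody class.
n_target_trials, n_lure_trials = 48, 96
hits, false_alarms = 38, 20

# Log-linear correction keeps rates away from 0 and 1 (avoids infinite z-scores).
hit_rate = (hits + 0.5) / (n_target_trials + 1)
fa_rate = (false_alarms + 0.5) / (n_lure_trials + 1)

# Classic signal-detection sensitivity: d' = z(H) - z(FA).
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
```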

