Streamlining experiment design in cognitive hearing science using OpenSesame

Eleonora Sulas, Pierre-Yves Hasan, Yue Zhang, François Patou

Abstract

Auditory science increasingly builds on concepts and testing paradigms that originated in behavioral psychology and cognitive neuroscience, an evolution that has given rise to the discipline now known as cognitive hearing science. Experimental cognitive hearing science paradigms call for hybrid cognitive and psychobehavioral tests such as those relating the attentional system, working memory, and executive functioning to low-level auditory acuity or speech intelligibility. Building complex multi-stimulus experiments can rapidly become time-consuming and error-prone. Platform-based experiment design can help streamline the implementation of cognitive hearing science experimental paradigms, promote the standardization of experiment design practices, and ensure reliability and control. Here, we introduce a set of features for the open-source Python-based OpenSesame platform that allows the rapid implementation of custom behavioral and cognitive hearing science tests, including complex multichannel audio stimuli, while interfacing with various synchronous inputs/outputs. Our integration includes advanced audio playback capabilities with multiple loudspeakers, an adaptive procedure, compatibility with standard I/Os, and their synchronization through an implementation of the Lab Streaming Layer (LSL) protocol. We exemplify the capabilities of this extended OpenSesame platform with an implementation of the three-alternative forced-choice amplitude modulation detection test (3-AFC AMDT) and discuss reliability and performance. The new features are available free of charge from GitHub: https://github.com/elus-om/BRM_OMEXP.
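The amplitude-modulated stimuli at the heart of the 3-AFC AMDT can be sketched in a few lines of NumPy. The function below is illustrative only: the parameter names and default values are our assumptions, not the platform's actual API. It shows how a sinusoidal carrier is modulated at a given depth expressed in dB:

```python
import numpy as np

def am_tone(fc=400.0, fm=5.0, depth_db=-7.0, dur=1.0, fs=44100):
    """Generate a sinusoidally amplitude-modulated tone.

    fc: carrier frequency (Hz), fm: modulation frequency (Hz),
    depth_db: modulation depth in dB (20 * log10(m)), fs: sample rate (Hz).
    """
    t = np.arange(int(dur * fs)) / fs
    m = 10 ** (depth_db / 20)                 # linear modulation index
    envelope = 1.0 + m * np.sin(2 * np.pi * fm * t)
    tone = envelope * np.sin(2 * np.pi * fc * t)
    return tone / np.max(np.abs(tone))        # normalize to avoid clipping
```

A depth of 0 dB corresponds to full modulation (m = 1); more negative depths yield a shallower envelope and a harder detection task.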

Keywords: Cognitive hearing science; Experiment building; Experiment design; OpenSesame.

Conflict of interest statement

The authors declare that they have no conflicts of interest.

© 2022. The Author(s).

Figures

Fig. 1
Visual cues used for the 3-AFC AMDT. Three sound stimuli are played sequentially, and each stimulus is associated with one of the red rectangles displayed on a screen in front of the participant
Fig. 2
3-AFC AMDT logic translated into the experiment structure
Fig. 3
3-AFC AMDT experiment sequence: a collection of actions (plugins) that run in sequential order. Calibration whitenoise, new lsl start 1, new adaptive init, stimulus, and change are the new plugins introduced in this manuscript
Fig. 4
Inline script to generate the sound stimuli
Fig. 5
Mixer plugin
Fig. 6
Adaptive plugin
Fig. 7
Automatic answer generated using an inline script plugin
Fig. 8
The psychometric function used to generate the synthetic responses: the orange circle represents the midpoint of the curve and the blue dashed lines represent the expected convergence depth value of our 3-down 1-up adaptive procedure
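The expected convergence point of a 3-down 1-up rule follows from the transformed up-down logic (Levitt, 1971): the staircase tracks the level at which the probability of three consecutive correct responses equals 0.5, i.e. a per-trial proportion correct of 0.5^(1/3) ≈ 0.794. A minimal check:

```python
# An n-down 1-up staircase converges where a level decrease
# (n correct in a row) is as likely as an increase (one error):
# p**n = 0.5, so p = 0.5 ** (1/n) per trial (Levitt, 1971).
def target_proportion(n_down: int) -> float:
    return 0.5 ** (1.0 / n_down)
```

For example, `target_proportion(3)` gives about 0.794, the proportion-correct point on the psychometric function that the 3-down 1-up track converges to.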
Fig. 9
LSL start plugin
Fig. 10
Visual representation of the sound stimuli and the specific moments at which the LSL markers are recorded. The Audio Mixer plugin plays three different audio stimuli sequentially, depicted as striped boxes. The orange rectangles represent the silent breaks between the three sound stimuli. For each audio stimulus, the lsl message plugin saves an LSL marker at the times when the audio file starts and stops playing
Fig. 11
3-down-1-up procedure results for the first simulated test participant and for the first of the four conditions (frequency = 400 Hz and modulation frequency = 5 Hz)
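A 3-down 1-up track like the one plotted for this simulated participant can be sketched in a few lines. The psychometric function, step size, and stopping rule below are illustrative stand-ins, not the paper's actual settings:

```python
import math
import random

def staircase_3down_1up(true_threshold=-7.36, start=0.0, step=2.0,
                        n_reversals=8, slope=1.0, seed=1):
    """Simulate a 3-down 1-up track against a logistic psychometric
    function rising from 1/3 (3-AFC chance level) to 1.
    All parameter values are illustrative."""
    rng = random.Random(seed)

    def p_correct(level):
        g = 1.0 / (1.0 + math.exp(-(level - true_threshold) / slope))
        return 1.0 / 3.0 + (2.0 / 3.0) * g

    level, run, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if rng.random() < p_correct(level):   # correct response
            run += 1
            if run == 3:                      # three in a row: make harder
                run = 0
                if direction == +1:
                    reversals.append(level)
                direction, level = -1, level - step
        else:                                 # one error: make easier
            run = 0
            if direction == -1:
                reversals.append(level)
            direction, level = +1, level + step
    # threshold estimate: mean of the last reversal levels
    return sum(reversals[-6:]) / len(reversals[-6:])
```

Averaging the final reversal levels yields the depth-threshold estimate; repeating this per condition and per synthetic subject produces validation data of the kind summarized in the next figure.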
Fig. 12
Platform behavior validation of the adaptive plugins. The x-axis represents the 40 tested conditions: four combinations of frequencies and modulation frequencies for each of the ten synthetic subjects. The y-axis shows the depth value range. The black line shows depth_mean(j), where j runs from the 1st to the 40th tested condition, and the error bars represent depth_std(j). The dashed green line gives the expected convergence value (= -7.3622 dB)
Fig. 13
The acquired LSL sound stream: the top panel shows the whole stream, consisting of 144 AM tones; the bottom panel shows the sounds from one trial (3 × 3 audio files)
Fig. 14
Random error distribution for the ten simulated subjects' responses


Source: PubMed
