The Relationship Between Spectral Modulation Detection and Speech Recognition: Adult Versus Pediatric Cochlear Implant Recipients

René H Gifford, Jack H Noble, Stephen M Camarata, Linsey W Sunderhaus, Robert T Dwyer, Benoit M Dawant, Mary S Dietrich, Robert F Labadie

Abstract

Adult cochlear implant (CI) recipients demonstrate a reliable relationship between spectral modulation detection and speech understanding. Prior studies documenting this relationship have focused on postlingually deafened adult CI recipients, leaving an open question regarding the relationship between spectral resolution and speech understanding for adults and children with prelingual onset of deafness. Here, we report CI performance on measures of speech recognition and spectral modulation detection for 578 CI recipients, including 477 postlingual adults, 65 prelingual adults, and 36 prelingual pediatric CI users. The results demonstrated a significant correlation between spectral modulation detection and various measures of speech understanding for the 542 adult CI recipients. For the 36 pediatric CI recipients, however, spectral modulation detection was not significantly correlated with speech understanding in quiet or in noise, nor with listener age or age at implantation. These findings suggest that pediatric CI recipients might not depend upon spectral resolution for speech understanding in the same manner as adult CI recipients. It is possible that pediatric CI users rely on different cues, such as those contained within the temporal envelope, to achieve high levels of speech understanding. Further investigation of the relationship between spectral and temporal resolution and speech recognition is warranted to describe the underlying mechanisms driving peripheral auditory processing in pediatric CI users.
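For readers who want to run this kind of analysis on their own data, the sketch below computes a Pearson correlation between spectral modulation detection and word recognition scores. It is a minimal illustration only: the arrays qsmd and word are simulated placeholders, not the study's data, and NumPy and SciPy are assumed to be available.

    # Minimal, illustrative correlation analysis; the data here are simulated
    # placeholders, not the scores reported in the study.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)

    # Hypothetical percent-correct scores for 40 listeners.
    qsmd = rng.uniform(50, 100, size=40)                             # QSMD score, % correct
    word = np.clip(0.8 * qsmd + rng.normal(0, 10, size=40), 0, 100)  # word recognition, % correct

    r, p = pearsonr(qsmd, word)   # Pearson's r and two-sided p value
    print(f"Pearson r = {r:.2f}, p = {p:.4f}")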

Keywords: cochlear implant; hearing loss; spectral modulation detection; spectral resolution; speech recognition.

Figures

Figure 1.
Individual data for monosyllabic word recognition as a function of spectral modulation detection using the QSMD test, both in percent correct. The vertical dashed line represents chance performance on the QSMD task. Sample sizes for the postlingual adults, prelingual adults, and prelingual pediatric CI recipients are 477, 65, and 36, respectively. Solid gray lines represent the linear regression function for each panel. Pearson’s correlation coefficients and associated p values are displayed in each panel. QSMD = quick spectral modulation detection; RAU = rationalized arcsine units.
Figure 2.
Individual data for sentence recognition, in quiet, as a function of spectral modulation detection using the QSMD test, both in percent correct. The vertical dashed line represents chance performance on the QSMD task. Sample sizes for the postlingual adults, prelingual adults, and prelingual pediatric CI recipients are 456, 59, and 36, respectively. Solid gray lines represent the linear regression function for each panel. Pearson’s correlation coefficients and associated p values are displayed in each panel. QSMD = quick spectral modulation detection; RAU = rationalized arcsine units.
Figure 3.
Individual data for sentence recognition in noise (+5 dB SNR) as a function of spectral modulation detection using the QSMD test, both in percent correct. The vertical dashed line represents chance performance on the QSMD task. Sample sizes for the postlingual adults, prelingual adults, and prelingual pediatric CI recipients are 334, 43, and 22, respectively. Solid gray lines represent the linear regression function for each panel. Pearson’s correlation coefficients and associated p values are displayed in each panel. QSMD = quick spectral modulation detection; RAU = rationalized arcsine units.
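The captions above report speech scores in rationalized arcsine units (RAU) and describe a linear regression line in each panel. As a hedged illustration, the snippet below applies the rationalized arcsine transform in the form commonly attributed to Studebaker (1985) and notes how a per-panel regression line could be fit; the function name and example values are illustrative, not the authors', so verify the constants against the original source before use.

    import numpy as np

    def rau(correct, total):
        # Rationalized arcsine transform, in the form commonly attributed to Studebaker (1985):
        # theta = arcsin(sqrt(x/(n+1))) + arcsin(sqrt((x+1)/(n+1)));  RAU = (146/pi)*theta - 23
        x, n = np.asarray(correct, dtype=float), float(total)
        theta = np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1)))
        return (146.0 / np.pi) * theta - 23.0

    print(rau(40, 50))   # 80% correct on a 50-word list -> about 79 RAU

    # A regression line like the solid gray lines in Figures 1-3 could be fit with,
    # for example: slope, intercept = np.polyfit(qsmd_scores, rau_scores, 1)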


Source: PubMed
