Combining acoustic and electric stimulation in the service of speech recognition

Michael F Dorman, Rene H Gifford

Abstract

The majority of recently implanted cochlear implant patients can potentially benefit from a hearing aid in the ear contralateral to the implant. When patients combine electric and acoustic stimulation (EAS), word recognition in quiet and sentence recognition in noise improve significantly. Several studies suggest that the acoustic information responsible for this improvement resides mostly in the frequency region of the voice fundamental (F0), e.g. 125 Hz for a male voice. Recent studies suggest that this information aids speech recognition in noise by improving the recognition of lexical boundaries or word onsets. In some noise environments, patients with bilateral implants can achieve levels of performance similar to those of patients who combine electric and acoustic stimulation. The best performance in a high-noise environment is achieved by patients who have undergone hearing preservation surgery and who combine electric stimulation from a cochlear implant with low-frequency acoustic hearing in both the implanted and non-implanted ears.
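As a concrete illustration of the kind of acoustic signal at issue, the sketch below low-pass filters a signal to retain roughly the F0 region (e.g. 125 Hz for a male voice). This is a minimal sketch, assuming a Butterworth filter and a 125 Hz cutoff; the filter type, order, and cutoff are illustrative choices, not the exact parameters used in the studies summarized here.

```python
# Minimal sketch: isolating the voice-fundamental (F0) region of a speech
# signal with a low-pass filter. The 125 Hz cutoff and 4th-order Butterworth
# design are illustrative assumptions, not the published parameters.
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_f0_region(signal: np.ndarray, fs: float,
                      cutoff_hz: float = 125.0, order: int = 4) -> np.ndarray:
    """Zero-phase low-pass filter that retains roughly the F0 region."""
    nyquist = fs / 2.0
    b, a = butter(order, cutoff_hz / nyquist, btype="low")
    return filtfilt(b, a, signal)  # filtfilt avoids phase distortion

# Example: one second of a synthetic vowel-like signal sampled at 16 kHz
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 125 * t) + 0.5 * np.sin(2 * np.pi * 250 * t)
f0_band = lowpass_f0_region(speech, fs)  # the 250 Hz component is attenuated
```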

Conflict of interest statement

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.

Figures

Figure 1
Left: audiogram for the contralateral ear of EAS patients. Right: CNC word recognition in acoustic only, electric (CI) only, and combined electric and acoustic (EAS) conditions (from Dorman et al., 2008). A = acoustic stimulation; E = electric stimulation; E + A = electric plus acoustic stimulation. Error bars indicate +1 standard deviation.
Figure 2
Sentence recognition by EAS patients in acoustic only, electric (CI) only, and combined electric and acoustic (EAS) conditions (from Zhang et al., 2010). A = acoustic stimulation; E = electric stimulation; E + A = electric plus acoustic stimulation. Error bars indicate +1 standard deviation.
Figure 3
CNC word recognition (top) and AzBio sentence recognition at +10 dB SNR (bottom) for EAS patients in acoustic alone, electric alone, and EAS conditions. In the acoustic-only and EAS conditions the acoustic signal was either wideband or low-pass (LP) filtered at 125, 250, 500, and 750 Hz. Error bars indicate +1 standard deviation. (From Zhang et al., 2010.)
Figure 4
Sentence recognition in noise for EAS patients in an E-alone condition and in two EAS conditions. In one EAS condition (E+WB), the acoustic signal was wideband. In the other EAS condition (E+Sine), the acoustic signal was an amplitude- and frequency-modulated sine wave that tracked the F0 of the original sentence. Error bars indicate +1 standard deviation. (Adapted from Brown and Bacon, 2009, with permission.)
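The E+Sine signal in Figure 4 can be approximated by synthesizing a sine wave whose instantaneous frequency follows an F0 contour and whose amplitude follows the speech envelope. The sketch below assumes the contour and envelope have already been extracted (one value per sample); this is an illustrative simplification, not Brown and Bacon's exact procedure.

```python
# Minimal sketch of an amplitude- and frequency-modulated sine carrier that
# tracks a given F0 contour, in the spirit of the E+Sine condition. The F0
# and envelope extraction steps are assumed to be done elsewhere.
import numpy as np

def f0_tracking_sine(f0_hz: np.ndarray, envelope: np.ndarray,
                     fs: float) -> np.ndarray:
    """Synthesize an AM/FM sine.

    f0_hz    -- instantaneous F0 in Hz, one value per sample
    envelope -- amplitude envelope, one value per sample
    """
    # Integrate instantaneous frequency (cycles/s) to obtain phase (radians)
    phase = 2.0 * np.pi * np.cumsum(f0_hz) / fs
    return envelope * np.sin(phase)

# Example: a one-second "utterance" whose F0 glides from 120 to 140 Hz
fs = 16000
n = fs
f0 = np.linspace(120.0, 140.0, n)
env = np.hanning(n)                # stand-in for a real speech envelope
carrier = f0_tracking_sine(f0, env, fs)
```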
Figure 5
CNC word recognition by bilateral CI patients (n = 82) and EAS patients (n = 25). The mean score for each group is indicated by a horizontal line.
Figure 6
Threshold (dB SNR) for unilateral, bilateral, and EAS (bimodal) patients tested in the R-SPACE™ environment. Error bars indicate +1 standard deviation.
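The thresholds in Figure 6 are expressed as the signal-to-noise ratio, in dB, at which a criterion level of speech recognition is reached. The sketch below shows what a fixed dB SNR test condition means in signal terms: noise is scaled so that the speech-to-noise power ratio hits a target value. The RMS-based definition is an assumption; the adaptive tracking used to find the threshold in the R-SPACE environment is not modeled here.

```python
# Minimal sketch: mix speech and noise at a target SNR (in dB), using an
# RMS power definition of SNR. This illustrates the test conditions, not
# the adaptive threshold procedure itself.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray,
               snr_db: float) -> np.ndarray:
    """Return speech plus noise scaled to the requested SNR in dB."""
    rms_s = np.sqrt(np.mean(speech ** 2))
    rms_n = np.sqrt(np.mean(noise ** 2))
    # Gain that satisfies 20*log10(rms_s / (gain * rms_n)) == snr_db
    gain = rms_s / (rms_n * 10.0 ** (snr_db / 20.0))
    return speech + gain * noise

# Example: mix at +10 dB SNR, the level used for the AzBio sentences above
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 125 * np.arange(16000) / 16000)
noise = rng.standard_normal(16000)
mixture = mix_at_snr(speech, noise, 10.0)
```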
Figure 7
Top: threshold (dB SNR) for hearing preservation patients (n = 8) with bimodal and combined stimulation. Bottom left: mean pre- and post-implant audiograms for the implanted ear. Bottom right: mean audiogram for the ear contralateral to the implant. Error bars indicate ±1 standard deviation.

Source: PubMed
