Comparing Biofeedback Types for Children With Residual /ɹ/ Errors in American English: A Single-Case Randomization Design

Nina R Benway, Elaine R Hitchcock, Tara McAllister, Graham Tomkins Feeny, Jennifer Hill, Jonathan L Preston

Abstract

Purpose: Research comparing different biofeedback types could lead to individualized treatments for those with residual speech errors. This study examines within-treatment response to ultrasound and visual-acoustic biofeedback, as well as generalization to untrained words, for errors affecting the American English rhotic /ɹ/. We investigated whether some children demonstrated greater improvement in /ɹ/ during ultrasound or visual-acoustic biofeedback. Each participant received both biofeedback types. Individual predictors of treatment response (i.e., age, auditory-perceptual skill, oral somatosensory skill, and growth mindset) were also explored.

Method: Seven children ages 9–16 years with residual rhotic errors participated in 10 treatment visits. Each visit consisted of two conditions: 45 min of ultrasound biofeedback and 45 min of visual-acoustic biofeedback. The order of biofeedback conditions was randomized within a single-case experimental design. Acquisition of /ɹ/ was evaluated through acoustic measurements (normalized F3–F2 difference) of selected nonbiofeedback productions during practice. Generalization of /ɹ/ was evaluated through acoustic measurements and perceptual ratings of pretreatment/posttreatment probes.

Results: Five participants demonstrated acquisition of practiced words during the combined treatment package. Three participants demonstrated a clinically significant degree of generalization to untreated words on posttreatment probes. Randomization tests indicated that one participant demonstrated a significant advantage for visual-acoustic over ultrasound biofeedback. Participants' auditory-perceptual acuity on an /ɹ/–/w/ identification task was identified as a possible correlate of generalization following treatment.

Conclusions: Most participants did not demonstrate a statistically significant difference in acoustic productions between the ultrasound and visual-acoustic conditions, but one participant showed greater improvement in /ɹ/ during visual-acoustic biofeedback.

Supplemental Material: https://doi.org/10.23641/asha.14881101
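The randomization tests mentioned in the Results can be sketched as follows. Because the two biofeedback conditions were randomized within each of the 10 visits (blocks), the condition labels can be exhaustively re-flipped within visits to build a permutation distribution. The per-visit scores below are hypothetical, and the choice of the mean within-visit difference as the test statistic is an illustrative assumption, not the study's exact analysis.

```python
import itertools
import statistics

# Hypothetical per-visit mean normalized F3-F2 scores (lower = more
# adultlike /r/) for the ultrasound (US) and visual-acoustic (VA)
# conditions across 10 visits. These values are invented for illustration.
us = [1.9, 1.7, 1.8, 1.5, 1.6, 1.4, 1.5, 1.3, 1.2, 1.1]
va = [1.8, 1.6, 1.5, 1.3, 1.2, 1.1, 1.0, 0.9, 0.8, 0.7]

# Observed test statistic: mean within-visit difference (US - VA).
observed = statistics.mean(u - v for u, v in zip(us, va))

# Exhaustive randomization test: within each visit (block), the two
# condition labels could have been assigned either way, giving
# 2**10 = 1024 equally likely permutations under the null hypothesis.
count = 0
n_perms = 0
for flips in itertools.product([1, -1], repeat=len(us)):
    stat = statistics.mean(f * (u - v) for f, u, v in zip(flips, us, va))
    if abs(stat) >= abs(observed):
        count += 1
    n_perms += 1

# Two-sided p value: proportion of permutations at least as extreme
# as the observed statistic (the observed assignment is included).
p_value = count / n_perms
```

With only 10 blocks, full enumeration is cheap; for longer designs one would sample random label flips instead of enumerating all of them.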

Figures

Figure 1.
Examples of correct and distorted /ɹ/ during biofeedback. Panels A and B show the linear predictive coding spectrum as seen during treatment. Each image has a template in red representing a “good” /ɹ/ for that individual's age and gender. Panel A shows a perceptually correct /ɹ/, while Panel B shows a distorted /ɹ/ (third formant is too high). Panels C and D show the ultrasound display as seen during treatment, with the white line showing the dorsal surface of the tongue. The tongue shape template is not visible in these images. Panel C shows a perceptually correct “bunched” /ɹ/ (a high tongue blade, a low tongue body [dorsum], and tongue root retraction). Panel D shows a distorted /ɹ/ with a high tongue body (dorsum).
Figure 2.
Study design and treatment methodology. Panel A shows the rapidly alternating single-case randomized block design and a hypothetical condition order for illustrative purposes. For each participant, the order of biofeedback presentation was randomized to the two treatment conditions within each of the 10 visits (i.e., the statistical blocking unit). Research Question 1 concerned the between-series, within-subject comparison of performance on trained words to measure if some subjects demonstrated greater motor acquisition to one biofeedback condition or the other. Research Question 2 compared performance on untrained words during the three pretreatment evaluation probes to the three posttreatment probes to measure generalization following the combined treatment program. Panel B shows the structure of the treatment program, along with group-level averages of achieved dosage. US = ultrasound; VA = visual-acoustic; KP = knowledge of performance; KR = knowledge of results; NF = No feedback.
Figure 3.
Time series line graphs comparing the normalized within-condition F3–F2 distance for each subject. Perceptually correct /ɹ/ productions have lower F3–F2 values. F3–F2 distance is measured in Hertz, but as a z-standardized score, the y-axis for age- and gender-normalized F3–F2 is unitless. Test statistics and randomization test p values are provided for each subject. US = ultrasound; VA = visual-acoustic.
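The age- and gender-normalized F3–F2 measure described in the caption is a z-score, which might be computed along the following lines. The normative mean and standard deviation used here are placeholder values for illustration, not the study's actual normative data.

```python
# Sketch of z-standardizing the F3-F2 distance against age/gender norms.
# The reference mean and SD below are illustrative placeholders.
def normalize_f3_f2(f3_hz, f2_hz, norm_mean_hz, norm_sd_hz):
    """Return a unitless z-score of the F3-F2 distance (in Hz)."""
    return ((f3_hz - f2_hz) - norm_mean_hz) / norm_sd_hz

# A distorted /r/ keeps F3 high, so the F3-F2 distance stays large and
# the z-score is high; a correct /r/ lowers F3 toward F2, pushing it down.
z_distorted = normalize_f3_f2(3400, 1800, 700, 300)  # (1600 - 700) / 300 = 3.0
z_correct = normalize_f3_f2(2000, 1700, 700, 300)    # (300 - 700) / 300 < 0
```

Because the score is standardized, values from speakers of different ages and genders can be plotted on the same unitless y-axis, as in Figures 3 through 5.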
Figure 4.
Participants demonstrating significant generalization from pretreatment to posttreatment. Graphs represent change from baseline to posttreatment on acoustic and perceptual measures for three participants judged to demonstrate generalization to untrained words. The coefficients and significance of participant-level random slopes are provided, as well as effect size measures of pre-to-post change. For acoustic response (left side), lower plotted values represent a more adultlike production. F3–F2 distance is measured in Hertz, but as a z-standardized score, the y-axis for age- and gender-normalized F3–F2 is unitless. The perceptual ratings (right side) reflect the average of three listeners' perceptual judgments: 1 = unanimously correct, 0 = unanimously incorrect. Bars represent standard deviations. Perceptual points have been jittered to prevent overlapping points from obscuring the data.
Figure 5.
Participants not demonstrating generalization from pretreatment to posttreatment. Graphs represent change from baseline to posttreatment on acoustic and perceptual measures for four participants judged to show no generalization to untrained words. The coefficients and significance of participant-level random slopes are provided, as well as effect size measures of pre-to-post change. For acoustic response (left side), lower plotted values represent a more adultlike production. F3–F2 distance is measured in Hertz, but as a z-standardized score, the y-axis for age- and gender-normalized F3–F2 is unitless. The perceptual ratings (right side) reflect the average of three listeners' perceptual judgments: 1 = unanimously correct, 0 = unanimously incorrect. Bars represent standard deviations. Perceptual points have been jittered to prevent overlapping points from obscuring the data.

Source: PubMed
