Development of an audiovisual speech perception app for children with autism spectrum disorders

Julia Irwin, Jonathan Preston, Lawrence Brancazio, Michael D'angelo, Jacqueline Turcios

Abstract

Perception of spoken language requires attention to acoustic as well as visible phonetic information. This article reviews known differences in audiovisual speech perception in children with autism spectrum disorders (ASD) and specifies the need for interventions that address this construct. Elements of an audiovisual training program are described. This researcher-developed program, delivered via an iPad app, presents natural speech in increasing levels of background noise, supported by video of a speaking face. Children are cued to attend to visible articulatory information to assist in perception of the spoken words. Data from four children with ASD, ages 8-10, are presented, showing that the children improved their performance on an untrained auditory speech-in-noise task.

Keywords: Audiovisual app; autism spectrum disorder; speech perception.

Figures

Figure 1
Panel A: Image of the iPad display as seen by the child participant. Along the top panel are the images for the monosyllabic words: shock, thumb, fox, and socks. Below the imaged words is a video of the speaker who produced the target word fox. The frame presented depicts the labiodental contact for the /f/, which helps to visually discriminate the word ‘‘fox’’ from the other words. To the left of the panel is a progress bar to show the child how well he or she is progressing through the task. Panel B: Image of the iPad display after the child participant’s response. Along the top panel is the same set of monosyllabic words: shock, thumb, fox, socks. In this case, the child incorrectly chose the imaged word socks (shown rightmost, with an X overlaid across the image of socks). Next to the incorrect choice is the target word fox (overlaid with a smiling face). Due to the error, the participant receives the feedback ‘‘Look at the mouth,’’ drawing attention to the mouth of the speaker. To the left of the panel is a progress bar to show the child how well he or she is progressing through the task.
Figure 2
Mean scores on the Auditory Noise Assessment (ANA) at four time points (Time 1, Time 2, Time 3, and Time 4). Each mean comprises at least three consecutive administrations of the ANA. Error bars represent standard deviations.

Source: PubMed