2014 AG Bell Research Symposium


2014 Research Symposium Sunday, June 29 8:00 a.m. – 11:30 a.m. Walt Disney World Swan and Dolphin Orlando, Florida

MAXIMIZING BRAIN ADAPTABILITY Enhancing Listening for Language Development, Speech Perception and Music Appreciation



IMPROVING AUDIBILITY: THE FOUNDATION FOR SPEECH UNDERSTANDING Pamela Souza, Ph.D., Northwestern University, School of Communication


IMPROVING AUDITORY SKILLS THROUGH TRAINING Beverly Wright, Ph.D., Northwestern University, School of Communication

SPOKEN LANGUAGE DEVELOPMENT IN CHILDREN RECEIVING COCHLEAR IMPLANTS Emily Tobey, Ph.D., University of Texas at Dallas, School of Behavioral and Brain Sciences


MUSIC ENJOYMENT AND COCHLEAR IMPLANT RECIPIENTS: OVERCOMING OBSTACLES AND HARNESSING CAPABILITIES Kate Gfeller, Ph.D., University of Iowa, School of Music

MODERATOR
Lyn Robertson, Ph.D.

Lyn Robertson is Associate Professor Emerita in the Department of Education at Denison University in Granville, Ohio, where she began teaching in 1979. She is the former Director of the J. W. Alford Center for Service Learning at Denison. She began her career teaching seventh grade English, where she encountered students reading and writing at low levels; this led her to extensive study of literacy, particularly within linguistic, cognitive, and social frameworks. Robertson is the immediate past president of the AG Bell Academy for Listening and Spoken Language.

This guide is provided as an informational piece only. The contents are presented with no warranty expressed or implied by AG Bell, the presenters or the person(s) making it available as a resource. No legal responsibility is assumed for the accuracy of the information contained herein or the outcome of decisions, contracts, commitments, or obligations made on the basis of this information. This document may not be reproduced without permission. For additional copies of this publication, contact AG Bell at editor@agbell.org.

Funding for the conference was made possible (in part) by grant number R13DC010951 from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health. The views expressed in written conference materials or publications and by speakers and moderators do not necessarily reflect the official policies of the Department of Health and Human Services; nor does mention of trade names, commercial practices, or organizations imply endorsement by the U.S. Government. © 2014 Alexander Graham Bell Association for the Deaf and Hard of Hearing Alexander Graham Bell Association for the Deaf and Hard of Hearing 3417 Volta Place, NW Washington, DC 20007 (202) 337-5220 ListeningandSpokenLanguage.org



Improving Audibility: The Foundation for Speech Understanding
PAMELA SOUZA, PH.D.
NORTHWESTERN UNIVERSITY, SCHOOL OF COMMUNICATION

Every conversation has two parts—the talker and the listener—and both have important roles. Speech audibility is the essential component of hearing well. Although audibility may not guarantee good speech understanding, poor audibility always compromises speech understanding regardless of hearing ability. Many aspects of audibility can be optimized to improve speech understanding for individuals with hearing loss. To understand the role of audibility, the acoustic information conveyed by the talker is an important consideration.

Anyone who has a communication partner with hearing loss recognizes that hearing loss causes communication difficulties and challenges. The individual with hearing loss may miss portions of the conversation, confuse one word with another, or be slow to recognize that s/he is being spoken to and to focus attention on the speaker. It is common to attribute all of these problems to the hearing loss. But every conversation has two parts—the talker and the listener—and both have important roles.

To understand the role of audibility, we first consider the acoustic information conveyed by the talker. Speech is a marvelously complex signal that begins with vibration of the vocal folds and is shaped by the position of the lips and tongue (the fastest-moving muscle in the human body). The result is a sequence of sound pressure changes that varies from short-duration sounds to long-duration sounds; from high frequency (pitch) to low frequency; and from high-intensity (louder) sound to lower-intensity (softer) sound. Figure 1 shows a spectrogram, a visual representation of how sound varies in frequency and loudness. Darker areas show more intense sound. Even a single speaker who maintains a constant vocal level and speaking rate will produce sound that varies in frequency (pitch) by about 5000 Hz and varies in loudness by about 30 dB. In general, consonants are more difficult to hear because they are usually of lower intensity and shorter duration than vowels. The consonant "s" produced by a child, for example, may be as high as 7000 Hz (Pittman, Stelmachowicz, Lewis, & Hoover, 2003), while the vowel "oo" produced by an adult can be as low as 200 Hz (Bor, Souza, & Wright, 2008). When we consider that we produce about 160 words a minute (Yuan, Liberman, & Cieri, 2006) and that our voices can range in level from a whisper to a shout, it is clear that we need to consider communication in the context of a rich soundscape.

Fortunately, our ears (and brains) are specialized for listening. In fact, in a normally functioning cochlea the outer hair cells work as an amplifier to enhance the signal produced by soft speech. In a quiet situation with a clear speaker and a listener with sensitive hearing, audibility is high and listening is automatic and effortless.
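For readers who want to produce a picture like Figure 1, a spectrogram can be computed with standard signal-processing tools. The sketch below is only an illustration (it is not the software used for the figure), and the file name speech.wav and the analysis settings are assumptions:

```python
# A minimal sketch (not the tool used for Figure 1): compute and display a
# spectrogram of a speech recording. Assumes a standard PCM WAV file named
# "speech.wav"; that file name is a placeholder.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("speech.wav")           # sampling rate (Hz) and samples
if x.ndim > 1:                               # keep one channel if stereo
    x = x[:, 0]
x = x.astype(float)

# Short-time Fourier analysis: ~25-ms windows show both the vocal-fold
# striations (time detail) and the broader spectral shape of speech.
f, t, Sxx = spectrogram(x, fs=fs, nperseg=int(0.025 * fs))

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto", cmap="gray_r")
plt.ylim(0, 5000)                            # frequency range of most speech energy
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram: darker regions are more intense")
plt.show()
```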

SPEECH AUDIBILITY AND THE AUDIOGRAM

Now consider the effect of hearing loss. The threshold of hearing is reported on an audiogram in "decibels hearing level," or dB HL, at each frequency. Only the sounds that are audible (that are at a level above the listener's hearing threshold) will be heard. In this scheme, a typical listener with sensitive hearing might have a hearing threshold of 20 dB HL; larger values reflect poorer thresholds. Hearing loss is usually due to loss of cochlear hair cells and the resulting failure of the auditory system to enhance soft sounds (via the outer hair cells) or even to transmit any sound (via the inner hair cells) onward along the neural pathways to the auditory cortex. When the listener and speaker face one another at a convenient distance (let's assume this is about 3-4 feet), and the speaker talks in a conversational-level voice, that speech might reach the listener's ear at a level of 45 dB HL. When only some frequencies are audible, the end result is not that nothing is heard, but that speech sounds garbled or unclear. Figure 2 illustrates this concept for a listener with high-frequency hearing loss presented with conversational-level speech. More sounds will be inaudible if the talker is speaking more softly (imagine the speech sounds moving upward on the graph) or if the listener has more hearing loss (imagine the thick solid line moving downward on the graph). In both cases, speech understanding would decrease.
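As a rough, numbers-only illustration of audibility (not a clinical calculation), the sketch below marks a speech sound as audible in a frequency band only when its level exceeds the listener's threshold there. Every value is invented to mimic a sloping high-frequency loss with conversational-level speech:

```python
# Toy illustration of audibility: a speech sound counts as audible in a band
# only if its level (dB HL) exceeds the listener's threshold at that frequency.
# All numbers below are hypothetical, not measurements from Figure 2.

speech_db_hl = {250: 45, 500: 50, 1000: 45, 2000: 40, 4000: 30, 8000: 25}
thresholds_db_hl = {250: 20, 500: 25, 1000: 35, 2000: 55, 4000: 65, 8000: 70}

for freq in sorted(speech_db_hl):
    speech = speech_db_hl[freq]
    threshold = thresholds_db_hl[freq]
    audible = speech > threshold
    print(f"{freq:5d} Hz: speech {speech} dB HL, threshold {threshold} dB HL "
          f"-> {'audible' if audible else 'inaudible'}")

# With this hypothetical audiogram the low-frequency bands remain audible while
# the high-frequency bands (where many consonants live) drop out, which is why
# speech sounds present but unclear rather than simply quiet.
```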

0.9453

0.2109 0

-0.8203 5000 Hz

5000 Hz

0Hz

1.051995 0

104.8Hz 75Hz

1.051995 Visible part 2.103991 seconds Total duration 2.103991 seconds

2.103991

Figure 1 The top graph shows how the speech waveform varies in level over time. The individual striations indicate vibrations of the vocal folds. The lower graph shows a spectrogram, with time from left to right and frequency from top to bottom. Darker colors show higher-intensity parts of speech.



Figure 2 This audiogram illustrates how hearing loss can cause some sounds (sounds falling above the solid dark line) to be inaudible. When only some speech sounds are audible, speech may sound distorted or unclear. Noise also reduces sound audibility. In this example, only the sounds falling below the noise line and the audiogram line will be heard.


[Figure 2 is an audiogram: frequency in Hz (125 to 8000) runs from left to right and hearing threshold level in decibels (0 to 110) runs from top to bottom. Familiar speech sounds are plotted at their typical levels and frequencies, with one line marking where hearing loss limits audibility and another marking where noise limits audibility.]

REAL LISTENING, DISTANCE, AND BACKGROUND NOISE

So far, we've considered the variation in speech levels and the effects of hearing loss on speech audibility. But that assumes a talker in a quiet room at a reasonable distance from the listener. Real-life conversations don't always occur in this way. How many times have you asked someone to do something while turned away from her/him or while speaking from another room? Speech energy decreases by about 6 dB each time the distance between the talker and listener is doubled. A conversation that would be 45 dB HL at a distance of 3 feet drops to 39 dB when the talker is 6 feet away, and to 33 dB when the talker is 12 feet away. Objects between the talker and listener may block sound energy. When the talker is distant from the listener, the signal the listener hears may be dominated by reverberation (sound reflections off hard walls and floors), which overwhelms the direct sound energy from the talker.

Still, listening in quiet is a relatively straightforward situation: speech audibility depends on the level of the talker's voice, the level of the listener's hearing, the distance between them, and any intervening objects or reflections that interfere with the speech.
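The distance arithmetic above follows directly from the 6-dB-per-doubling rule; a quick sketch (free-field only, ignoring the reverberation just discussed):

```python
import math

def speech_level_db(level_at_ref_db, ref_distance_ft, distance_ft):
    """Free-field estimate: level drops about 6 dB per doubling of distance."""
    return level_at_ref_db - 6.0 * math.log2(distance_ft / ref_distance_ft)

for d in (3, 6, 12, 24):
    print(f"{d:2d} ft: {speech_level_db(45, 3, d):.0f} dB HL")
# Prints 45, 39, 33, and 27 dB HL, matching the rule of thumb in the text.
```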

The most common complaint from hearing aid wearers is not speech in quiet listening environments, but difficulty listening in background noise (Kochkin, 2009). Noise comes in many forms. Noise that is distinct in frequency and timing from speech—such as a high-frequency fan or low-frequency motor noise—is relatively easy to deal with. Because there is less frequency overlap with speech, our ears and brains are quite good at sorting out and ignoring those types of noise. Noise that overlaps in frequency and timing patterns with the target—such as other talkers in the background—presents a much greater challenge. Typically there is some overlap in frequency between the foreground and background talkers, energetically masking the speech the listener wishes to hear. Energetic masking reduces audibility because the speech is partially obscured by the noise. To hear better, we must increase the level of the speech or decrease the level of the noise. When there is background noise, speech audibility depends on the level of the talker's voice, the level of the listener's hearing, the distance between them, any intervening objects or reflections, and the amount and type of background noise.

If the level of the background noise drops—as when a background talker pauses between words or sentences—energetic masking diminishes briefly and the listener can "glimpse" a bit of the target speech (Rosen, Souza, Ekelund, & Majeed, 2013). For reasons not fully understood, many listeners with hearing loss have trouble glimpsing speech (Bernstein & Grant, 2009)—either they are not able to extract audible segments of the target speech during moments of lower background noise levels, or they are not able to assemble those disconnected glimpses into a meaningful stream of information. For that reason, a complete audiometric exam should include tests of speech understanding in background noise as well as in quiet.

When the foreground and background speech do not overlap in time and frequency, we might expect that audibility will be high and speech understanding will be easier. However, even when energetic masking is low (and audibility should be high), background speech can cause informational masking when the listener can't distinguish between two streams of meaningful information (Brungart, 2001). Most everyday environments include noise that causes a combination of informational and energetic masking. As the number of talkers in the background increases, more energetic masking will occur (although informational masking may decrease slightly, because the background becomes a mass of sound with a different percept than the foreground speech) (Rosen et al., 2013). In situations involving energetic and/or informational masking, the listener must expend more effort to piece together a deficient speech signal and obtain its meaning (Rönnberg et al., 2013). This is one reason why many individuals with hearing loss report end-of-day fatigue (Hornsby, 2013; Hornsby, Werfel, Camarata, & Bess, 2013).
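The idea of glimpsing can be illustrated with a toy calculation: count the moments when a fluctuating background dips far enough below the speech to expose a fragment. The frame values and the 3-dB criterion below are invented, and this is not the analysis used in the cited studies:

```python
# Toy glimpsing demo: frame-by-frame speech and noise levels (dB) are invented;
# a "glimpse" is any frame where the signal-to-noise ratio exceeds a criterion.
speech_db = [62, 60, 64, 61, 63, 62, 60, 65, 61, 62]
noise_db  = [66, 58, 67, 55, 66, 65, 54, 66, 57, 65]   # fluctuating babble

criterion_db = 3          # assumed SNR needed to catch a fragment of speech
glimpses = [s - n >= criterion_db for s, n in zip(speech_db, noise_db)]

print("SNR per frame:", [s - n for s, n in zip(speech_db, noise_db)])
print(f"Glimpsed {sum(glimpses)} of {len(glimpses)} frames")
# A listener who cannot exploit these brief favorable frames, or cannot stitch
# the fragments together, loses the benefit of the dips in the background.
```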



HEARING AIDS AND SPEECH AUDIBILITY

Hearing aids are expected to improve communication in cases of permanent or untreatable hearing loss. But the primary function of hearing aids is to improve speech audibility. Hearing aids apply gain that varies according to the incoming sound level: the greatest gain is applied to the softest inputs and the least to loud inputs. Modern hearing aids also vary gain across frequency, essentially mirroring the audiogram to make sound both audible and comfortable. A maximum output is programmed into the hearing aid to prevent sound from becoming uncomfortably loud. Because sound is constantly varying in level and frequency, the hearing aid must also monitor and adjust its gain over time. The end goal is to amplify speech in such a way that soft sounds are audible and loud sounds are below the wearer's loudness discomfort level. Such multichannel compression hearing aids have been shown to provide better loudness comfort and speech understanding in situations where the level of speech varies from soft to loud (Souza, 2003). Because multichannel compression aids are self-adjusting, they reduce or eliminate the need for the listener to adjust volume. This is essential for children who cannot adjust their own volume control (Jenstad, Seewald, Cornelisse, & Shantz, 1999) and convenient for everyone else.

Will improving audibility lead to improved speech understanding? The answer is yes, but with some qualifications. Certainly, audibility is the foundation of speech understanding. Research studies which quantify audibility show that for a 10% improvement in audibility, we can expect a 20% improvement in speech understanding (McCreery & Stelmachowicz, 2011; Souza, Boike, Witherell, & Tremblay, 2007). Some listeners may require even higher audibility to achieve good speech understanding. In particular, children seem to need greater speech audibility compared to adults (McCreery & Stelmachowicz, 2011), and that factor is incorporated in some pediatric hearing aid fitting procedures (Scollie et al., 2005).
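The level-dependent gain described above can be sketched as a simple input-output rule. This is a generic illustration with invented parameter values (kneepoint, compression ratio, maximum output), not a prescriptive fitting formula or any manufacturer's algorithm:

```python
def wdrc_output_db(input_db, linear_gain_db=25.0, kneepoint_db=45.0,
                   compression_ratio=2.0, max_output_db=105.0):
    """Toy single-channel input-output function for a compression hearing aid."""
    if input_db <= kneepoint_db:
        output = input_db + linear_gain_db               # soft sounds: full gain
    else:
        # Above the kneepoint, output grows by only 1/CR dB per dB of input,
        # so gain shrinks as the input gets louder.
        output_at_knee = kneepoint_db + linear_gain_db
        output = output_at_knee + (input_db - kneepoint_db) / compression_ratio
    return min(output, max_output_db)                    # output limiting

for level in (30, 45, 60, 75, 90):
    out = wdrc_output_db(level)
    print(f"input {level:2d} dB -> output {out:5.1f} dB (gain {out - level:4.1f} dB)")
```

A real multichannel aid applies a rule like this separately in each frequency band, which is how the gain can also mirror the shape of the audiogram.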


Two issues may modify this relationship and prevent full speech understanding, even when audibility is high. The first issue is that predictions of the relationship between audibility and understanding are based on averages across large groups of people. Many listeners with hearing loss cannot take full advantage of improved audibility because the frequency and timing aspects of audible sound are not clearly transmitted by their auditory system. Most often this occurs when outer hair cell damage results in broader auditory filters. In that case, the system transmits sounds that are actually different in frequency as though they have the same frequency information. The cochlear filter is basically a sorting device, and things that aren't properly sorted can't be distinguished from each other. Picture an old-fashioned coin counter, where nickels, dimes, and quarters are each supposed to fall within their own compartment according to size, but where the dime compartment is too large and some nickels get mixed in. We will no longer know how many dimes we have or how many nickels; nor will we know how much money we have altogether. In auditory terms, poor frequency selectivity means that two sounds that are similar in pitch (for example, "p" and "t") may produce identical representations and patterns of stimulation within the cochlea. The words "pop" and "top" will sound the same, and speech understanding will be reduced even in situations where audibility is good.

A less common cause of poor transmission of pitch or timing information through the cochlea is an area of sparse or missing inner hair cells (Moore, Huss, Vickers, Glasberg, & Alcantara, 2000). Because hair cells are specialized to receive particular frequencies, missing hair cells create a situation that is like transmitting a radio signal on a particular frequency and trying to pick it up with a receiver tuned to a different frequency. You may get some signal, but it will be garbled. Even the highest-quality hearing aid produces amplified sound that must pass through a damaged auditory system, where some distortion can be added. That means that the benefit of any hearing aid is a combination of the device itself and the listener's auditory abilities.

The second issue related to using hearing aids to improve audibility is the challenge of background noise. When masking noise limits audibility, we would ideally amplify the speech without amplifying the noise. Hearing aids do their best in this regard. Directional microphones can be used to raise the level of sound from the front above the level of the surrounding noise. Digital noise reduction attempts to "recognize" noise (based on noise being distinct from speech in frequency or modulation pattern) and digitally suppress it. But when two people are speaking at the same time at a similar volume, and a listener with hearing aid(s) is trying to hear one over the other, the hearing aid has no way to determine which person the listener is trying to hear. Consequently, both speakers' voices are amplified together, and audibility of the voice of interest does not improve. In those cases, we can improve audibility by overcoming distance—giving the talker a remote microphone. The signal from that microphone can be transmitted directly to the listener's hearing aid, which results in improved speech audibility.

IMPROVING AUDIBILITY THROUGH GOOD COMMUNICATION

Speech audibility is the essential component of hearing well. Although audibility may not guarantee good speech understanding, poor audibility always degrades speech understanding. Some aspects of audibility—like the amount of hearing loss—are out of our control, but many aspects are amenable to adjustment. Audibility can be maximized if:

• The talker faces the listener and speaks clearly. As a talker, it's not necessary to shout, but avoid mumbling, looking away, or placing objects in front of your mouth; that impedes sound transmission and also makes it more difficult to use visual cues to speech.

• The listener is close to the talker. This prevents the fall-off of speech levels due to distance and minimizes reverberation.


• Background noise is eliminated or reduced whenever possible.

• Appropriately fitted hearing aids are used. When there is hearing loss in both ears, binaural hearing aids provide the best audibility. Audibility should be assessed at the time of the hearing aid fitting by completing a probe microphone test (sometimes called a "real ear" test), or an equivalent "real ear to coupler" test, which uses a conversion to predict the response in the ear from measurements made outside of the ear using special test equipment (Munro & Hatton, 2000). This test ensures that the levels of sound the listener is receiving through the hearing aid are appropriate. If speech is not sufficiently audible or is uncomfortable, directed adjustments can be made. A recent survey (Kochkin, 2011) found that patients who received probe microphone testing had better hearing aid outcomes and greater satisfaction than those who didn't.

• The audibility of the speech relative to background noise is improved by using directional hearing aids and appropriate positioning. Listeners should try to sit or stand so that noise sources are behind them and they are facing the talker. In very noisy situations, consider a remote microphone that transmits the talker's voice to the listener. Such systems are very helpful in poor acoustic environments, including restaurants and classrooms (Anderson & Goldstein, 2004; Thibodeau, 2010).

ACKNOWLEDGMENTS

The author's research is supported by the National Institutes of Health (R01 DC60014, R01 DC12289).

REFERENCES

Anderson, K. L., & Goldstein, H. (2004). Speech perception benefits of FM and infrared devices to children with hearing aids in a typical classroom. Language, Speech and Hearing Services in Schools, 35(2), 169-184.
Bernstein, J. G., & Grant, K. W. (2009). Auditory and auditory-visual intelligibility of speech in fluctuating maskers for normal-hearing and hearing-impaired listeners. Journal of the Acoustical Society of America, 125(5), 3358-3372.
Bor, S., Souza, P., & Wright, R. (2008). Multichannel compression: Effects of reduced spectral contrast on vowel identification. Journal of Speech, Language, and Hearing Research, 51(5), 1315-1327.
Brungart, D. S. (2001). Informational and energetic masking effects in the perception of two simultaneous talkers. Journal of the Acoustical Society of America, 109(3), 1101-1109.
Hornsby, B. W. (2013). The effects of hearing aid use on listening effort and mental fatigue associated with sustained speech processing demands. Ear and Hearing, 34(5), 523-534.
Hornsby, B. W., Werfel, K., Camarata, S., & Bess, F. H. (2013). Subjective fatigue in children with hearing loss: Some preliminary findings. American Journal of Audiology. Epub ahead of print. doi: 10.1044/1059-0889(2013/13-0017)
Jenstad, L. M., Seewald, R., Cornelisse, L., & Shantz, J. (1999). Comparison of linear gain and wide dynamic range compression hearing aid circuits: Aided speech perception measures. Ear and Hearing, 20(2), 117-126.
Kochkin, S. (2009). MarkeTrak VIII: 25-year trends in the hearing health market. Hearing Review, 16(11), 12-31.
Kochkin, S. (2011). MarkeTrak VII: Patients report improved quality of life with hearing aid usage. Hearing Journal, 64(6), 25-32.
McCreery, R. W., & Stelmachowicz, P. G. (2011). Audibility-based predictions of speech recognition for children and adults with normal hearing. Journal of the Acoustical Society of America, 130(6), 4070-4081.
Moore, B. C., Huss, M., Vickers, D. A., Glasberg, B. R., & Alcantara, J. (2000). A test for the diagnosis of dead regions in the cochlea. British Journal of Audiology, 34(4), 205-224.
Munro, K. J., & Hatton, N. (2000). Customized acoustic transform functions and their accuracy at predicting real-ear hearing aid performance. Ear and Hearing, 21(4), 59-69.
Pittman, A. L., Stelmachowicz, P. G., Lewis, D. E., & Hoover, B. M. (2003). Spectral characteristics of speech at the ear: Implications for amplification in children. Journal of Speech, Language, and Hearing Research, 46(3), 649-657.
Rönnberg, J., Lunner, T., Zekveld, A., Sörqvist, P., Danielsson, H., Lyxell, B., et al. (2013). The Ease of Language Understanding (ELU) model: Theoretical, empirical, and clinical advances. Frontiers in Systems Neuroscience, 7, 31. doi: 10.3389/fnsys.2013.00031
Rosen, S., Souza, P., Ekelund, C., & Majeed, A. (2013). Listening to speech in a background of other talkers: Effects of talker number and noise vocoding. Journal of the Acoustical Society of America, 133(4), 2431-2443.
Scollie, S., Seewald, R., Cornelisse, L., Moodie, S., Bagatto, M., Laurnagaray, D., et al. (2005). The Desired Sensation Level multistage input/output algorithm. Trends in Amplification, 9(4), 159-197.
Souza, P. (2003). Effects of compression on speech acoustics, intelligibility and speech quality. Trends in Amplification, 6, 131-165.
Souza, P., Boike, K. T., Witherell, K., & Tremblay, K. (2007). Prediction of speech recognition from audibility in older listeners with hearing loss: effects of age, amplification, and background noise. Journal of the American Academy of Audiology, 18(1), 54-65.
Thibodeau, L. (2010). Benefits of adaptive FM systems on speech recognition in noise for listeners who use hearing aids. American Journal of Audiology, 19(1), 36-45.
Yuan, J., Liberman, M., & Cieri, C. (2006). Towards an integrated understanding of speaking rate in conversations. Paper presented at the International Conference on Spoken Language Processing, Pittsburgh, PA.

Pamela Souza, Ph.D., is a professor and director of the Hearing Aid Laboratory at Northwestern University. She received her B.S. from the University of Massachusetts at Amherst and her M.S. and Ph.D. in Audiology from Syracuse University. Throughout her career she has combined research and teaching with clinical practice, and has worked with patients ranging from infants to older adults. Her research interests include factors which improve or degrade speech understanding; how different types of signal processing used in hearing aids interact with listener age and cognitive status; and how research in these areas can improve communication and direct clinical practice. Souza is a Fellow of the American Speech-Language-Hearing Association. Her research is supported by the National Institute on Deafness and Other Communication Disorders, National Institutes of Health.



Improving Auditory Skills through Training
BEVERLY WRIGHT, PH.D.
NORTHWESTERN UNIVERSITY, SCHOOL OF COMMUNICATION

Hearing abilities improve with practice, showing that the auditory system is not rigid but rather can be changed through experience. Recent research has provided new insights into the factors that facilitate such learning. Knowledge of these factors will lead to more effective training strategies to help restore auditory abilities in people with hearing loss as well as to enhance auditory skills in people with typical hearing.

INTRODUCTION

It is often thought that our senses cannot be modified: that we see only as well as our eyes, or our glasses plus our eyes, manage to bring an image into focus on our retinas, or that we hear only as well as our ears, or our hearing aids plus our ears, manage to stimulate the cochlea appropriately. However, this is not the case. Perceptual abilities in all of the senses can be improved through practice (Peron & Allen, 1988; Royet, Plailly, Saive, Veyrac, & Delon-Martin, 2013; Sagi, 2011; Wong, Peters, & Goldreich, 2013; Wright & Zhang, 2009). This learning, called perceptual learning, has considerable practical value for people with hearing loss, because it provides a means to moderate the impact of that loss. Perceptual training can help to make the most of what remains.

My colleagues and I investigate the circumstances that enable perceptual learning in hearing. We seek a better understanding of the learning process because it would lead to the development of more effective and efficient auditory training regimens. While we are ultimately interested in improvement of practical skills, such as understanding speech, our experiments focus instead on learning of quite basic hearing abilities, such as the ability to distinguish which of two tones has a higher pitch or is longer in duration. Because these abilities are so basic, we can concentrate on the learning process itself. We think this is a fruitful approach because we have data suggesting that key aspects of the learning process first identified through examinations of basic auditory abilities also apply to more complex auditory cases, and even to learning in other modalities. Here we outline four tentative principles of auditory perceptual learning and then briefly discuss what those principles imply about the learning process and how best to implement auditory training regimens.

Figure 1 Schematic diagrams of the frequency-discrimination (A) and duration-discrimination (B) tasks. For both tasks, on each trial, listeners were presented with two brief tones during each of two observation periods. A standard sound was presented in one observation period and a comparison sound in the other, with the order of the two selected randomly. The standard sound was the same for both tasks (two 15-ms 1-kHz tones separated by 100 ms), but the comparison sound had a lower frequency in the frequency task and a longer duration in the duration task. The listener selected the comparison sound (lower frequency or longer interval). [Figure from (Wright et al., 2010).]

TENTATIVE PRINCIPLES OF AUDITORY PERCEPTUAL LEARNING

Just Do It

If the goal is to improve a particular auditory skill, it is generally necessary to practice that skill; mere exposure to the relevant sounds is not enough. For example, listeners who practiced making judgments about which of two pairs of tones had a lower frequency (pitch) (Figure 1A) showed clear improvements in that

ability—they could hear smaller differences in frequency between the two tone pairs after training (Figure 2B) (Wright & Sabin, 2007; Wright, Sabin, Zhang, Marrone, & Fitzgerald, 2010). Yet, after that learning, those same listeners were no better at judging which of two pairs of tones was separated by a longer duration (Figure 1B), even though one of the tone pairs was the same one that they had heard throughout the training on the frequency-discrimination task (data not shown). Thus, the practice on the frequency task did not aid performance on the duration-discrimination task. Likewise, listeners who practiced the duration task improved on that task (data not shown) (Wright & Sabin, 2007), but after that learning were no better at the frequency task (Figure 2C) (Wright et al., 2010) than listeners who participated in the pre-training and post-training tests but received no training in between (Figure 2A)


(Wright et al., 2010). This lack of generalization from the frequency task to the duration task, or vice versa, suggests that auditory perceptual learning requires practice of the skill to be learned. There are other similar demonstrations in auditory learning (Demany & Semal, 2002; Fitzgerald & Wright, 2011; Mossbridge, Fitzgerald, O'Connor, & Wright, 2006; van Wassenhove & Nagarajan, 2007), and numerous examples in visual learning (Ahissar & Hochstein, 1993; Crist, Kapadia, Westheimer, & Gilbert, 1997; Fahle, 1997; Karni & Sagi, 1991; Levi & Polat, 1996; Meinhardt, 2002; Shiu & Pashler, 1992).

Practice, Practice, Practice

It also appears that for perceptual improvements to last, or even to increase, across multiple days, sufficient training is needed each day. The initial evidence for this requirement in auditory learning arose from the observation that listeners who practiced discriminating between the frequencies of two tones 360 times (trials) per day for approximately 6 days did not improve on that task (Figure 2D), while listeners who practiced for 900 trials per day did improve (Figure 2B) (Wright & Sabin, 2007; Wright et al., 2010). The outcome was the same even when the analyses were restricted to include the same total number of training trials for each group, indicating that the key factor was the amount of training per day, not the total amount of training. The reliance of perceptual improvement on sufficient training per day has also been reported for visual tasks in which viewers made judgments about the orientation of a chevron (Aberg, Tartaglia, & Herzog, 2009) or whether the number of letters in a string of letters was odd or even (Hauptmann & Karni, 2002). Unfortunately, there is no single 'magic number' of daily training trials. Rather, the amount of daily training required differs for different tasks.


For example, practicing 360 trials per day for approximately 6 days yielded no improvement on a particular frequency-discrimination task, but resulted in clear learning on a duration-discrimination task, even though the standard tone, against which the tones of different frequency or duration were compared, was the same for both tasks (Wright & Sabin, 2007). The required amount of daily training may also differ for different stimuli even when the task is the same. For instance, listeners who practiced discriminating a 1-kHz tone from tones of other frequencies improved on that task with 360 training trials per day when the duration of the tones was long (about one third of a second) (Roth, Amir, Alaluf, Buchsenspanner, & Kishon-Rabin, 2003), but not when it was short (about one thirtieth of a second) (Wright & Sabin, 2007). Further, there is currently no known simple way to determine the required amount of daily training from how performance changes during the training session itself, because performance can improve across days even if it remains constant (Wright & Sabin, 2007) or actually worsens (Huyck & Wright, 2011, 2013) within each training session.

Enough Is Enough

While it appears that sufficient training per day is required for perceptual improvement across days, additional training beyond that amount can be superfluous. For instance, the learning curves documenting improvement in performance over days of training on a duration-discrimination task were remarkably similar for listeners who practiced 360 trials per day and those who practiced 900 trials per day (Wright & Sabin, 2007). Similar results have been reported for other tasks such as judging the position of a sound source (Ortiz & Wright, 2010), the orientation of a chevron (Aberg et al., 2009), and pressing keys on a keyboard in a particular pattern (Savion-Lemieux & Penhune, 2005). Thus, more training per day does not necessarily increase the amount of learning across days.


Figure 2 Mean frequency-discrimination thresholds (frequency difference in Hz for 79.4% correct) before (open squares) and after (filled squares) either no training [A: No Training; n = 10] or completing one of five multiple-day training regimens [B-F: n = 6-8 per trained group]. Schematic diagrams of each regimen are shown along the x axis. In two regimens all of the practice was on either a target frequency-discrimination task [B: All Frequency] or a non-target duration-discrimination task [C: All Duration]. In three other regimens, in each session, practice on the target frequency-discrimination task alternated with performance of a written symbol-to-number matching task in silence [D: Frequency + Silence], the written task while the training sounds were played in the background [E: Frequency + Sound], or the duration-discrimination task [F: Frequency + Duration]. Error bars indicate +/- 1 standard error of the mean. Dashed boxes indicate significantly greater improvement between the pre- and post-training tests by the trained group than by the no-training group (p < 0.05). [Data from (Wright & Sabin, 2007; Wright et al., 2010); figure modified from (Wright et al., 2010).]
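The thresholds in Figure 2 are expressed as the frequency difference needed for 79.4% correct, which is the level at which a three-down/one-up adaptive track converges. The sketch below simulates such a track with an idealized listener; it is an assumption-laden illustration of how a threshold like this can be estimated, not the procedure or code from the cited studies:

```python
# Sketch of a three-down/one-up adaptive track for frequency discrimination:
# after three consecutive correct responses the frequency difference (delta f)
# is made smaller (harder); after one error it is made larger (easier).
# This rule converges near 79.4% correct. The simulated "listener" is a toy.
import math
import random

def listener_correct(delta_f_hz, internal_jnd_hz=8.0):
    """Toy listener: more likely to be correct when delta f exceeds its limit."""
    p_correct = 0.5 + 0.5 * min(1.0, delta_f_hz / (2 * internal_jnd_hz))
    return random.random() < p_correct

delta_f = 40.0                     # starting frequency difference (Hz)
step = 1.5                         # multiplicative step size
correct_in_a_row = 0
direction = None
reversals = []

for _ in range(300):               # one simulated block of trials
    if listener_correct(delta_f):
        correct_in_a_row += 1
        if correct_in_a_row == 3:  # three correct in a row -> make it harder
            correct_in_a_row = 0
            if direction == "up":
                reversals.append(delta_f)
            direction = "down"
            delta_f /= step
    else:                          # one error -> make it easier
        correct_in_a_row = 0
        if direction == "down":
            reversals.append(delta_f)
        direction = "up"
        delta_f *= step

tail = reversals[-6:]              # average the last few reversal points
threshold = math.exp(sum(math.log(r) for r in tail) / len(tail)) if tail else delta_f
print(f"Estimated threshold (79.4% correct): {threshold:.1f} Hz, "
      f"{len(reversals)} reversals")
```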


Sunday, June 29 | 8:00 a.m. – 11:30 a.m. | Walt Disney World Swan and Dolphin | Orlando, Fla. | #AGBell2014

Two Wrongs that Make a Right

So far, we have suggested that improvements in auditory skills across days require practice of the task to be learned for a sufficient number of training trials each day. These requirements can place a considerable burden on the learner, particularly when the sufficient number of daily training trials is large. Therefore, we recently asked whether a portion of the training period could be replaced with exposures to the training sounds without practice of the task to be learned. We trained listeners on a frequency-discrimination task for an amount of training per day that did not yield learning across days (Figure 2D), but each day alternated that small amount of daily training with additional exposures to the training sounds. The additional exposures were either presented in the background while the listeners performed a written task (Figure 2E) or were encountered while performing a duration-discrimination task (Figure 2F). Training on the duration task alone did not aid performance on the frequency task (Figure 2C), indicating that improvement required performance of the task to be learned. Thus, these alternating training regimens switched between two experiences, neither of which in isolation led listeners to learn on the frequency task. Nevertheless, both groups of listeners clearly improved on the frequency task (Wright et al., 2010). Though practice for a sufficient number of trials per day was required for learning on this task, that practice was not required throughout the entire training period; a portion of the practice could be replaced with the training sounds either presented in the background or as part of a different task.

We subsequently established (Wright et al., 2010) that the additional exposures to the training sounds aided learning regardless of whether they were presented before or after the period of practice on the task to be learned, but that the exposures needed to be presented within about 15 minutes of the practice period. The exposures also needed to share a relevant characteristic with the training sounds (in this case, the same frequency) but could differ in another characteristic (in this case, the sound duration). We also have preliminary data indicating that the combination of task practice and additional exposures to the training sounds aids learning on speech tasks as well as in other modalities.

IMPLICATIONS

Each of these tentative principles gives us insight into the brain processes that lead to auditory learning and thus into how to optimize auditory training regimens.

The need to practice the task to be learned (just do it) suggests that practice somehow highlights the neural circuitry that is involved in performance of the practiced task, thereby placing that


circuitry into a state in which it can be modified (Seitz & Dinse, 2007; Wright & Zhang, 2009). The highlighting may be linked to attention (Ahissar & Hochstein, 2004; Byers & Serences, 2012), among other possibilities (Roelfsema, van Ooyen, & Watanabe, 2010), because it is necessary to attend to different aspects of sounds when performing different auditory tasks. The general idea is that rather than change in response to every experience, the brain itself exerts some control over whether and where a change will occur. At a practical level, this means that auditory training regimens that rely simply on sound exposure, without specific practice with those sounds, are unlikely to generate lasting improvement. For example, it suggests that a listener with hearing loss who receives a new hearing aid or cochlear implant would benefit far more from structured hearing practice with that device than from trying to 'figure out' how to use the device simply by wearing it in everyday life.

The need for sufficient training per day (practice, practice, practice) but no more (enough is enough) suggests that the neural circuitry that has been highlighted must be engaged repeatedly in order to accumulate training experiences until some learning threshold is reached. This threshold appears to mark a distinct transition from the period of training (acquisition) to a following period of hours to days during which what has been learned is transferred from an easily disrupted, short-term state to a more stable, long-term state (consolidation) (Dudai, 2004; McGaugh, 2000). Thus it seems that both the brain and the world have some control over what will be learned. Even if the brain has given the 'ok' for learning by highlighting the appropriate neural circuitry, the learning will occur only if the world provides enough exposures to relevant sounds. Determining how much training per day is required to cross the learning threshold for a given skill could improve auditory training regimens. Training too little per day will be ineffective, because no lasting learning will occur. Training beyond the required amount will be inefficient, because no additional learning will accrue.

Finally, the learning benefits obtained by combining periods of practice on the task to be learned with periods of exposure to the training sounds without practice on that task (two wrongs that make a right) indicate that the influences of these two experiences can extend beyond the times when each is occurring. It is as though the fuel for learning comes from a sufficient number of exposures to the training sounds, but the spark comes from practice with those sounds, even for just a portion of the time. Training regimens that include this practice-plus-exposure combination could reduce the amount of practice needed to learn on some auditory tasks.

SUMMARY

Our current view is that learning on auditory tasks typically requires sufficient practice per day, but no more, of the task to


be learned. This requirement for sometimes extensive daily practice can be moderated by combining periods of task practice with periods of exposure to the training sounds without practice. A greater understanding of the basic principles of auditory learning will lead to more effective and efficient perceptual training regimens aimed at optimizing hearing abilities in people with hearing loss.

REFERENCES

Aberg, K. C., Tartaglia, E. M., & Herzog, M. H. (2009). Perceptual learning with Chevrons requires a minimal number of trials, transfers to untrained directions, but does not require sleep. Vision Research, 49(16), 2087-2094. doi: 10.1016/j.visres.2009.05.020
Ahissar, M., & Hochstein, S. (1993). Attentional control of early perceptual learning. Proceedings of the National Academy of Sciences of the United States of America, 90(12), 5718-5722.
Ahissar, M., & Hochstein, S. (2004). The reverse hierarchy theory of visual perceptual learning. Trends in Cognitive Science, 8(10), 457-464. doi: 10.1016/j.tics.2004.08.011
Byers, A., & Serences, J. T. (2012). Exploring the relationship between perceptual learning and top-down attentional control. Vision Research, 74, 30-39. doi: 10.1016/j.visres.2012.07.008
Crist, R. E., Kapadia, M. K., Westheimer, G., & Gilbert, C. D. (1997). Perceptual learning of spatial localization: specificity for orientation, position, and context. Journal of Neurophysiology, 78(6), 2889-2894.
Demany, L., & Semal, C. (2002). Learning to perceive pitch differences. Journal of the Acoustical Society of America, 111(3), 1377-1388.
Dudai, Y. (2004). The neurobiology of consolidations, or, how stable is the engram? Annual Review of Psychology, 55, 51-86. doi: 10.1146/annurev.psych.55.090902.142050
Fahle, M. (1997). Specificity of learning curvature, orientation, and vernier discriminations. Vision Research, 37(14), 1885-1895.
Fitzgerald, M. B., & Wright, B. A. (2011). Perceptual learning and generalization resulting from training on an auditory amplitude-modulation detection task. Journal of the Acoustical Society of America, 129(2), 898-906. doi: 10.1121/1.3531841
Hauptmann, B., & Karni, A. (2002). From primed to learn: the saturation of repetition priming and the induction of long-term memory. Brain Research Cognitive Brain Research, 13(3), 313-322.
Huyck, J. J., & Wright, B. A. (2011). Late maturation of auditory perceptual learning. Developmental Science, 14(3), 614-621. doi: 10.1111/j.1467-7687.2010.01009.x
Huyck, J. J., & Wright, B. A. (2013). Learning, worsening, and generalization in response to auditory perceptual training during adolescence. Journal of the Acoustical Society of America, 134(2), 1172-1182. doi: 10.1121/1.4812258
Karni, A., & Sagi, D. (1991). Where practice makes perfect in texture discrimination: evidence for primary visual cortex plasticity. Proceedings of the National Academy of Sciences of the United States of America, 88(11), 4966-4970.
Levi, D. M., & Polat, U. (1996). Neural plasticity in adults with amblyopia. Proceedings of the National Academy of Sciences of the United States of America, 93(13), 6830-6834.

McGaugh, J. L. (2000). Memory—a century of consolidation. Science, 287(5451), 248-251.
Meinhardt, G. (2002). Learning to discriminate simple sinusoidal gratings is task specific. Psychological Research, 66(2), 143-156.
Mossbridge, J. A., Fitzgerald, M. B., O'Connor, E. S., & Wright, B. A. (2006). Perceptual-learning evidence for separate processing of asynchrony and order tasks. Journal of Neuroscience, 26(49), 12708-12716. doi: 10.1523/jneurosci.2254-06.2006
Ortiz, J. A., & Wright, B. A. (2010). Differential rates of consolidation of conceptual and stimulus learning following training on an auditory skill. Experimental Brain Research, 201(3), 441-451. doi: 10.1007/s00221-009-2053-5
Peron, R. M., & Allen, G. L. (1988). Attempts to train novices for beer flavor discrimination: a matter of taste. Journal of General Psychology, 115(4), 403-418. doi: 10.1080/00221309.1988.9710577
Roelfsema, P. R., van Ooyen, A., & Watanabe, T. (2010). Perceptual learning rules based on reinforcers and attention. Trends in Cognitive Sciences, 14(2), 64-71. doi: 10.1016/j.tics.2009.11.005
Roth, D. A., Amir, O., Alaluf, L., Buchsenspanner, S., & Kishon-Rabin, L. (2003). The effect of training on frequency discrimination: generalization to untrained frequencies and to the untrained ear. Journal of Basic and Clinical Physiology and Pharmacology, 14(2), 137-150.
Royet, J. P., Plailly, J., Saive, A. L., Veyrac, A., & Delon-Martin, C. (2013). The impact of expertise in olfaction. Frontiers in Psychology, 4, 928. doi: 10.3389/fpsyg.2013.00928
Sagi, D. (2011). Perceptual learning in Vision Research. Vision Research, 51(13), 1552-1566. doi: 10.1016/j.visres.2010.10.019
Savion-Lemieux, T., & Penhune, V. B. (2005). The effects of practice and delay on motor skill learning and retention. Experimental Brain Research, 161(4), 423-431. doi: 10.1007/s00221-004-2085-9
Seitz, A. R., & Dinse, H. R. (2007). A common framework for perceptual learning. Current Opinion in Neurobiology, 17(2), 148-153. doi: 10.1016/j.conb.2007.02.004
Shiu, L. P., & Pashler, H. (1992). Improvement in line orientation discrimination is retinally local but dependent on cognitive set. Perception and Psychophysics, 52(5), 582-588.
van Wassenhove, V., & Nagarajan, S. S. (2007). Auditory cortical plasticity in learning to discriminate modulation rate. Journal of Neuroscience, 27(10), 2663-2672. doi: 10.1523/jneurosci.4844-06.2007
Wong, M., Peters, R. M., & Goldreich, D. (2013). A physical constraint on perceptual learning: tactile spatial acuity improves with training to a limit set by finger size. Journal of Neuroscience, 33(22), 9345-9352. doi: 10.1523/jneurosci.0514-13.2013
Wright, B. A., & Sabin, A. T. (2007). Perceptual learning: how much daily training is enough? Experimental Brain Research, 180(4), 727-736. doi: 10.1007/s00221-007-0898-z
Wright, B. A., Sabin, A. T., Zhang, Y., Marrone, N., & Fitzgerald, M. B. (2010). Enhancing perceptual learning by combining practice with periods of additional sensory stimulation. Journal of Neuroscience, 30(38), 12868-12877. doi: 10.1523/jneurosci.0487-10.2010
Wright, B. A., & Zhang, Y. (2009). Insights into sound processing gained from perceptual learning. In M. Gazzaniga (Ed.), The Cognitive Neurosciences (4th ed., pp. 353-366). Cambridge, MA: MIT Press.

Beverly A. Wright, Ph.D., is a professor in the Department of Communication Sciences and Disorders and director of the Knowles Hearing Center at Northwestern University. She has a B.A. in English and linguistics from Indiana University and a Ph.D. in experimental psychology from the University of Texas at Austin. She did postdoctoral research at the University of Florida and the University of California San Francisco before joining the faculty at Northwestern University in 1997. Her primary research interests are perceptual learning, language-based learning problems, and auditory psychophysics.



Spoken Language Development in Children Receiving Cochlear Implants
EMILY TOBEY, PH.D.
UNIVERSITY OF TEXAS AT DALLAS, SCHOOL OF BEHAVIORAL AND BRAIN SCIENCES

Spoken language skills remain a challenge for many children with severe to profound hearing losses. Intervention programs incorporating early detection of hearing loss, early access to hearing aids, and early access to advanced digital technologies such as cochlear implants appear to promote spoken language skills that approach the levels of performance of peers with typical hearing. This presentation will review how age of implantation and residual hearing influence spoken language development in children using multichannel cochlear implants.

Participants consisted of 160 children who received cochlear implants in North America—98 children were implanted before age 2.5 years and 62 children were implanted between ages 2.5 and 5 years. Bilateral cochlear implantation occurred in 24% and 16% of the younger and older implanted groups, respectively. Sixty-nine percent of the younger group and 37% of the older group experienced congenital hearing losses. Children were assessed at three time points—at 4, 5, and 6 years after cochlear implantation. Language was assessed using four subtests of the Comprehensive Assessment of Spoken Language (CASL). Children who received cochlear implants under 2.5 years of age achieved higher standard scores than children implanted at older ages for expressive vocabulary, expressive syntax, and pragmatic judgments. However, in both groups, some children performed more than two standard deviations below the standardization group mean, while some scored at or well above the mean. Younger ages of cochlear implantation were associated with higher levels of performance, while later ages of cochlear implantation were associated with higher probabilities of continued language delays, particularly within the subdomains of grammar and pragmatics. Although early cochlear implantation, on average, appears to provide an advantage for spoken language development, it did not assure the development of spoken language in the typical range for all children by school age, nor did receiving a cochlear implant at up to 5 years of age eliminate the opportunity to develop spoken language skills within the typical range in the 4 to 6 years after implantation.

INTRODUCTION

Adoption of newborn hearing screening programs across the nation has resulted in early identification and confirmation of hearing losses in our very youngest and most vulnerable populations. Early identification often leads to early intervention using hearing technology assistance via hearing aids or cochlear implants, parental education programs, and speech-language therapy (Yoshinaga-Itano, 2014). The major goal of intervention is to capitalize on providing sensory, motor, and interactive exchanges at the earliest stages of communication development as a means of reducing the deleterious effects of auditory deprivation (Boons et al., 2013). Auditory deprivation results in changes to the neural architecture of the brainstem and cortex (Kral, 2013; Kral & Sharma, 2012). Changes to the neural architecture of the central auditory

nervous system often negatively impact the typical sprouting and pruning of neuronal connections. Consequently, alterations to the sprouting and pruning patterns influence whether neurons will continue to respond to auditory stimuli, respond instead to other sensory systems (for example, auditory neurons responding to visual stimuli), or show limited interactions that reduce an individual's ability to integrate information across and between channels of information (Weisberg, Koo, Crain, & Eden, 2012). In these instances, it becomes difficult for very young babies to accomplish not only "bottom-up" processing, associated with the transfer of information from the peripheral ear to the cortex, but also "top-down" processing, in which the cortex acts on that information to guide other actions (Nicholas & Geers, 2007).


Several studies provide information regarding speech and language development in young children with severe to profound hearing losses. Early work at the House Ear Institute with an early single-channel electrode array noted that children were able to distinguish different frequencies, receive environmental sounds, and distinguish among some sounds (Berliner & Eisenberg, 1987; Eisenberg et al., 2012; Weisberg et al., 2012). The introduction of multichannel electrodes and advanced signal processing strategies moved us further along the road, and it is now not unusual for young children to understand words and complex spoken language (Loizou, 1999a; Loizou, 1999b; Loizou, Dorman, & Tu, 1999). The tremendous strides made in cochlear implantation were recently recognized by the awarding of the Lasker-DeBakey Clinical Medical Research Award to Ingeborg Hochmair, Graeme Clark, and Blake Wilson, whose teams of investigators advanced our knowledge regarding the tonotopic nature of the cochlea (i.e., different spatial locations respond to different frequencies—high frequencies are analyzed at the base of the cochlea and low frequencies at the apex) and invented signal processing strategies to enhance the translation of an auditory stimulus into an electrical stimulus that the brain interprets as meaningful communication (Roland & Tobey, 2013).

HEARING TECHNOLOGY AND SPEECH PERCEPTION

Cochlear implants consist of internal and external hardware (Loizou, 1999a). A microphone collects acoustic signals and routes them to a speech processor, which uses sophisticated software to decompose and translate the acoustic signal into patterns of electrical pulses that are then transmitted across the skull to an internally placed electrode array in the cochlea. The electrical signals stimulate the auditory nerve, and the brain learns to interpret the signals as meaningful speech or other signals.
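The processing chain just described (microphone, speech processor, electrode array) can be sketched generically. The code below is a simplified, hypothetical illustration in the spirit of the envelope-based strategies reviewed in the cited Loizou papers; the channel count, filter settings, and loudness mapping are invented for the example and do not describe any particular device:

```python
# Generic sketch of envelope-based cochlear implant processing: split the
# microphone signal into frequency bands, extract each band's envelope, and
# map the envelopes to stimulation levels on the corresponding electrodes.
# Channel count, filter edges, and the mapping are illustrative only.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
audio = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2500 * t)  # stand-in for speech

band_edges = np.geomspace(200, 7000, 9)      # 8 analysis bands, low to high
envelope_lp = butter(2, 200, btype="low", fs=fs, output="sos")

electrode_levels = []
for lo, hi in zip(band_edges[:-1], band_edges[1:]):
    band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    band = sosfiltfilt(band_sos, audio)                 # band-limited signal
    envelope = sosfiltfilt(envelope_lp, np.abs(band))   # rectify and smooth
    # Compress the average envelope into a hypothetical 0-to-1 stimulation
    # range, loosely mimicking the loudness mapping of a processor.
    level = np.clip(np.log1p(20 * np.mean(np.abs(envelope))), 0, 1)
    electrode_levels.append(level)

for ch, level in enumerate(electrode_levels, start=1):
    print(f"electrode {ch} ({band_edges[ch - 1]:.0f}-{band_edges[ch]:.0f} Hz): "
          f"relative stimulation {level:.2f}")
```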

Cochlear implants differ from hearing aids in significant ways. Most digital hearing aids consist of a microphone, amplifiers, digital circuitry, filters, and receivers. Acoustic signals are collected by the microphone, routed to amplifiers, filtered, and routed to the receiver. Several factors may limit the success of hearing aids and lead to cochlear implantation as an alternative. These factors may be associated with a given child, such as dead regions in the cochlea that are unresponsive to sound, or with broader issues, such as intolerance to loud, compressed signals (Loizou, 1999b; Loizou et al., 1999). In some instances, long hearing aid trials are associated with lower communication skills, presumably because the lengthy trials provide children with a less-than-optimum signal for a deteriorating auditory neural architecture (Berliner & Eisenberg, 1987; Eisenberg et al., 2012; Niparko et al., 2010; Tobey et al., 2013).

Speech perception performance with cochlear implants appears related to several factors. Higher speech perception appears linked to shorter periods of auditory deprivation, greater amounts of residual hearing, and younger ages of implantation (Davidson, Geers, Blamey, Tobey, & Brenner, 2011). Severe to profound hearing loss adversely affects auditory neural architecture and, consequently, impedes the acquisition of spoken language in young children (Jiwani, Papsin, & Gordon, 2013). A wide range of performance levels is associated with hearing losses of this degree, with some individuals relying on sign language, cued speech, and spoken language to communicate (Davidson et al., 2011; Richter, Eissele, Laszig, & Lohle, 2002; Tomblin, Barker, Spencer, Zhang, & Gantz, 2005). Hearing technology has advanced dramatically over the last two decades, and speech perception with cochlear implants can be highly variable—from individuals who can easily converse on the telephone without visual cues to individuals who rely on visual cues and may need sign language to augment their communication.

SPOKEN LANGUAGE DEVELOPMENT

Spoken language acquisition in children with severe to profound hearing losses who use cochlear implants demonstrates a wide amount of variability. Children who receive their cochlear implants early in life experience shorter periods of auditory deprivation and longer periods of time using auditory stimuli for communication. Seminal work by Geers and colleagues noted that spoken language performance was related to the age at which the hearing loss was identified and the amount of typical hearing experience a child might have had before losing residual hearing (Geers, 2004; Geers, Tobey, & Moog, 2011; Geers, Tobey, Moog, & Brenner, 2008). Moreover, their work indicated that a key factor was early intervention. Spoken language performance was related to having access to new technology, establishing a wide dynamic range, and maximizing residual hearing through the use of two ears (Davidson et al., 2011; Jiwani et al., 2013; Niparko et al., 2010). Performance levels of children tested in elementary school were a good predictor of where the children's performance on spoken language measures would be in high school (Geers et al., 2008). Although spoken language levels often lagged behind performance noted for children with typical hearing, many children with cochlear implants achieve language skills within typical limits while many others remain below average.

Prospective evaluations of language growth over time have been undertaken by Niparko and colleagues as part of the Childhood Development after Cochlear Implantation study (Niparko et al., 2010; Tobey et al., 2013). In this large study, children who received cochlear implants in Miami, Baltimore, Ann Arbor, Chapel Hill, Los Angeles, and Dallas were assessed annually using the Comprehensive Assessment of Spoken Language (CASL), a measure of spoken language. The CASL provides a global language score associated with expressive and receptive language as well as specific information related to the acquisition of vocabulary, grammar, and pragmatic judgments (the use of language in different situations) (Tobey et al., 2013). Language measures of this type provide parents and clinicians with a standard score (average 100, standard deviation 15), which allows a given child's performance to be



Scores below 85 fall below the typical range, and scores above 115 fall above it. Children participating in the study received their implants between the ages of 6 months and 4 years 11 months.
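As a concrete illustration of how such standard scores are read, the short sketch below classifies a score against the mean-100, SD-15 convention described above; the helper function and the example scores are hypothetical.

```python
def classify_standard_score(score, mean=100.0, sd=15.0):
    """Classify a standard score (e.g., a CASL global score) relative to
    the typical range of mean +/- 1 SD described in the text."""
    if score < mean - sd:        # below 85: below the typical range
        return "below typical range"
    if score > mean + sd:        # above 115: above the typical range
        return "above typical range"
    return "within typical range"

# Hypothetical scores, for illustration only
for s in (78, 96, 104, 121):
    print(s, "->", classify_standard_score(s))
```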

STUDY RESULTS
Language performance post-implantation was highest for children implanted at the very youngest ages (Tobey et al., 2013). Scores were highest for children implanted under 1 year of age and declined with increasing age at implantation until around 2.5 years of age. Language performance for children implanted after 2.5 years of age appeared to plateau when tested at 4, 5, and 6 years post-implantation, suggesting the children were neither gaining nor losing ground. Standard scores dropped rapidly for children implanted at later ages. In short, younger ages of cochlear implantation were associated with higher post-implantation language performance for children implanted under 2.5 years of age, whereas age at implantation did not play a role for children implanted between 2.5 and 5 years of age.

Closer examination of the language performance revealed a number of interesting patterns (Tobey et al., 2013). First, standard scores for receptive grammar were higher for children who received cochlear implants after age 2.5 years, relative to the other language measures. Poorer performance in receptive grammar for children who received cochlear implants under 2.5 years of age was not as dramatic as that observed for children who received cochlear implants after 2.5 years of age. Second, expressive grammar scores were low even after the children had 6 years of experience using their devices. Poorer performance was particularly evident in children who received cochlear implants after age 2.5 years, even when they had 5 or 6 years of experience with their device. Third, vocabulary remained high for many of the children implanted after 2.5 years of age as they gained experience with their devices. Fourth, the ability to use language appropriately in new situations (pragmatic judgments) remained a challenge for children implanted at all ages, even after 6 years of device experience. Children who received cochlear implants continue to experience difficulty appropriately requesting permission, requesting directions, or asking for assistance in new situations.

TIPS TO HELP YOUR CHILD MAXIMIZE PERFORMANCE
What can parents do to help their children with cochlear implants maximize their spoken language performance?

• Keep your technology current and embrace technological improvements as they arrive.
• Find a good audiologist, speech-language pathologist, and a Listening and Spoken Language Specialist (LSLS®).
• Provide enriching language experiences for your child.
• Introduce your child to new language situations where they can apply their existing skills.
• Reinforce spoken language use in multiple situations; introduce strategies for listening, decoding, and interpreting conversations.
• Require your child to express their thoughts and ideas.
• Practice listening and speaking if the goal is for your child to use spoken language as a primary means of communication.

ACKNOWLEDGEMENT
This work was supported in part by the National Institutes of Health NIDCD R01DC04797 (J. K. Niparko, Principal Investigator), R01DC10494 (E. A. Tobey, Principal Investigator) and R01DC008335 (A. Geers, Principal Investigator).



REFERENCES


Berliner, K. I., & Eisenberg, L. S. (1987). Our experience with cochlear implants: Have we erred in our expectations? American Journal of Otolaryngology, 8, 222-229.
Boons, T., Brokx, J., Frijns, J., Philips, B., Vermeulen, A., Wouters, J., et al. (2013). Newborn hearing screening and cochlear implantation: Impact on spoken language development. British-Ear Nose and Throat, Suppl 21, 91-98.
Davidson, L. S., Geers, A. E., Blamey, P. J., Tobey, E. A., & Brenner, C. A. (2011). Factors contributing to speech perception scores in long-term pediatric cochlear implant users. Ear and Hearing, 32, 19S-26S.
Eisenberg, L. S., Johnson, K. C., Martinez, A. S., Visser-Dumont, L., Ganguly, D. H., & Still, J. F. (2012). Studies in pediatric hearing loss at the House Research Institute. Journal of the American Academy of Audiology, 23, 412-421.
Geers, A. E. (2004). Speech, language, and reading skills after early cochlear implantation. Archives of Otolaryngology-Head and Neck Surgery, 130, 634-638.
Geers, A. E., Tobey, E. A., & Moog, J. S. (2011). Editorial: Long-term outcomes of cochlear implantation in early childhood. Ear and Hearing, 32, 1S.
Geers, A. E., Tobey, E. A., Moog, J. S., & Brenner, C. (2008). Long-term outcomes of cochlear implantation in the preschool years: From elementary grades to high school. International Journal of Audiology, 47 Suppl 2, S21-S30.
Jiwani, S., Papsin, B. C., & Gordon, K. A. (2013). Central auditory development after long-term cochlear implant use. Clinical Neurophysiology, 124, 1868-1880.
Kral, A. (2013). Auditory critical periods: A review from system's perspective. Neuroscience, 247, 117-133.
Kral, A., & Sharma, A. (2012). Developmental neuroplasticity after cochlear implantation. Trends in Neurosciences, 35, 111-122.
Loizou, P. C. (1999a). Introduction to cochlear implants. IEEE Engineering in Medicine and Biology Magazine, 18, 32-42.

Loizou, P. C. (1999b). Signal-processing techniques for cochlear implants. IEEE Engineering in Medicine and Biology Magazine, 18, 34-46.
Loizou, P. C., Dorman, M., & Tu, Z. (1999). On the number of channels needed to understand speech. Journal of the Acoustical Society of America, 106, 2097-2103.
Nicholas, J. G., & Geers, A. E. (2007). Will they catch up? The role of age at cochlear implantation in the spoken language development of children with severe to profound hearing loss. Journal of Speech, Language, and Hearing Research, 50, 1048-1062.
Niparko, J. K., Tobey, E. A., Thal, D. J., Eisenberg, L. S., Wang, N. Y., Quittner, A. L., et al. (2010). Spoken language development in children following cochlear implantation. Journal of the American Medical Association, 303, 1498-1506.
Richter, B., Eissele, S., Laszig, R., & Lohle, E. (2002). Receptive and expressive language skills of 106 children with a minimum of 2 years' experience in hearing with a cochlear implant. International Journal of Pediatric Otorhinolaryngology, 64, 111-125.
Roland, P. S., & Tobey, E. A. (2013). A tribute to a remarkably sound solution. Cell, 154, 1175-1177.
Tobey, E. A., Thal, D., Niparko, J. K., Eisenberg, L. S., Quittner, A. L., & Wang, N. Y. (2013). Influence of implantation age on school-age language performance in pediatric cochlear implant users. International Journal of Audiology, 52, 219-229.
Tomblin, J. B., Barker, B. A., Spencer, L. J., Zhang, X., & Gantz, B. J. (2005). The effect of age at cochlear implant initial stimulation on expressive language growth in infants and toddlers. Journal of Speech, Language, and Hearing Research, 48, 853-867.
Weisberg, J., Koo, D. S., Crain, K. L., & Eden, G. F. (2012). Cortical plasticity for visuospatial processing and object recognition in deaf and hearing signers. NeuroImage, 60, 661-672.
Yoshinaga-Itano, C. (2014). Principles and guidelines for early intervention after confirmation that a child is deaf or hard of hearing. Journal of Deaf Studies and Deaf Education, 19(2), 143-175.

Emily Tobey, Ph.D., is Professor and Nelle C. Johnston Chair at the University of Texas at Dallas as well as Vice Provost for Faculty Development in the Office of the Executive Vice President and Provost and Assistant Vice President in the Office of Diversity and Community Engagement. Tobey served as a Distinguished Lecturer at Texas Woman’s University and as a visiting research scholar at the Australian Bionic Ear and Hearing Research Institute of the University of Melbourne, the Pediatric Cochlear Implant Center of Nottingham, England, and the Department of Otolaryngology at the University of Montpellier, France. She was named Distinguished Academy Scientist by the Louisiana Academy of Sciences and Fellow of the American Speech-Language-Hearing Association and Acoustical Society of America. In 2001, she was named the University of Texas at Dallas Polykarp Kusch Lecturer—the highest honor an individual faculty member can receive from the university. She served as a Distinguished Lecturer for Sigma Xi, the nation’s honorary research society and she received the Honors of the American Speech-Language-Hearing Association, the association’s highest honor, for career achievements. She has held external funding from the National Institutes of Health and other external resources continuously since 1975 and has published over 100 manuscripts.



Music Enjoyment and Cochlear Implant Recipients: Overcoming Obstacles and Harnessing Capabilities
KATE GFELLER, PH.D.
UNIVERSITY OF IOWA, SCHOOL OF MUSIC

Cochlear implants (CIs), while remarkably effective in supporting spoken communication, are technically limited at conveying melody, harmony, and the rich and pleasing tone quality of music. In addition, deficits to the auditory system associated with hearing loss can have negative consequences for music listening. Despite these obstacles, many children who use CIs enjoy music, and some adult users have regained satisfying music experiences. This article summarizes research on technical, biological, and experiential factors that may limit music enjoyment; training or accommodations that have improved music perception and enjoyment; and practical approaches that CI users can use to harness their potential for music enjoyment.

“Music sounds like a cage full of squawking parrots.”
“The organ at church sounds like a train coming through the sanctuary.”
“I can hear the music, but it doesn’t make sense to me.”

These are a few quotes from cochlear implant (CI) recipients, describing how music sounds through their CIs. Their comments highlight some of the obstacles that CI users face when using a hearing device designed primarily to support spoken communication. CIs are remarkably effective in conveying the salient features of speech, particularly in quiet listening environments. However, they are not well suited for conveying melodies, harmonies, and the beautiful tone qualities that people with typical hearing associate with music (Looi, Gfeller, & Driscoll, 2012). Because music is pervasive in most cultures (e.g., part of social events, religious services, etc.), CI users are likely to be exposed to music on a daily basis (e.g., Cross, 2004; Gfeller, 2008). Furthermore, music is associated with cultural and personal expression and emotional wellbeing; thus, the extent to which CI recipients are able to perceive and enjoy music has relevance to social integration and quality of life (Gfeller & Knutson, 2003). This article describes music’s salient characteristics; technical, biological, and experiential factors that undermine or support music perception; training to enhance music perception and enjoyment; and practical recommendations for optimizing music listening and enjoyment.

STRUCTURAL COMPONENTS OF MUSIC
When listening to music, we perceive complex and rapidly changing combinations of pitch, timbre, rhythm, and loudness (Looi et al., 2012). Rhythm, sometimes referred to as temporal patterns, is the sequential duration of notes. These temporal patterns comprise melodic rhythms (long and short notes in melodies), underlying beat (e.g., triple meter in waltzes), and tempo (e.g., fast, slow).

Loudness is the perceptual magnitude of auditory sensation associated with the amplitude of the acoustic signal. In music, a wide and rapidly varying dynamic range (from barely audible to bombastically loud) is considered an important expressive element. Pitch, how high or low a note sounds, forms the basis of melodies (sequential pitch patterns) and harmony (concurrently presented pitches) (Looi et al., 2012). Timbre refers to the distribution of spectral energy that helps a listener differentiate musical instruments or singers performing the same pitch. For example, Bob Dylan’s voice is unmistakable because of its nasal quality. Trumpets are sometimes described as sounding brilliant, while clarinets have a hollow sound. Timbre not only helps listeners identify who or which instrument they are hearing, but also contributes to the aesthetic beauty or entertainment value of music. Perceptual requirements for both pitch and timbre include adequate representation of spectrally complex aspects of the acoustical signal (sometimes referred to as the fine structure) (Limb & Roy, 2014). In real-world music, the attributes of pitch, timbre, rhythm, and loudness are typically organized in rapidly changing combinations. The listener engages in simultaneous perceptual processing of multiple input sources (e.g., groups of instruments playing many concurrent melodic and rhythmic patterns) (Looi et al., 2012). The following section describes how effective the CI is in conveying these building blocks of music.

COCHLEAR IMPLANTS AND MUSIC PERCEPTION
Cochlear implants do not transmit a faithful representation of musical sounds. Rather, current-generation implants usually remove the fine structure information in the sound waves and preserve the broad features of the temporal envelopes (Kong, Cruz, Jones, & Zeng, 2004; Kong, Stickney, & Zeng, 2005). In other words, CIs are most effective in conveying durational information, such as rhythm and tempo (Looi et al., 2012; McDermott, 2004).
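For readers who want to see what "keeping the envelope and discarding the fine structure" looks like computationally, here is a minimal sketch of channel-vocoder-style envelope extraction. It is not any manufacturer's actual processing strategy; the sample rate, filter orders, channel count, and envelope cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000                         # assumed sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
# Toy "musical" input: two simultaneous tones as a crude stand-in for harmony
x = 0.6 * np.sin(2 * np.pi * 262 * t) + 0.4 * np.sin(2 * np.pi * 330 * t)

def band_envelope(signal, lo, hi, fs, env_cutoff=160.0):
    """Band-pass one analysis channel, then keep only its slowly varying
    temporal envelope (rectify + low-pass), discarding the fine structure."""
    band = sosfilt(butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos"), signal)
    rectified = np.abs(band)
    return sosfilt(butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos"), rectified)

# A handful of logarithmically spaced channels (real devices use roughly 12-22)
edges = np.geomspace(200, 7000, 9)
envelopes = [band_envelope(x, lo, hi, fs) for lo, hi in zip(edges[:-1], edges[1:])]

# In a CI, each channel's envelope would modulate the pulses on one electrode;
# the per-channel average printed here simply shows where the energy ends up.
for i, env in enumerate(envelopes, start=1):
    print(f"channel {i}: mean envelope = {env.mean():.3f}")
```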




In musically relevant rhythmic tasks, many CI recipients show accuracy similar to that of listeners with typical hearing. This includes discrimination of tempo (e.g., slow or fast), meter (e.g., duple or triple), and simple rhythm patterns (e.g., reviews in Limb & Roy, 2014; Looi et al., 2012; McDermott, 2004). In functional terms, this means that CI users have capabilities similar to those of persons with typical hearing in tasks such as clapping or moving to a beat, or in playing percussion instruments (Hsiao & Gfeller, 2012). Research indicates that postlingually deafened CI users can use melodic rhythm to compensate for poor pitch perception in identifying familiar melodies (Gfeller, Turner, et al., 2002). Some CI users prefer compositions with prominent and clear rhythmic components over music emphasizing lyrical melodies and harmony (Au, Marozeau, Innes-Brown, Schubert, & Stevens, 2012). In summary, CI users can enhance their understanding and enjoyment of music by attending to its rhythmic elements.

Although CI users can ‘hear’ musical sounds fairly easily, they are likely to have a restricted dynamic range due to technical limitations of the device as well as damage to the auditory system. This results in more difficulty hearing extremely quiet sounds, tolerating very loud sounds, or processing rapid changes in dynamics (e.g., soft to loud) (Limb & Roy, 2014; Looi et al., 2012).

Pitch perception of CI users is significantly less accurate than that of listeners with typical hearing because of technical limitations of the CI as well as damage to the auditory system (e.g., Limb & Roy, 2014; Looi et al., 2012). On average, CI recipients are less accurate than listeners with typical hearing at detecting small pitch changes, determining whether one pitch is higher or lower than another (pitch ranking), and recognizing or discriminating pitch patterns (Gfeller et al., 2007; Kong et al., 2004; Looi et al., 2012). In functional terms, some CI recipients may hear no pitch change from one note to the next; others may perceive changes in pitch, but the size of the pitch change may be compressed or distorted. The poor representation of pitch conveyed through the CI sheds light on comments quoted in the introduction of this paper, such as “I can hear the music, but it doesn’t make any sense.”

Interestingly, while some individuals who use CIs have improved pitch perception as a result of upgrades in signal processing, research from large groups of CI users reveals no statistically superior pitch perception for specific models of conventional internal arrays (22 mm electrode) or commercially available processing strategies (e.g., Gfeller et al., 2008; Gfeller, Jiang, Oleson, Driscoll, & Knutson, 2010; Gfeller, Turner, et al., 2010; Kong et al., 2004; Looi et al., 2012; McDermott, 2004). However, adult CI users who have sufficient residual hearing to benefit from well-fitted hearing aids worn with the CI (bimodal stimulation) have shown perception superior to that with the CI alone (Dorman, Gifford, Spahr, & McKarns, 2008; El Fata, James, Laborde, & Fraysse, 2009; Gfeller et al., 2008, 2010; Looi et al., 2012). Thus, well-fitted hearing aids worn with CIs can sometimes enhance music perception.

Because melodies and harmonies are made up of pitch patterns, it is not surprising that CI recipients are significantly less accurate than persons with typical hearing in perceiving melodic and harmonic patterns (e.g., for reviews, see Limb & Roy, 2014; Looi et al., 2012; McDermott, 2004). However, accuracy can be enhanced by contextual cues (e.g., watching the singer, melodies associated with special events such as “Happy Birthday”) (Olszewski, Gfeller, Froman, Stordahl, & Tomblin, 2005). This is illustrated by the following comment from a CI user: “It does help… if I know what it’s [i.e., music] supposed to sound like. For example, the ‘Star Spangled Banner’ started to sound fairly normal about a week into the Olympics, but I think this is my brain filling in the missing pieces.” Perhaps the most challenging pitch-based task for many CI recipients is accurate production of pitch, such as singing in tune or tuning an instrument. Objective testing of singing by groups of pediatric CI recipients (Nakata, Trehub, Mitani, & Kanda, 2006; Xu et al., 2009) indicates that most children with CIs are significantly less accurate than children with typical hearing in matching pitches and singing the melodic contour in tune.

Timbre allows the listener to differentiate between two instruments or singers playing or singing the same note at the same level of loudness (Looi et al., 2012). Timbral blend is the term used to describe multiple instruments or voices producing notes simultaneously. CI users are significantly less accurate than listeners with typical hearing in recognizing musical instruments by sound quality alone, though single musical instruments tend to be easier to identify than more complex blends (reviews in Limb & Roy, 2014; Looi et al., 2012; McDermott, 2004). However, CI signal processing provides sufficient spectral detail for the listener to detect differences in sound quality between two contrasting sound sources (e.g., detecting a difference between a flute and a piano). In everyday life, listeners are seldom required to identify musical instruments. However, tone quality (e.g., brilliant, smooth, haunting) is an important part of music appreciation. The term appraisal is often used in research evaluating timbre on dimensions of pleasantness or specific characteristics (e.g., rough vs. smooth). As a group, CI recipients appraise the tone quality of musical instruments less pleasantly than do listeners with typical hearing (reviews in Looi et al., 2012; McDermott, 2004).

DIFFERENCES AMONG CI USERS
The previous paragraphs have focused on average, or typical, perceptual capabilities documented across groups of CI users (e.g., Looi et al., 2012; McDermott, 2004). Individual CI recipients, however, differ considerably in their perception and appraisal of those aspects of music for which pitch and timbre are salient (Gfeller et al., 2008; Gfeller et al., 2010; Looi et al., 2012). Some, such as those quoted in the introduction of this paper, describe music as little more than obnoxious noise. However, others have remarkable levels of accuracy and enjoyment, despite the technical limitations of the CI.



For example, one CI user from the Iowa Cochlear Implant Clinical Research Center, after adjusting to her device, described music with her CI this way: “I think those of us who were intimately connected to music before our hearing losses notice the little ways that music is part of the warp and woof of life and relish our recovery of something priceless.”

Interestingly, appraisal and enjoyment of music are not merely a function of perceptual accuracy (Gfeller et al., 2008; Gfeller, Witt, Spencer, Stordahl, & Tomblin, 1999; Wright & Uchanski, 2012). There are individuals with above-average perceptual acuity who nevertheless dislike music through the CI. In contrast, there are CI users with relatively poor acuity who truly enjoy music. Individual expectations can have an important impact upon music enjoyment (Gfeller, Mehr, & Witt, 2001). This comment from an adult who is postlingually deaf illustrates the importance of expectations in relation to music appreciation: “Initially it was very disappointing to listen to music with my CI. … I have had to adapt. [After] accepting a ‘new sound’ … it can be extremely enjoyable to listen to music now … it’s just different.”

In addition to expectations, other factors contribute to variable perception and enjoyment among CI users, including residual hearing, hearing aid use, current age, how efficiently the brain processes incoming auditory information, onset of hearing loss, experiential circumstances, and musical training (Gfeller et al., 2008, 2010; Looi et al., 2012; McDermott, 2004). For example, implant recipients with greater damage to the auditory system may enjoy less benefit from enhanced signal processing strategies designed to convey greater fine structure (e.g., Fu, Shannon, & Wang, 1998) or from the acoustic stimulation presented via a hearing aid (bimodal hearing) (El Fata et al., 2009; Looi et al., 2012). Younger CI users may have the advantage of greater neural plasticity, which contributes to more efficient use of new information (Gfeller, Driscoll, Kenworthy, & Van Voorst, 2011; Hsiao & Gfeller, 2012). CI users of all ages differ in how efficiently their brains discriminate and process sounds (Gfeller et al., 2008). Regarding onset of hearing loss, CI users who are postlingually deaf can use contextual cues (e.g., memory of music) to support understanding (Gfeller et al., 2001). Postlingually deafened CI users tend to appraise musical sound quality poorly in comparison with their perceptions prior to hearing loss, while children whose entire experience with music has been through a CI may have less stringent expectations regarding what constitutes aesthetically pleasing music (Gfeller, Witt et al., 1999; Hsiao & Gfeller, 2012). Music listening can also be influenced by environmental circumstances. For example, an overly reverberant room or poor sound equipment can make listening more difficult, while listening can be enhanced through visual cues such as watching the singer, or reading along with notation or song lyrics (Gfeller, Christ, et al., 2000; Gfeller et al., 2008, 2010).


MUSIC TRAINING
In recent years, there has been growing interest in the impact of musical training. As noted previously, CI users are less accurate than persons with typical hearing in perception of pitch and timbre. However, significant perceptual improvements can occur as a result of systematic training (e.g., Driscoll, 2012; Fu & Galvin, 2007; Gfeller et al., 2011; Gfeller, Witt, Stordahl, Mehr, & Woodworth, 2000; Gfeller, Witt, et al., 2002; Rocca, 2012). In addition to improving music perception and enjoyment (e.g., melody recognition, sound quality ratings), studies with listeners with typical hearing indicate that music training may enhance the efficiency with which the auditory pathways process speech as well as musical sounds. Researchers hypothesize that the heightened fine-grained frequency discrimination required to perceive music may, over time, improve perceptual skills that generalize to perception of more complex speech tasks, including vocal inflection, talker identification, and speech perception in noisy listening conditions (e.g., Chermak, 2010; Kraus & Skoe, 2009; Kraus, Skoe, Parbery-Clark, & Ashley, 2009; Shahin, 2011).

In our lab, we are currently comparing adults with typical hearing who have either little (if any) or extensive musical training on their neural and behavioral responses to various musical stimuli. For example, preliminary data indicate that those with extensive musical training have more accurate behavioral measures of pure tone pitch discrimination at 800 and 1600 Hz (Satterthwaite p-value, p < .05). These observations with listeners with typical hearing, from various centers including our own, have prompted speculation that music training may benefit CI users by enhancing their extraction of fine-structure information from complex aspects of speech and music. However, given the very atypical auditory signal conveyed by the CI, systematic evaluation of music training is required in order to better assess the potential benefit to CI recipients (Shahin, 2011). Our research center is one of several currently examining the impact of musical training on auditory processing in CI users.

In summary, the CI is not ideally suited for conveying several structural attributes of music, although acuity and appraisal vary considerably among CI users. Furthermore, some aspects of music listening can be enhanced through accommodation or music training. These research findings suggest practical strategies for CI users, which are noted in the following section.
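For readers curious about the statistic mentioned above, a Satterthwaite p-value typically comes from a two-sample comparison that does not assume equal group variances (Welch's t-test, whose degrees of freedom use the Satterthwaite approximation). The sketch below shows one common way to compute such a comparison; the threshold data are invented for illustration and are not the lab's actual measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical pitch-discrimination thresholds (% frequency change needed to
# hear a difference at 800 Hz); smaller is better. Values are invented.
musicians = np.array([0.4, 0.6, 0.5, 0.3, 0.7, 0.5, 0.4, 0.6])
nonmusicians = np.array([1.1, 0.9, 1.4, 0.8, 1.2, 1.0, 1.3, 0.9])

# equal_var=False requests Welch's t-test, whose degrees of freedom use the
# Satterthwaite approximation rather than assuming equal group variances.
t_stat, p_value = stats.ttest_ind(musicians, nonmusicians, equal_var=False)
print(f"t = {t_stat:.2f}, Satterthwaite-adjusted p = {p_value:.4f}")
```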




PRACTICAL STRATEGIES TO ENHANCE MUSIC LISTENING AND PARTICIPATION
Perhaps the most prudent advice one can offer regarding music and CIs is a balanced message: Music will not be the same as it was prior to implantation, but many people can improve through accommodation or focused practice over time. The word “music” encompasses complex and diverse structural combinations. Furthermore, perceptual and production requirements differ greatly, depending upon how one engages with music (e.g., listening, singing, playing an instrument). Thus, one solution will not fit all individuals or circumstances; trial and error will help determine which aspects of music are most enjoyable, or have potential for improvement. The personal priorities and motivation of each user and their family should be taken into account, given that improved music perception requires some dedicated effort over time. A combination of training and accommodation can address different aspects of music engagement. Training (focused listening exercises) is geared toward improving neural efficiency and learning to associate particular sounds with specific musical entities. Accommodations are practical strategies intended to compensate for the technical limitations of the device (e.g., quiet listening environments or using visual cues) or the auditory system. The following list offers practical strategies for enhancing music perception and enjoyment for CI users (Gfeller et al., 2001).

• Establish realistic expectations. Listening. Music will not sound as it did prior to deafness and/or cochlear implantation, and one should avoid comparisons with “star” users. However, many pediatric CI users do enjoy some aspects of music (Gfeller et al., 1999) and many adult CI users have been able to reestablish listening enjoyment as long as new expectations are established. As is true for most listeners with typical hearing, CI users may discover that some musical experiences may be more enjoyable than others (Gfeller & Knutson, 2003). Making music. Some pediatric and adult CI users have found greater satisfaction in playing instruments that do not require ongoing tuning (e.g., as is required for violin, guitar), and which provide visual and kinesthetic cues that supplement audition (e.g., the piano). In music instruction, it is important to remember that aural limitations for pitch perception are due in large measure to the characteristics of the CI, and do not reflect lack of effort or intelligence. Instructional goals and objectives should be individualized and reassessed periodically to reflect the individual’s current capabilities, as well as to identify realistic avenues for growth and development. Several sources in the references of this article provide more information regarding accommodations for music instruction (Gfeller et al., 2011; Gfeller et al., 2001; Hsiao & Gfeller, 2012).

• Listen to music in an optimal environment. Avoid extremely reverberant rooms or situations having many distractions.

• Improve perceptual accuracy and sound quality through listening practice. Change requires repeated exposure to musical sounds over time. Practice is likely to be more effective and less frustrating if completed when the listener is rested and alert, and in short rehearsals distributed over several days/weeks.

• Begin by attending to those structural attributes that are most effectively transmitted through the CI. For example, initially choose music activities for which rhythm is an important component (e.g., focusing on the rhythm while listening, moving to music). As skills increase, focus on more challenging tasks.

• Use contextual cues. These include watching the musician, reading the notation or song lyrics, or using prior knowledge of the music to piece together the sounds.

SUMMARY
Music perception is an important indicator of CI benefit, both with regard to social integration and quality of life. Although music as heard through a CI is not the same as with typical hearing, through judicious choices and persistent practice, many CI users can achieve more satisfactory engagement with music in their daily lives. What’s more, enhanced music perception may have carryover to more effective navigation of the complex acoustic environment encountered in everyday life.

Portions of this paper were supported by grant 2 P50 DC00242 from the NIDCD, NIH; grant 1R01 DC000377 from the NIDCD; grant RR00059 from the General Clinical Research Centers Program, NCRR, NIH; and the Iowa Lions Foundation. Thanks are due to Virginia Driscoll for assistance in the preparation of this manuscript.

REFERENCES

Au, A., Marozeau, J., Innes-Brown, H., Schubert, E., & Stevens, C. (2012). Music for the cochlear implant: Audience response to six commissioned compositions. Seminars in Hearing, 33(4), 335-345.
Chermak, G. (2010). Music and auditory training. The Hearing Journal, 63(4), 57-58.
Cross, I. (2004). Music, cognition, culture, and evolution. In I. Peretz & R. Zatorre (Eds.), The Cognitive Neuroscience of Music (pp. 42-56). Oxford, England: Oxford University Press.
Dorman, M. F., Gifford, R. H., Spahr, A. J., & McKarns, S. A. (2008). The benefits of combining acoustic and electric stimulation for the recognition of speech, voice, and melodies. Audiology and Neurotology, 13(2), 105-112.
Driscoll, V. (2012). The effects of training on recognition of musical instruments by adults with cochlear implants. Seminars in Hearing, 33(4), 410-418.
El Fata, F., James, C., Laborde, M., & Fraysse, B. (2009). How much residual hearing is ‘useful’ for music perception with cochlear implants? Audiology and Neurotology, 14, 14-21.
Fu, Q., & Galvin, J. (2007). Perceptual learning and auditory training in cochlear implant recipients. Trends in Amplification, 11(3), 193.



Fu, Q., Shannon, R. V., & Wang, X. (1998). Effects of noise and spectral resolution on vowel and consonant recognition: Acoustic and electric hearing. Journal of the Acoustical Society of America, 104, 3586-3596.
Gfeller, K. E. (2008). Music: A human phenomenon and therapeutic tool. In W. B. Davis, K. E. Gfeller & M. H. Thaut (Eds.), An Introduction to Music Therapy Theory and Practice (3rd ed., pp. 41-75). Silver Spring, MD: American Music Therapy Association.
Gfeller, K. E., Christ, A., Knutson, J. F., Witt, S., Murray, K. T., & Tyler, R. S. (2000). Musical backgrounds, listening habits, and aesthetic enjoyment of adult cochlear implant recipients. Journal of the American Academy of Audiology, 11, 390-406.
Gfeller, K. E., Driscoll, V., Kenworthy, M., & Van Voorst, T. (2011). Music therapy for preschool cochlear implant recipients. Music Therapy Perspectives, 29(1), 39-49.
Gfeller, K. E., Jiang, D., Oleson, J., Driscoll, V., & Knutson, J. F. (2010). Temporal stability of music perception and appraisal scores of adult cochlear implant recipients. Journal of the American Academy of Audiology, 21(1), 28-34.
Gfeller, K. E., & Knutson, J. F. (2003). Music to the impaired or implanted ear: Psychosocial implications for aural rehabilitation. ASHA Leader, 8(8), 12-15.
Gfeller, K. E., Mehr, M., & Witt, S. (2001). Aural rehabilitation of music perception and enjoyment of adult cochlear implant users. Journal of the Academy for Rehabilitative Audiology, 34(17), 27.
Gfeller, K. E., Oleson, J., Knutson, J. F., Breheny, P., Driscoll, V., & Olszewski, C. (2008). Multivariate predictors of music perception and appraisal by adult cochlear implant users. Journal of the American Academy of Audiology, 19(2), 120-134.
Gfeller, K. E., Turner, C., Oleson, J., Kliethermes, S., & Driscoll, V. (2012). Accuracy of cochlear implant recipients on speech reception in background music. Annals of Otology, Rhinology & Laryngology, 121(12), 782-791.
Gfeller, K. E., Turner, C., Oleson, J., Zhang, X., Gantz, B., Froman, R., & Olszewski, C. (2007). Accuracy of cochlear implant recipients on pitch perception, melody recognition and speech reception in noise. Ear and Hearing, 28(3), 412.
Gfeller, K. E., Turner, C., Woodworth, G., Mehr, M., Fearn, R., Witt, S., & Stordahl, J. (2002). Recognition of familiar melodies by adult cochlear implant recipients and normal-hearing adults. Cochlear Implants International, 3, 31-55.
Gfeller, K. E., Witt, S., Adamek, M., Mehr, M., Rogers, J., Stordahl, J., & Ringgenberg, S. (2002). Effects of training on timbre recognition and appraisal by postlingually deafened cochlear implant recipients. Journal of the American Academy of Audiology, 13, 132-145.
Gfeller, K. E., Witt, S. A., Spencer, L., Stordahl, J., & Tomblin, J. B. (1999). Musical involvement and enjoyment of children using cochlear implants. The Volta Review, 100(4), 213-233.


Gfeller, K. E., Witt, S., Stordahl, J., Mehr, M., & Woodworth, G. (2000). The effects of training on melody recognition and appraisal by adult cochlear implant recipients. Journal of the Academy of Rehabilitative Audiology, 33, 115-138.
Hsiao, F., & Gfeller, K. E. (2012). Music perception of cochlear implant recipients with implications for music instruction: A review of literature. Update: Applications of Research in Music Education, 30, 5-10.
Kong, Y. Y., Cruz, R., Jones, J. A., & Zeng, F. G. (2004). Music perception with temporal cues in acoustic and electric hearing. Ear and Hearing, 25(2), 173-185.
Kong, Y. Y., Stickney, G. S., & Zeng, F. G. (2005). Speech and melody recognition in binaurally combined acoustic and electric hearing. Journal of the Acoustical Society of America, 117(pt. 1), 1351-1361.
Kraus, N., & Skoe, E. (2009). New directions: Cochlear implants. Annals of the New York Academy of Sciences, 1169(1), 516-517. doi:10.1111/j.1749-6632.2009.04862.x
Kraus, N., Skoe, E., Parbery-Clark, A., & Ashley, R. (2009). Experience-induced malleability in neural encoding of pitch, timbre, and timing. Annals of the New York Academy of Sciences, 1169, 543-557.
Limb, C., & Roy, A. T. (2014). Technological, biological, and acoustical constraints to music perception in cochlear implant users. Hearing Research, 308, 13-26.
Looi, V., Gfeller, K. E., & Driscoll, V. (2012). Music appreciation and training for cochlear implant recipients: A review. Seminars in Hearing, 33(4), 307-334.
McDermott, H. J. (2004). Music perception with cochlear implants: A review. Trends in Amplification, 8(2), 49-81.
Nakata, T., Trehub, S. E., Mitani, C., & Kanda, Y. (2006). Pitch and timing in the songs of deaf children with cochlear implants. Music Perception, 24(2), 147-154.
Olszewski, C., Gfeller, K. E., Froman, R., Stordahl, J., & Tomblin, B. (2005). Familiar melody recognition by children and adults using cochlear implants and normal hearing children. Cochlear Implants International, 6(3), 123-140.
Rocca, C. (2012). A different musical perspective: Improving outcomes in music through habilitation, education and training for children with CIs. Seminars in Hearing, 33(4), 425-433.
Shahin, A. J. (2011). Neurophysiological influence of musical training on speech perception. Frontiers in Psychology, 2(126). doi:10.3389/fpsyg.2011.00126
Wright, R., & Uchanski, R. M. (2012). Music perception and appraisal: Cochlear implant users and simulated cochlear implant listening. Journal of the American Academy of Audiology, 23, 350-365.
Xu, L., Zhou, N., Chen, X., Li, Y., Schultz, H. M., Zhao, X., & Han, D. (2009). Vocal singing by prelingually-deafened children with cochlear implants. Hearing Research, 255, 129-134.

Kate Gfeller, Ph.D., is the Russell and Florence Day Professor of Liberal Arts and Sciences in the School of Music and the Department of Communication Sciences and Disorders at the University of Iowa. Gfeller is a member of the Iowa Cochlear Implant Clinical Research Team in the Department of Otolaryngology—Head and Neck Surgery at the University of Iowa Hospitals and Clinics. As a part of that multidisciplinary team, her research on music perception has been funded by the National Institutes of Health, the Office of Special Education and Rehabilitation, and the Department of Defense. For 30 years, she has worked as part of multidisciplinary teams in conducting basic and translational research and providing music therapy services for children and adults with hearing losses. She has investigated perception and enjoyment of music, with an emphasis on real-world complex sounds, as well as music-based programs for auditory skill development. This includes applications intended to promote more meaningful involvement in social and educational settings.



ListeningandSpokenLanguage.org

MARK YOUR CALENDARS
2015 Listening and Spoken Language Symposium
July 9-11, 2015
Baltimore Marriott Waterfront, Baltimore, MD

2016 AG Bell Convention
June 30-July 3
Sheraton Denver Downtown Hotel, Denver, CO

