
Progress Report

Recent Research by Hearing Health Foundation Scientists, Explained

Molecular Barriers to Overcome for Hair Cell Regeneration in the Adult Mouse Cochlea


In this image from eLife, scanning electron micrographs show that reprogrammed hair cells have hair cell-like structural features. Arrows indicate individual reprogrammed hair cells.

About 430 million people around the world experience disabling hearing loss. Hearing loss can happen when any part of the ear, or the nerves that carry sound information to the brain, does not work in the usual way.

For instance, damaged hair cells in the inner ear can lead to hearing loss. “These cells allow the brain to detect sounds,” says Amrita Iyer, Ph.D., the first author of a November 2022 paper published in eLife. During this project, Iyer was a graduate student in the lab of Andrew Groves, Ph.D., a professor and Vivian L. Smith Endowed Chair in Neuroscience and of molecular and human genetics at Baylor College of Medicine.

Hair cells are generated during normal development, but this ability is progressively lost after birth as mammals mature. “When hair cells are lost in mature animals, the cells cannot be naturally regenerated, which can lead to permanent hearing loss,” Iyer says.

To coax the mature ear into regenerating hair cells, the team turned to transcription factors, proteins that promote the expression of certain genes and prevent the expression of others. By changing the pattern of gene expression, the team hoped to push cells into a state in which they would regenerate hair cells in mature animals, similar to what happens during development.

“We compared the reprogramming efficiency of the hair cell transcription factor ATOH1 alone or in combination with two other hair cell transcription factors, GFI1 and POU4F3, in mouse non-sensory cells in the cochlea, the part of the inner ear that supports hearing,” Iyer says. “We did this at two timepoints—8 days and 15 days after birth, assessing the extent of hair cell regeneration in mice.”

To study the structure of the hair cell bundles generated by reprogramming, Iyer collaborated with the lab of Yehoash Raphael, Ph.D., at the University of Michigan to perform scanning electron microscopy imaging on the cochleae of mice conditionally overexpressing these transcription factors. The images clearly showed that the hair cell bundles were consistent with those observed on inner hair cells during development. Further studies showed that these cells also had some characteristics suggesting they were capable of sensing sound.

“We found that although expressing ATOH1 with hair cell transcription factors GFI1 and POU4F3 can increase the efficiency of hair cell reprogramming in older animals compared to ATOH1 alone or GFI1 plus ATOH1, the hair cells generated by reprogramming at 8 days of age—even with three hair cell transcription factors—are significantly less mature than those generated by reprogramming at postnatal day one,” Iyer says.

“We suggest that reprogramming with multiple transcription factors is better able to access the hair cell differentiation gene regulatory network, but that additional interventions may be necessary to produce mature and fully functional hair cells.”

Transcription factor–mediated reprogramming and its underlying biology may enable fine-tuning of gene therapy approaches for hearing restoration. —Ana María Rodríguez, Ph.D., Baylor College of Medicine

This originally appeared on the Baylor College of Medicine website. Coauthors and Emerging Research Grants (ERG) alumni Andy Groves, Ph.D. (far left), and Yehoash Raphael, Ph.D., are members of HHF’s Hearing Restoration Project, along with the late Neil Segil, Ph.D.

This figure from Cell Reports shows the different groups of cells in the typical chicken utricle. Learn more at hhf.org/blogs/cell-type-identity-of-the-chick-balance-organ.

Cell-Type Identity of the Chick Balance Organ

A major goal of the work conducted by Hearing Health Foundation’s Hearing Restoration Project (HRP) in recent years has been to obtain a complete set of genes that are active in each cell type in the chicken inner ear. In 2021 my laboratory provided an inventory of all genes expressed in the chicken hearing organ, also known as the basilar papilla.

Now, one year later, we have inventoried the genes expressed in the chicken utricle, an important balance organ. In our study in Cell Reports in September 2022, we identified genes that define three different sensory hair cell types and two distinct supporting cell groups.

The avian utricle, in contrast to the basilar papilla, continuously produces new hair cells throughout the animal’s life. Our team identified the sequence of gene expression changes in supporting cells during this natural process of new hair cell production. This knowledge will be important in future comparative studies where the HRP plans to compare the repertoire of genes active during hair cell regeneration in chicken and zebrafish with existing gene expression in the mouse and human inner ear.

Hair cell regeneration does not naturally happen in mice and humans, and therefore, it will be important to identify the genes that are missing in supporting cells from these species. This knowledge, in turn, will guide the selection of candidate genes for potential future therapeutic approaches.

This paper concludes a string of three publications by our lab in 2021 and 2022 that establish a baseline for investigating the molecular mechanisms of auditory hair cell regeneration in chickens. Although this work is still in progress, our team will present multiple abstracts at the February 2023 Association for Research in Otolaryngology meeting in Orlando, where we will discuss novel findings about the molecular pathways that initiate auditory hair cell regeneration in birds.

This is an exciting time for us and the HRP consortium because it demonstrates that the long-term investment into a systematic research approach with different animal models was worthwhile. We have now identified the first events that lead to proliferative hair cell regeneration in birds, which provides new leads that can be translated to mice and ultimately to humans.

The ability to monitor 20,000 genes in parallel in every individual cell in the regenerating inner ear is extremely powerful. It also provides an incredible challenge to identify the significant genes that ultimately can be targeted with a drug-based approach. We are off to a promising start, and we tackle this large puzzle with an incredibly motivated team. We are thankful for the support from Hearing Health Foundation that enables this work. —Stefan Heller, Ph.D.

Hearing Restoration Project member and prior ERG scientist Stefan Heller, Ph.D., is a professor of otolaryngology–head & neck surgery at Stanford University.

Support our research: hhf.org/donate.

Apparent Benefits of Cochlear Implantation Before Age 2

Early childhood deafness generally has a profound and lasting impact on educational and career attainments later in life because it reduces the quality and quantity of language experience, thereby inhibiting the acquisition of spoken language and literacy abilities. Cochlear implants, however, can enhance the quality and quantity of linguistic input and facilitate the development of spoken language and literacy skills in early onset childhood deafness.

We studied the categorization of speech sounds in two groups of adult cochlear implant (CI) users with early-onset deafness and a group of 21 hearing controls. Thirty of the CI users were implanted before age 4 (early CI group), and 21 were implanted after age 7 (late CI group). We evaluated listeners’ identification and discrimination of speech sounds along a voicing continuum consisting of the syllables /ba/ vs. /pa/, and a place of articulation continuum consisting of the syllables /ba/ vs. /da/.

Our paper in the Journal of Speech, Language, and Hearing Research in November 2022 showed that, for the /b/–/p/ contrast, when voice onset occurs at about 20 milliseconds (ms) or less after release of air pressure behind the lips, CI users with early onset deafness generally perceive the sound as /b/. But when voice onset occurs at longer intervals, it is generally perceived as /p/.

The accompanying figure illustrates the probability of an average listener in each group identifying the stimuli on the /ba/–/pa/ continuum as /ba/. The /ba/–/pa/ category boundary for each listener group is denoted by a dot on their curve. Stimuli to the left and above the /b/–/p/ category boundary were identified as /ba/, while stimuli to the right and below the boundary were identified as /pa/.

For each group, the /b/–/p/ contrast was quite sharp, as reflected in the steep slope of the curve in the vicinity of the /b/–/p/ category boundary. The average /b/–/p/ boundary was located at 17.3 ms for the hearing listeners and only 5 to 6 ms longer for listeners in the CI groups.
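A category boundary like the 17.3 ms value above is typically estimated by fitting a sigmoid (psychometric) function to each listener's identification responses and reading off the point where the two labels are equally likely. The following is a minimal illustrative sketch with made-up response proportions, not the study's actual data or analysis code; the function form and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    """Probability of a /ba/ response as a function of voice onset time (ms).

    'boundary' is the VOT at which /ba/ and /pa/ responses are equally
    likely (p = 0.5); 'slope' controls how sharp the category boundary is.
    """
    return 1.0 / (1.0 + np.exp(slope * (vot - boundary)))

# Hypothetical identification data: proportion of /ba/ responses at each
# voice onset time step along a /ba/-/pa/ continuum.
vot_ms = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
p_ba = np.array([0.98, 0.97, 0.93, 0.70, 0.30, 0.08, 0.03, 0.02, 0.01])

params, _ = curve_fit(logistic, vot_ms, p_ba, p0=[17.0, 0.5])
boundary_ms, slope = params
# boundary_ms: estimated /b/-/p/ category boundary for this listener
# slope: steepness of the identification curve near the boundary
```

A shallower fitted slope corresponds to the "shallower identification function slopes" reported for the CI groups, and a shifted boundary to their longer category boundaries.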

Clear categorical identification and discrimination functions were observed for each group on both the voicing and place of articulation continua, indicating broadly similar performance between CI users and hearing controls and between the early and late CI groups. Compared with hearing participants, however, the CI groups generally displayed longer/higher category boundaries, shallower identification function slopes, reduced identification consistency, and reduced discrimination performance.

The results also showed that earlier implantation was associated with better phoneme categorization (PC) performance within the early CI group, but not the late CI group. Within the early CI group, earlier implantation age but not PC performance was associated with better speech recognition. Conversely, within the late CI group, better PC performance but not earlier implantation age was associated with better speech recognition.

This indicates that, within the early CI group, the age a child receives a CI affects their eventual development of phoneme (speech sound) categories in adulthood, and that the linguistic categories within each CI group closely resemble those of hearing controls. Importantly, individuals implanted prior to age 2 seem to acquire top-down processing abilities that enable them to become more proficient at speech recognition than those implanted in later years, whose speech recognition generally appears to proceed from the bottom up.

These findings provide valuable insight into the interaction between the learner’s age (developmental period effect) and their linguistic experience (quality and quantity of linguistic input) in early development. We find a midsize developmental period effect occurring before age 4, which partly determines PC ability, as well as another midsize developmental period effect associated with the acquisition and use of top-down linguistic processing, occurring around age 2. —Joseph Bochner, Ph.D.

This figure from the Journal of Speech, Language, and Hearing Research shows the results of a speech identification task. Learn more at hhf.org/blogs/apparent-benefits-of-cochlear-implantation-before-age-2.

A 2017 ERG scientist generously funded by Royal Arch Research Assistance, Joseph Bochner, Ph.D., is a professor at the National Technical Institute for the Deaf at Rochester Institute of Technology.

Nicotine Injections Reduce Age-Related Changes in the Older Mouse Brain

Older adults can experience difficulties understanding speech in challenging environments, even in the absence of significant hearing loss. This indicates deteriorated processing in the aging central auditory system, which may lead to reduced communication and social activities. Epidemiological studies have demonstrated a link between reduced communication and social activity and declining cognitive abilities.

To improve central auditory processing in older adults, it may be necessary to use a combination of hearing aids and behavioral and pharmacological treatment approaches. Currently no pharmacological approach exists to improve central auditory processing, and it is also unclear if neural circuits in older adults can be reactivated to a “young” level.

Neural oscillations—electrical activity in the central nervous system that occur spontaneously and in response to stimuli—at specific frequency bands are associated with cognitive functions and can identify abnormalities in cortical dynamics. In this study published in Neurobiology of Aging in December 2022, Khaleel Razak, Ph.D., and team analyzed electroencephalogram (EEG) signals recorded from the auditory and frontal cortex of freely moving mice across young, middle, and old ages, and found multiple robust and novel age-related changes in these oscillations.

The paper notes that prior research has shown that manipulating specific neuron signaling pathways by nicotine administration can enhance sensory acuity, reaction time, and attentional and cognitive performance. Other research showed that these pathways are impaired in the aging auditory system. Therefore, the team hypothesized that nicotine administration would reduce age-related impaired cortical processing in old mice.

Razak and colleagues found that an acute injection of nicotine (0.5 mg/kg) in old mice partially or fully reversed the age-related changes in EEG responses. Nicotine had no effect on auditory brainstem responses, suggesting the effects occur more centrally.

Importantly, their data suggest that the auditory circuits that generate “young” responses to sounds are present in old mice, and can be activated by nicotine.

The researchers write that a number of nicotine-like, non-addictive drugs that target cognitive deficits in Alzheimer’s disease and other age-related disorders have been developed. These data in aging mice strongly suggest that topical or oral nicotine or nicotine-like substances may be profoundly beneficial for aging humans with central auditory processing disruptions.

This is adapted from the paper in Neurobiology of Aging. A 2009 and 2018 ERG scientist, Khaleel A. Razak, Ph.D., is a professor of psychology and the director of the graduate neuroscience program at the University of California, Riverside. Razak’s 2018 grant was generously funded by Royal Arch Research Assistance.

Support our research: hhf.org/donate.

Balance Control in People With Hearing or Vestibular Loss in One Ear

Recent studies have demonstrated a relationship between hearing loss and an increased risk of falls and reduced balance performance, but it is unclear whether this applies to people with hearing loss in one ear (unilateral) and typical hearing in the other ear. While unilateral hearing loss was once considered to have no functional limitations, data now suggest that it may lead to participation restrictions in social, family, and work settings.

We are investigating three theories regarding balance in individuals with hearing loss:

1) When healthy individuals perform balance tasks in complex sensory environments, they tend to respond to sensory perturbations (e.g., moving visual environments) by increasing their body sway. If one sense is impaired (e.g., balance or vision), people tend to over-rely on the other senses for balance. People with vestibular loss tend to be visually dependent and sway more than healthy controls when the visual environment is moving. Do people with hearing loss rely more on visual and somatosensory input because of the loss of auditory input?

2) The auditory and vestibular systems are anatomically very close. Do people with hearing loss rely more on visual and somatosensory input because they also have undiagnosed vestibular loss?

3) Individuals with hearing loss may compensate for the loss of auditory cues by using a “feed-forward” mechanism for balance, relying on prior expectation and motor planning rather than responding to dynamic sensory cues. If so, people with hearing loss would not increase their sway with changing visual load as expected in healthy controls.

For our study published in PLOS ONE in October 2022, we recruited people with unilateral vestibular loss or with unilateral hearing loss, along with healthy controls. The mean ages for the three groups ranged from 48 to 62.

We analyzed postural sway (from a force platform) and head sway (from a virtual reality headset) in response to two levels of auditory cues (none or rhythmic sounds via headphones), visual cues (static or dynamic), and somatosensory cues (floor or foam) within a simulated, virtual three-wall display of stars. We found no differences with the rhythmic auditory cues. The effect of foam was magnified in the unilateral balance loss group compared with controls for front to back and side to side postural sway, and all head directions except for side to side.

The vestibular loss group had significantly larger front to back and side-to-side postural and head sway on the static scene compared with controls. Differences in pitch, yaw, and roll emerged between the balance loss group and controls only with sensory perturbations.

The unilateral hearing loss group did not increase their postural sway and head movement with the increased visual load as much as controls did, particularly when standing on the foam. They also did not increase their side-to-side sway with the foam as much as controls did.

These findings support theory #3, suggesting that individuals with hearing loss in one ear employ a compensatory strategy of conscious control of balance. Overall, in this study patients with vestibular loss disorders had exaggerated responses to sensory stimuli, as expected, while the unilateral hearing loss patients’ response to stimuli was less reactive, demonstrating a stiffer posture.

Patients with hearing loss in one ear appear to have more conscious control over their response to sensory cues in their environment, resulting in a more deliberate control of balance with fewer degrees of freedom to respond to changes in the environment, almost like a guarding behavior. The functional implications of these preliminary findings need to be tested in future research. —Anat V. Lubetzky, PT, Ph.D.

This figure from PLOS ONE shows the front-to-back postural sway when people are standing on the floor (red) or foam (green). Learn more at hhf.org/blogs/balance-control-in-people-with-hearing-or-vestibular-loss-in-one-ear.

A 2019 ERG scientist, Anat V. Lubetzky, PT, Ph.D., is an associate professor at New York University’s department of physical therapy.

Neural tests to assess if important speech elements are heard could be equally useful in children and adults.

How Can We Measure Hearing Aid Success in the Youngest Patients?

In infants and children too young to participate in hearing tests, using neural responses to sound is a reliable way to assess how well they hear. Neural responses to sound are routinely used in audiology clinics for diagnosis to infer the degree and type of hearing loss in very young babies and in older children who may have additional developmental challenges.

However, the use of neural responses to sound to infer how well hearing aids—a common first form of intervention—provide access to speech is less well established. Such tests with hearing aids require the use of speech so hearing aids function the way they would during everyday conversations.

We have been working toward a clinical test that plays the word “susashee” and records neural responses tagged to each sound. The sounds are chosen and modified such that neural responses can help us infer whether low, mid, and high frequencies are accessible to the listener with or without hearing aids, and the extent to which hearing aids make each of these frequencies accessible to the listener.

Hearing all these frequencies is critical for speech understanding and for speech and language development in the early years of life, when rapid brain development occurs. Our previous work has shown that such a neural test can be useful for assessing access to speech in adults with hearing loss who use hearing aids, and that it can be measured in young infants while they sleep.

In our recent study funded by Hearing Health Foundation and published in the Journal of Speech, Language, and Hearing Research in October 2022, we assessed whether neural responses could predict the audibility and inaudibility of low-, mid-, and high-frequency speech played at soft to loud levels in children ages 5 to 17 as accurately as in adults, and whether analyzing the neural responses with different statistical metrics influences accuracy.

Our results demonstrate that neural tests are equally accurate in children and adults, and the type of analysis did not influence accuracy in children. Accuracy of predictions increased at high frequencies, but this was similar in both adults and children.

Further, in a parallel study published in the Journal of the Association for Research in Otolaryngology in August 2022, we were able to confirm that the same analysis features pertaining to neural response delays could be used in children and adults without substantial impact on estimation of such responses. We are currently evaluating the accuracy of this neural test in children with hearing loss with and without hearing aids, as well as advancing our analysis strategies to improve the accuracy of predictions for clinical use. —Viji Easwar, Ph.D.

A 2019 ERG scientist funded by the Children’s Hearing Institute, Viji Easwar, Ph.D., is the lead researcher of the pediatric hearing research program at the National Acoustic Laboratories in Sydney, Australia.

For references, see hhf.org/winter2023-references.

Support our research: hhf.org/donate.
