Recent Research by Hearing Health Foundation Scientists, Explained
Common Drugs Could Protect Against Noise-Induced Hearing Loss
A growing number of people are suffering from hearing loss due to exposure to loud noises from heavy machinery, concerts, or explosions. As a result, scientists have been working to understand the mechanism through which damage to hearing actually occurs.
Now, a team led by researchers at the University of Maryland School of Medicine (UMSOM) has published an online interactive atlas of the changes in RNA levels in the different cell types of the mouse ear after damage from loud noise. These changes in RNA levels are known as changes in “gene expression.”
Once they determined the larger trends in gene expression following the damage, the UMSOM scientists searched a database of FDA-approved drugs to find those known to produce gene expression patterns opposite to those caused by the noise. From this analysis, the researchers identified a handful of drug candidates that may be able to prevent or treat the damage, and ultimately preserve hearing. The researchers’ analysis was published in Cell Reports in September 2021.
“As an otolaryngologist surgeon-scientist, I see patients with hearing loss due to age or noise damage, and I want to be able to help prevent or even reverse the damage to their hearing,” says study leader Ronna Hertzano, M.D., Ph.D., a professor of otorhinolaryngology–head and neck surgery, anatomy, and neurobiology at UMSOM and an affiliate member of UMSOM’s Institute for Genome Sciences. “Our extended analysis gives us very specific avenues to follow up on in future studies, as well as provides an encyclopedia that other researchers can use as a resource to study hearing loss.”
The team added their newest data on noise-induced hearing loss to the gEAR—the Gene Expression Analysis Resource—a tool developed by Hertzano’s laboratory that allows researchers not trained in informatics to browse gene expression data.
Hertzano explains that the inner ear resembles the shell of a snail, with separate fluid compartments and sensory cells along its entire length. The ear functions like a battery, with a gradient of ions between the fluid compartments that the side wall of the shell generates by adding in potassium. The sensory cells detect sound and then communicate with the neurons that interact with the brain to interpret the signal. The sensory cells are surrounded by support cells. The inner ear also has resident immune cells to protect it from infection.
Research supervisor Beatrice Milon, Ph.D., in Hertzano’s laboratory initially analyzed the sensory cells and the support cells of the ear in mice, collecting data on the changes in gene expression before and after noise damage. After making their study known to other researchers in their field, the team heard from scientists at Decibel Therapeutics (led by Joe Burns, Ph.D.) and the Karolinska Institute (led by Barbara Canlon, Ph.D.), who had gene expression data from the inner ear’s neurons, side wall, and immune cells before and after noise damage. The teams then combined the datasets and performed their analyses.
The bioinformatic analyses were led by Eldad Shulman from the lab of Ran Elkon, Ph.D., Tel Aviv University, a bioinformatics expert who has been working collaboratively with Hertzano now for over two decades. Together, they leverage advanced computational techniques and combine them with biological insights to analyze and interpret data, providing impactful insights to the hearing research field.
Hertzano says it was important that they looked at the level of individual cell types, rather than at the ear as a whole, because most of the gene expression changes turned out to be specific to only one or two cell types.
“We expected the subset of neurons typically sensitive to noise and aging to have ‘bad’ changes in genes, so that we could counter them with drugs, but there was no such thing,” Hertzano says. “On the contrary, we found that the subset of neurons that are resistant to noise trauma turn on a program that protects them while the very sensitive neurons had little change in gene expression. We are currently looking into approaches to induce the protective changes in the noise-sensitive neurons to prevent their loss from noise and aging.”
In another example, the researchers found that only one out of the four types of immune cells detected showed major differences in gene expression.
Additionally, immune-related genes were turned up in all cell types of the inner ear after noise damage, with many of them controlled by two key regulators.
The research team took the overall gene expression trends and plugged them into DrugCentral, a database of known molecular responses to FDA-approved drugs, specifically searching for changes that would be opposite of those happening in the noise-damaged cells. They identified the diabetes drug metformin as a potential candidate, as well as some inhaled anesthetic medications used in surgeries and other medications.
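For readers curious how such a “reverse signature” search works in principle, here is a minimal sketch written for this article—it is not the study’s actual pipeline, and all gene names and values are invented. It scores hypothetical drugs by how strongly their known expression changes oppose a noise-induced signature.

```python
# Toy illustration of reverse-signature drug matching (values are made up).
# Noise-induced changes: positive = gene turned up after noise damage.
noise_signature = {"GeneA": 2.1, "GeneB": -1.4, "GeneC": 0.9}

# Hypothetical drug signatures, as might be drawn from a drug-response database.
drug_signatures = {
    "drug_1": {"GeneA": -1.8, "GeneB": 1.1, "GeneC": -0.5},
    "drug_2": {"GeneA": 1.5, "GeneB": -0.9, "GeneC": 1.2},
}

def reversal_score(noise, drug):
    """Negative dot product over shared genes: higher = more 'opposite'."""
    shared = set(noise) & set(drug)
    return -sum(noise[g] * drug[g] for g in shared)

# Rank candidate drugs by how well they counteract the noise signature.
ranked = sorted(drug_signatures,
                key=lambda d: reversal_score(noise_signature, drug_signatures[d]),
                reverse=True)
print(ranked)  # drug_1 ranks first: its changes run opposite to the noise-induced ones
```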
“Hearing aids and cochlear implants are used to alleviate hearing loss; however, there are no therapies available to prevent or treat hearing loss,” says E. Albert Reece, M.D., Ph.D., MBA, the executive vice president for medical affairs, UM Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor, and Dean, UMSOM. “The studies that follow up on these findings may eventually lead to medications to prevent occupational noise-induced hearing loss, for example in factory workers, and to changes in standardizing anesthesia protocols for ear surgery, particularly in hearing preservation procedures.” —University of Maryland School of Medicine
This originally appeared on the University of Maryland School of Medicine website, at medschool.umaryland.edu/news. HHF’s Hearing Restoration Project (HRP) member Ronna Hertzano, M.D., Ph.D., is a 2009–10 Emerging Research Grants (ERG) scientist. Hertzano and HRP scientific director Lisa Goodrich, Ph.D., recently hosted a webinar on hair cell regeneration; please see hhf.org/webinar to view the captioned recording.
Postural and Head Control Given Different Environmental Contexts
Researchers used two virtual reality contexts to implement gradual changes in visual and auditory input: an abstract stars scene and a city scene.
Balance is known to be specific and context-dependent. Context is defined as the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood and assessed.
The majority of studies reporting that balance is context-dependent refer to the task: standing steady (static balance) or moving (dynamic balance); performing a balance task alone or with an additional cognitive task such as memory or calculation; standing on a stable or an unstable surface, etc. For example, improving one’s single-leg stance time is not expected to transfer to faster or more stable walking.
Current virtual reality technology allows context-based testing of multisensory integration and balance using head-mounted displays (HMDs). By creating diverse environments that provide different contexts (e.g., a street vs. the clinician’s office), we are better positioned to assess changes in balance performance potentially induced by cognitive and emotional aspects, such as postural threats, fear of imbalance, or symptoms related to past experiences within specific environments.
In our study published in Frontiers in Neurology in June 2021, we investigate how postural sway and head kinematics change in healthy adults in response to different levels of combined auditory and visual perturbations (changes): static visuals without sounds, and then two levels of moving visuals and dynamic sounds of varying intensity. These gradual perturbations were created in two different contexts, an abstract stars scene or a city scene.
Our first finding was that the current settings were too subtle to test differences between responses to low and high visuals and sounds. We therefore combined the low and high data and compared the ‘static visual, no sound’ vs. dynamic scenes. That prevented us from being able to isolate the role of sounds in this specific protocol. Future studies should isolate each modality in the presence of the other.
Our second finding was that for most measured parameters (side-to-side postural sway and head movement, pitch and roll head movement), an increase in movement between the static and dynamic scenes was greater in the city than the stars scene. These findings support the importance of context in the study of sensory integration and confirm the feasibility of an HMD setup to evaluate balance in different contexts.
We also explored the feasibility of this novel HMD assessment in individuals with unilateral peripheral vestibular hypofunction and monaural hearing (hearing on one side only). The majority of the vestibular group moved more than controls when the scenes were dynamic, particularly in the city scene. The monaural hearing group was more diverse, with slightly more than half performing similarly to controls. Those who performed differently had either prior vestibular rehabilitation or reduced hearing compared with the others.
Fall risk in people with hearing loss has been shown in older adults, and our pilot data suggests balance impairments in people with single-sided hearing are more likely to arise in older participants with moderate dizziness. Future studies utilizing HMDs should further assess the impact of aging with and without hearing loss on postural performance in a larger sample with a specific diagnosis of the hearing loss type as well as control for vestibular symptoms in those with hearing loss. —Anat V. Lubetzky, Ph.D.
A 2019 ERG scientist, Anat V. Lubetzky, Ph.D., is an assistant professor in the physical therapy department at New York University. For more about Lubetzky’s research, see page 18.
Cochlear Organoids Reveal HIC1’s Role in Hair Cell Differentiation
Sensory hair cells in the cochlea translate sound into electrical signals that they transmit to auditory neurons. These cells are lost over the course of life through cumulative damage from aging, infection, certain medications, and loud sounds. Sensory hair cells do not grow back, and consequently hearing loss is irreversible.
In contrast, non-mammals, like birds and fish, can regenerate their damaged hair cells. Recent studies have also found spontaneous regeneration of hair cells after damage in the newborn mouse cochlea and balance organ. This regeneration involves a series of molecular pathways whereby precursor progenitor/stem cells (which are a subset of supporting cells that reside adjacent to the hair cells) begin to express the key hair cell transcription factor ATOH1 and differentiate into hair cells. However, this capacity for regeneration disappears during maturation of the mammalian cochlea, suggesting that regeneration is possible but repressed after development.
Understanding this molecular blockade hindering progenitor cell to hair cell conversion promises to reveal key steps needed to reverse hearing loss. Studying these pathways has traditionally been limited by the need for complex transgenic mice and by the few relevant tools available in a dish. The advent of inner ear organoids, which yield an expanded pool of progenitor cells, revolutionized our ability to study this process robustly in a dish. We derive organoids by first isolating and expanding progenitor cells from newborn mouse cochleae and then differentiating them into hair cells. We can genetically modify the organoids to study how specific proteins affect this process.
We focused on the HIC1 (hypermethylated in cancer 1) protein, as its role regulating ATOH1 has been established in other systems. It has been previously shown to directly bind to and suppress the regulatory regions around the Atoh1 gene during cerebellar development, and Hic1 deletion appears to permit Atoh1 expression and differentiation of Paneth cells in the intestine (like hair cells, Paneth cells express Atoh1). We hypothesized that HIC1 could contribute to repression of the Atoh1 gene in the cochlea through transcriptional regulation and interaction with Wnt, a key signaling pathway important in cochlear development.
As we reported in Stem Cell Reports in April 2021, we found that across various time points (ages), Hic1 is expressed throughout the mouse sensory epithelium. In cochlear organoids, HIC1 knockdown (suppression) induces Atoh1 expression and promotes hair cell differentiation, while HIC1 overexpression hinders differentiation. We go on to study HIC1’s interaction with Wnt signaling, which appears to be critical to its mechanism. Our findings reveal the importance of HIC1 repression of Atoh1 in the cochlea. They also demonstrate the power of combining the organoid model with the genetic toolkit to study key regulators of hair cell differentiation, which we hope will be leveraged to advance our understanding of hair cell development and regeneration. —Dunia Abdul-Aziz, M.D.
A 2019 ERG scientist, Dunia Abdul-Aziz, M.D., is an otolaryngologist at Mass Eye and Ear and an instructor in otolaryngology–head and neck surgery at Harvard Medical School.
This is a representative brightfield image of inner ear organoids. Within the organoids, a much larger proportion of cells in which Hic1 is knocked down (using short-hairpin RNA to Hic1, or shHic1; marked by a red reporter) demonstrate overlapping expression of Atoh1 (green) compared with untreated organoids or organoids treated with nontargeting shRNA. This study further shows that these cells express other hair cell markers consistent with their differentiation into hair cells.
Novel Small Molecule Promotes Synaptic Regeneration In Vitro
The small molecule 1Aa promotes neurite outgrowth in vitro. Below, neurites were stained with neuronal marker TuJ (red), and nuclei were labeled with DAPI (blue). The scale bar represents 100 μm. Images are representative of four independent experiments.
Hearing loss is associated not only with the loss of sensory hair cells in the inner ear, but also with the loss of nerve cells called spiral ganglion neurons (SGNs), as well as the synapses between surviving hair cells and SGNs. Improving SGN survival, neural outgrowth, and synaptogenesis (synapse formation) could lead to significant gains for patients with hearing loss.
Neurotrophic factors (molecules that regulate the growth and survival potential of neurons) might promote the survival of SGNs and the rewiring of sensory hair cells by surviving SGNs. In a paper published in Frontiers in Cellular Neuroscience in July 2021, my team and I detail how we have pioneered a hybrid molecule approach to maximize SGN stimulation through the use of small molecule analogues of neurotrophin-3 (NT-3). NT-3, in addition to brain-derived neurotrophic factor (BDNF), is the primary neurotrophin in the inner ear during development and throughout adulthood. Both have demonstrated potential for SGN survival and neurite outgrowth.
We have previously shown that a small molecule BDNF analogue can promote SGN neurite outgrowth and synaptogenesis in vitro. There is evidence that NT-3 will have a greater regenerative capacity in the cochlea than BDNF in a number of contexts. We therefore sought to develop a similar approach for NT-3 using 1Aa, a small molecule analogue of NT-3. To maximize the potential for drug delivery to the bone-encased cochlea, we also studied the activity of a bone-binding derivative of NT-3. This hybrid molecule links 1Aa to risedronate, which is a clinically used bisphosphonate molecule that avidly binds bone for the treatment of osteoporosis, to create Ris-1Aa.
Using an in vitro mouse model, we demonstrate that both 1Aa and Ris-1Aa stimulate neurite outgrowth and synaptogenesis in SGN cultures at a significantly higher level compared with controls. This result provides the first evidence that a small molecule analogue of NT-3 can stimulate SGNs and promote regeneration of synapses between SGNs and inner hair cells.
This work furthers the development of an effective drug delivery platform for the inner ear that uses the cochlear bone as a depot for prolonged neurotrophic stimulation of SGNs. Our method may bypass the pitfalls of systemic administration—increased risk of side effects and insufficient levels of drug delivery—and the dangers related to opening the cochlea.
As we have now described novel small conjugated molecules with neurotrophic activity in vitro, we anticipate that other small molecules with desired activities within the cochlea could potentially be delivered via this platform. —David Jung, M.D., Ph.D.
A 2018 ERG scientist, David Jung, M.D., Ph.D., is an otolaryngologist at Mass Eye and Ear and an assistant professor in otolaryngology–head and neck surgery at Harvard Medical School. Coauthor Judith Kempfle, M.D., is a research fellow in otolaryngology–head and neck surgery at Mass Eye and Ear and a 2010–2011 ERG awardee. Another coauthor, Albert Edge, Ph.D., is the Eaton-Peabody Professor of Otolaryngology–Head and Neck Surgery at Mass Eye and Ear and a member of the Hearing Restoration Project consortium.
Verifying a Novel Method for Assessing Speech Motor Skills in Children With Cochlear Implants
Auditory input is essential to the acquisition and maintenance of speech production skills, and yet speech production has been scarcely studied in children and adults with impoverished auditory input since cochlear implants (CIs) became a standard treatment. A critical piece of the puzzle in understanding the development and maintenance of speech motor control in CI users is how these individuals time, sequence, and coordinate speech movements. During conversational speech, typical hearing individuals usually produce six to nine syllables per second, using approximately 100 different muscles in their laryngeal and supralaryngeal vocal tracts.
Recent technological advances have greatly improved researchers’ ability to track the motions of the speech articulators (tongue tip, tongue body, lips, and jaw). Our University of Florida Cognition, Action, and Perception of Speech Lab uses electromagnetic articulography (EMA) to directly record speech movements within the inner reaches of the vocal tract on a millisecond-by-millisecond basis, and to localize where in the vocal tract the movements occur. The tracking is performed by using magnetic fields to localize the positions of sensors temporarily attached to the articulators during speaking.
However, the use of EMA in children and adults with CIs had been curtailed because it was not clear that direct measures of speech motor actions could be made without negative interactions between CIs and EMA, since both devices make use of magnetic fields. In our current study, published in the Journal of the Acoustical Society of America Express Letters in August 2021, my team and I demonstrated for the first time that there is minimal cross-interference between the devices, suggesting that EMA is a promising method for assessing speech motor skills in children with CIs. Our team is now working to establish optimal methods for collecting articulographic data from CI users.
Collectively, the current findings lay the foundation for future research aimed at developing novel, mechanistically driven rehabilitation protocols to optimize speech motor instruction for deaf children. By combining principles and tools from engineering and computer science with cognitive and linguistic science, we envision developing robotic devices to deliver speechlike patterns of somatosensory input to the vocal tracts of children who use CIs as they learn to listen to speech sounds through their CI processor. —Matthew Masapollo, Ph.D.
Matthew Masapollo, Ph.D., affixes articulography sensors onto the lips, tongue, and jaw for speech movement tracking of a cochlear implant (CI) user. Three transmitter coils, located at the vertices of an equilateral triangle above the head of the CI user, generate and radiate out a series of magnetic fields, which track the positions of the sensors in near real-time during a speaking task.
A 2022 ERG recipient, Matthew Masapollo, Ph.D., is the director and principal investigator of the University of Florida’s Laboratory for the Study of Cognition, Action, and Perception of Speech, which was established in 2020.
Common Loud Noises Cause Fluid Buildup in the Mouse Inner Ear—Which May Be Easily Resolved
Exposure to loud noise, such as a firecracker or an ear-splitting concert, is the most common preventable cause of hearing loss. Research suggests that 12 percent or more of the world population is at risk for noise-induced loss of hearing.
Loud sounds can cause a loss of auditory nerve cells in the inner ear, which are the cells responsible for sending acoustic information to the brain, resulting in hearing difficulty. However, the mechanism behind this hearing loss is not fully understood.
Now, a new Frontiers in Cell and Developmental Biology study from Keck Medicine of University of Southern California (USC) links this type of inner ear nerve damage to a condition known as endolymphatic hydrops, a buildup of fluid in the inner ear, showing that these both occur at noise exposure levels people might encounter in their daily life. Additionally, researchers found that treating the resulting fluid buildup with a readily available saline solution lessened nerve damage in the inner ear.
“This research provides clues to better understand how and when noise-induced damage to the ears occurs and suggests new ways to detect and prevent hearing loss,” says John Oghalai, M.D., an otolaryngologist with Keck Medicine, the chair of the USC Caruso Department of Otolaryngology–Head and Neck Surgery, and the lead author of the study. A previous study he conducted on mice exposed to blast pressure waves simulating a bomb explosion linked nerve damage with fluid buildup in the inner ear.
For this study, Oghalai and colleagues wanted to explore the effect of common loud sounds ranging from 80 to 100 decibels (dB) on the mouse inner ear. After the exposure, they used an imaging technique known as optical coherence tomography to measure the level of inner ear fluid in the cochlea, the hollow, spiral-shaped bone found in the inner ear.
Up until exposure to 95 dB of sound, the inner ear fluid level remained typical. However, researchers discovered that after exposure to 100 dB—which is equivalent to sounds such as a power lawn mower, chainsaw, or motorcycle—the mice developed inner ear fluid buildup within hours. A week after this exposure, the mice were found to have lost auditory nerve cells.
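As a side note on these numbers (this calculation is ours, not part of the study): the decibel scale is logarithmic, so the step from 95 to 100 dB is roughly a threefold jump in sound intensity, as the short sketch below shows.

```python
# Convert a difference in decibel levels to a ratio of sound intensities.
def intensity_ratio(db_high, db_low):
    """Ratio of sound intensities for two levels given in dB."""
    return 10 ** ((db_high - db_low) / 10)

print(round(intensity_ratio(100, 95), 2))  # ~3.16: 100 dB carries about 3x the energy of 95 dB
print(round(intensity_ratio(100, 80), 1))  # 100.0: 100 dB is 100x the intensity of 80 dB
```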
However, when researchers applied hypertonic saline, a salt-based solution used to treat nasal congestion in humans, into the affected mouse ears one hour after the noise exposure, both the immediate fluid buildup and the long-term nerve damage lessened, implying that the hearing loss could be at least partially prevented.
These study results have several important implications, according to Oghalai, especially as the loss of nerve cells in the inner ear is known as “hidden hearing loss” because hearing tests are unable to detect the damage.
“First, if human ears exposed to loud noise, such as a siren or airbag deployment, can be scanned for a level of fluid buildup—and this technology is already being tested out—medical professionals may have a way of diagnosing impending nerve damage,” he says. “Secondly, if the scan discovered fluid buildup, people could be treated with hypertonic saline and possibly save their hearing.”
Oghalai also believes the study opens a new window into understanding Ménière’s disease, a disorder of the inner ear that causes vertigo, tinnitus, and hearing loss.
“Previously, inner ear fluid buildup was thought to be primarily linked to Ménière’s disease. This study indicates that people exposed to loud noises experience similar changes,” he says.
Oghalai hopes this study will lead to further research on the reasons ear fluid buildup occurs, and encourage the development of better treatments for Ménière’s disease. —Keck Medicine of University of Southern California
This originally appeared on the Keck Medicine of USC website, at news.keckmedicine.org. John Oghalai is a 1996–1997 ERG scientist. The study was supported by the National Institute on Deafness and Other Communication Disorders.
For links to all the publications in this section, see hhf.org/winter2022-references.
This sponsored page shows current trends in technology.
Tech Solutions
CapTel® Captioned Telephone
Enjoy the phone again with confidence! Ideal for people with hearing loss, the CapTel® Captioned Telephone shows captions of every word the caller says over the phone. You can listen to the caller and read the written captions on the CapTel® display screen. Only CapTel® gives you several models to choose from, including contemporary touch-screen options and traditional telephone styles. All CapTel® phones include a large display screen, adjustable font sizes and colors, and a built-in answering machine that shows captions of your messages. CapTel® gives you the confidence to reconnect over the phone, knowing you won’t miss a word! Visit CapTel.com.
Never Miss a Word With Olelo Captioned Calls
Powered by advanced speech recognition, Olelo is a next-generation captioned phone service designed for maximum accuracy and privacy in an easy-to-use mobile app.
• Captions each spoken word in real time.
• Fully automated captioning protects your privacy.
• Voicemail, saved transcripts, Bluetooth, and more.
It’s free! Olelo is an FCC-certified service paid for by a federally administered fund. Try Olelo today! Install takes just minutes! Visit getolelo.com.
These featured products are paid advertisements. To advertise in Hearing Health magazine, email hello@glmcommunications.com or call 212.929.1300.
Supporters of Hearing Health
Captioned Telephone
CapTel captioned telephone shows word-for-word captions of everything a caller says over the phone. Like captions on TV—for your phone. Helps people with hearing loss enjoy phone conversations, confident they’ll catch every word.
captel.com • 800.233.9130 • pages 2, 48

InnoCaption is the only mobile app that offers real-time captioning of phone calls through live stenographers and automated speech recognition software—the choice is yours. Our technology makes phone calls easy and accessible!
innocaption.com • page 5

The Les Paul Foundation inspires innovative and creative thinking by sharing the legacy of Les Paul through support of music education, recording, innovation, and medical research related to hearing.
lespaulfoundation.org • 212.687.2929

Olelo Captioned Calls is an easy-to-use app that produces accurate captioning in less than a second, keeps conversations private by using automated speech recognition, and can be used anywhere service or WiFi is available.
olelophone.com • page 48

Hear and be heard, loud and clear. Panasonic’s amplified cordless phone systems are ideal for everyone affected by hearing loss. Among many features, the Volume Booster amplifies the call volume up to 50 decibels.
shop.panasonic.com/amplified • page 7

Capture the greater meaning behind words across our new extended lineup. The Discover Next platform boasts ITEs, BTEs, and RICs, including Moxi™ Move R. Enjoy enhanced flexibility with new remote adjust—an easy way to deliver fine-tuning adjustments remotely.
unitron.com/discovernext • 800.888.8882 • page 51

Hearing Health Foundation’s Emerging Research Grants (ERG) fund innovative approaches toward understanding, preventing, and treating hearing and balance conditions.
hhf.org/erg

Hearing Health Foundation’s Hearing Restoration Project is the first international research consortium investigating how to regenerate inner ear sensory hair cells in humans to eventually restore hearing.
hhf.org/hrp

Picture Your Company Logo Here
All Hearing Health advertisers and other partners are featured on this Marketplace page at no additional charge. Please join our community of supporters.
212.929.1300
Meet the Researcher
Emerging Research Grants (ERG)
As one of the leading funding sources available for innovative research, HHF’s ERG program is critical. Without our support, scientists would not have the needed resources for cutting-edge approaches toward understanding, preventing, and treating hearing and balance disorders.
Megan Beers Wood, Ph.D.
Johns Hopkins University School of Medicine
Wood received her doctorate in immunology and molecular pathogenesis at Emory University in Atlanta. She is now a postdoctoral research fellow at Johns Hopkins University School of Medicine in the department of otolaryngology. Wood’s 2022 Emerging Research Grant is generously funded by Hyperacusis Research.
I was intrigued to find that neurons in the cochlea expressed genes similar to pain-sensing neurons. Since my background is in immunology, I’m interested in whether the alpha-calcitonin gene-related peptide (CGRPα) in those neurons interacts with immune cells after noise exposure, like in other organs. I looked for CGRPα protein in type II peripheral endings after exposure and saw promising results. My current project allows me to learn more about this exciting observation, which may help with hyperacusis—an elevated sensitivity to everyday sounds.
I was interested in science from a very young age and wanted to be a doctor. My first “research” project was in 5th grade when I looked into how Super Glue could be used instead of stitches for some wounds. I am also lucky to come from a family with several scientists. On one side was my grandfather, who was a forester with a Ph.D. in plant biology, and on the other side I have two cousins who are scientists—one a chemist working for the Smithsonian Institution and the other a scientist working in biotech. So I would say we are all a curious bunch, and that was encouraged at home by nature walks with my parents and science kits to play with.
As a toddler I was diagnosed with juvenile idiopathic arthritis. My experiences in teaching hospitals were very inspiring. One of my pediatric rheumatologists even oversaw my independent research study in high school, which was my first real introduction to scientific literature.

My goal as a researcher has been to explain rare phenomena. I recently gave a talk for Hyperacusis Research. It was very rewarding, as I was able to interact with people experiencing the condition. It really helped me understand the mechanisms underlying hyperacusis.
I grew up around music, so the experiential side of hearing has always been important to me. My grandmother wore hearing aids. I saw firsthand how uncomfortable they are and how isolated she became when the batteries got low.
Over the summer I worked in a community garden. We grew 70 pounds of cucumbers and 15 varieties of tomatoes! I like to think I’m following in my grandfather’s footsteps as he grew abundant vegetables. I also enjoy embroidery. I find free-handing shapes lets me slow down. It keeps my hands busy while I think over complex problems. —Heather Chambers
Megan Beers Wood, Ph.D., is generously funded by Hyperacusis Research. We thank them for their support of studies that will increase our understanding of the mechanisms, causes, diagnosis, and treatments of hyperacusis and severe forms of loudness intolerance.
We need your help funding the exciting work of hearing and balance scientists. Please consider donating today to Hearing Health Foundation to support groundbreaking research. Visit hhf.org/how-to-help.
6 Ways to Make an Impact Today and Tomorrow
You can make a meaningful difference in hearing loss research. Whichever method below you choose, every gift to Hearing Health Foundation (HHF) counts.
Check or credit card gifts online or by mail are easy and immediate. For more of an impact, schedule a monthly gift that helps sustain research without interruption.
Donating appreciated stock can reduce your tax bill. You receive a charitable tax deduction for the full value of the stock, and avoid paying taxes on the stock as it appreciates.
A charitable bequest in your will can be a more substantial gift if you are unable to donate today. If you do not have a will, create one for free at freewill.com/hhf. The De Francescos named HHF in their estate plans.
If you are in possession of life insurance policies that you no longer need, you can designate HHF as the beneficiary.
IRA distributions that begin when you turn 70 1/2 can be taxed as income, but if you choose to donate them to HHF, you avoid the penalty.
Retirement plan benefits left to heirs are more highly taxed than other assets. Make a meaningful gift to HHF instead, leaving lower-taxed assets to loved ones.
This publication is distributed for free through the generous support of our community. To learn more, visit hhf.org/how-to-help, email plannedgiving@hhf.org, or call 212.257.6140.