
One-On-One with Paul Fahey, Ph.D. ’64

Paul Fahey, Ph.D. ’64, who has taught at the University for more than 50 years, is retiring. He was also dean of CAS for a time and chair of the department for seven years. If you pry, he might tell you that he was the editor of the literary magazine when he was a student. Mostly, though, he talks — modestly — about his career in teaching and research, which had implications for hearing tests and speech recognition that we rely on in today’s digital world.

What was it like to be a physics major in the early ’60s at Scranton?

College opened up a whole world of academics and new friendships for me. For the first time, I was in an environment where really serious academics was the first priority. I was also super lucky to have the faculty members I did, among whom were Fr. J.J. Quinn for rhetoric, Fr. Ed Powers for mathematics, Fr. Eugene Gallagher for theology, Bernie McGurl for speech.

In advanced work, I had some of the ‘greats’: Joseph Harper, Eugene McGinnis and Andrew Plonsky for physics; Edward Bartley and Bernie Johns for math; Matt Fairbanks and Tom Garrett for philosophy.

Paul Fahey in 1964

You got your doctorate at the University of Virginia on a NASA fellowship. How did you get back to Scranton?

In the spring of 1968, I got a call from Gene McGinnis asking if I was finishing up my doctorate and, if so, would I like a job at Scranton. I said “yes and yes.”

How did you like teaching?

My first year of teaching was the most difficult and stressful year of my professional life. My struggle, in the beginning, was with advanced physics courses. My first course was statistical physics at the graduate level. (Yes, we had a master’s degree in physics back in 1968.)

I spent hours preparing for that course and was never happy with the results. Lucky for me, the students were so talented that they learned in spite of me. After the first year, I started to figure it out.

How did you get to researching at AT&T Bell Laboratories (now Nokia Bell Labs), often called the “idea factory,” which had been “tasked with overcoming the day-to-day engineering challenges of building a national communications network” at the time?

When I returned from a sabbatical at Cornell in 1976, the chair of the Department of Communications asked me to create a natural science course for his majors. The course had materials on how neurons work, how the eye works, how the ear works and how speech is encoded on an air stream by the vocal tract.

For the vocal tract, I used a book written by the head of the Acoustics Research Department at AT&T Bell Laboratories, James Flanagan. I asked Flanagan if he could give me a home so that I could learn more. He agreed, and I got a National Science Foundation fellowship and joined Bell Labs in 1982.

How has your research affected the wider world?

At Bell Labs, I worked with Jont Allen (University of Illinois) on the biophysics of hearing (from the middle ear to the inner ear). Our strong collaboration continued for about 30 years.

The practical aspect of this work is that there is instrumentation now for measuring infant hearing. You can insert a small probe into the ear canal and see if the middle ear is obstructed and, if it’s not, find out whether the inner ear is a healthy nonlinear organ — a healthy inner ear generates nonlinear signals of its own. If the inner ear responds linearly, it is not working properly, and the child is going to need treatment. If it responds nonlinearly, the inner ear is mechanically good.
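The idea of testing hearing by looking for nonlinearity can be sketched numerically. This is an illustrative toy model, not the clinical instrument: play two tones at frequencies f1 and f2 into a system, and a nonlinear system will produce extra “distortion product” tones (such as 2·f1 − f2) that a purely linear system cannot. The specific cubic distortion used below is an assumption for illustration.

```python
import numpy as np

fs = 48000                       # sample rate in Hz; 1 s of signal gives 1 Hz FFT bins
t = np.arange(fs) / fs           # one second of time samples
f1, f2 = 1000.0, 1200.0          # two probe-tone frequencies (Hz)
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

linear = x                       # a linear "ear" just passes the two tones through
nonlinear = x - 0.1 * x**3       # toy compressive nonlinearity (assumed for illustration)

def tone_level(signal, freq):
    """Spectral magnitude at a given frequency (bins are 1 Hz wide here)."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    return spectrum[int(round(freq))]

dp = 2 * f1 - f2                 # distortion-product frequency: 800 Hz
print(tone_level(linear, dp))    # essentially zero: no distortion product
print(tone_level(nonlinear, dp)) # clearly nonzero: the nonlinearity reveals itself
```

A probe in the ear canal applies the same logic in reverse: if energy shows up at frequencies that were never played in, the cochlea must be generating it nonlinearly, which is the signature of a healthy inner ear.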

There are several companies that make hearing test devices, including one started by my colleague Jont Allen and his wife. I did a fair amount of work on understanding what the physics has to be. You get these signals out, and then you have to work backward and say, “What kind of physics produces this?” Trying to understand that is what I did and do.

Why was Bell Labs interested in this?

At this time, they were trying to make speech recognizers, because if you can make a speech recognizer then you can send the information needed to re-create that speech with fewer symbols over the telephone line. Then you can fit more calls on the same telephone line. So, the idea was to recognize speech by starting the process with an electronic model of the inner ear.

In fact, these models are actually used today if you post something on YouTube. Say you post a video and you have a song playing in the background; it’s going to be detected, and you’re going to have to pay royalties for the use. All this started with speech recognizers.

With improvements in pattern recognition due to increased computing power, the recognizers are really good. Just ask Alexa or Siri.

What lies ahead in retirement?

The University of Scranton has been my community for about 55 of my 77 years. I think that we have something special at the University, and I plan to continue my membership part time. I will also stay connected with the worlds of physics and biophysics. Those worlds give me so much pleasure. I will have more discretionary time now, so we have more flexibility to do other things, especially travel and family activities.

What will you miss the most?

In transitioning to part-time teaching, I will be teaching fewer students. So, over time, when I walk the Commons or the halls, I will know fewer students by name and fewer will be giving me the extra-friendly hello. Seeing friendly faces in the morning has always given me a boost.
