One of the most important measures in the fight against the coronavirus pandemic is wearing a face mask, be it cloth, surgical or FFP2. On the downside, these masks, which cover the lower half of the face, also make interpersonal communication more difficult. The spoken word must penetrate the layers of fabric, and facial expressions no longer provide context. Most people are able to compensate for these restrictions through clear articulation and focused listening. People with a hearing impairment, however, are at a significant disadvantage, since they now have to manage without the additional information provided by lip movements. Especially in challenging communication situations involving several speakers, they have a very hard time grasping what is being said.
Nathan Weisz and Anne Hauswald from the Centre for Neurocognitive Research at the University of Salzburg are exploring this very topical issue in the project “The Impact of Face Masks on Speech Comprehension”, which has been awarded a grant under the “Urgent Funding SARS-CoV-2” track of the Austrian Science Fund FWF. “Hearing-impaired people have to make a much greater effort when communicating with mask wearers to compensate for the lack of visual information,” Weisz notes. Together with Hauswald and fellow researchers, he wants to find out how the brain’s processing of the auditory signal changes when this visual information is missing. On the basis of high-resolution recordings of the weak – yet measurable – magnetic fields generated during signal processing in the brain, the researchers are trying to draw conclusions about the incoming stimuli.
New perception with cochlear implant
The project builds on a field of research that Weisz and Hauswald are also exploring in greater detail in another FWF-funded project, launched in 2018. One of the issues they investigate there is how hearing-impaired people with a cochlear implant can relearn their auditory perception skills. The implant bypasses the dysfunctional inner ear and transmits signals directly to the auditory nerve. The hearing sensation created in this way is, however, fundamentally different from that created by a healthy ear. Some patients have problems adapting to these new conditions, while others experience very fast rehabilitation and can resume their daily lives without any problems.
In this context, too, the central question is how visual input supports hearing in a communication situation. Does the processing of visual information assist people in learning to interpret the signals from the implant more efficiently? While it is obvious that what people see directs their attention and focuses their listening, the sensory teamwork goes much further than that. “The lip movements observed by lip readers are translated into an acoustic representation in the brain,” explains Nathan Weisz, who calls this a “visuo-phonological transformation process”.
This research builds on studies that Anne Hauswald conducted in her previous position as a postdoctoral researcher at the University of Trento. “I was able to demonstrate that brain activity in the visual cortex, which processes the human sense of sight, also reacts to acoustic features such as changes in volume,” Hauswald summarises. “We are now wondering to what extent such processes are relevant, especially in challenging listening situations.” The researchers are using various experimental set-ups to analyse minute changes in the magnetic fields in the brain by means of magnetoencephalography (MEG) – a very sensitive measuring system for recording brain signals that is closely related to brain wave analysis via electroencephalography (EEG).
Looking at signal processing in the brain
In their experiments, the researchers work with participants who suffered hearing damage before or after language acquisition. This involves, for instance, showing them videos of people speaking, but without the sound. The brain signals recorded while they watch are then compared with the audio track of the video – which the participants cannot hear – in order to better understand how visual signal processing in the brain contributes to acoustic comprehension. The researchers also take measurements while the videos are played backwards, to provide a comparison with input that makes no communicative sense. “Ultimately we expect to find out whether the brain also follows acoustic properties such as volume or pitch changes in this setting, although these are only communicated visually. It will also be interesting to see whether the tracking of these properties is more pronounced in videos played normally than in videos played backwards,” Hauswald explains.
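The core idea behind such “neural tracking” analyses can be illustrated with a small simulation. The sketch below is not the project’s actual analysis pipeline; it is a minimal, hypothetical example in Python using NumPy and SciPy, in which a brain signal that follows a slow speech-like envelope (around the typical syllable rate of 4 Hz) shows high coherence with that envelope, while an unrelated noise signal does not.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 200                        # sampling rate in Hz (hypothetical)
t = np.arange(0, 60, 1 / fs)    # 60 seconds of simulated data

# Hypothetical speech envelope: slow amplitude fluctuation at 4 Hz,
# roughly the syllable rate of natural speech.
envelope = np.sin(2 * np.pi * 4 * t)

# A "tracking" brain signal: the envelope buried in measurement noise.
tracking = envelope + 2.0 * rng.standard_normal(t.size)

# A control signal: noise only, unrelated to the envelope
# (analogous to input that carries no communicative information).
control = 2.0 * rng.standard_normal(t.size)

# Magnitude-squared coherence between the envelope and each signal,
# estimated with Welch's method.
f, coh_track = coherence(envelope, tracking, fs=fs, nperseg=512)
_, coh_ctrl = coherence(envelope, control, fs=fs, nperseg=512)

# Frequency bin closest to the 4 Hz envelope rate.
idx = np.argmin(np.abs(f - 4))
```

In this toy example, coherence at 4 Hz is high for the tracking signal and near zero for the control, mirroring the comparison between normally played and backwards videos described above; real MEG analyses use more elaborate measures, but the logic is the same.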
Hauswald and Weisz eventually intend to relate the results of this and similar experiments to data from patients with cochlear implants. “We are trying to find out whether there is a relationship between the visuo-phonological transformation processes in the brain and patients’ widely varying ability to adapt their perception to the artificial input from the implant,” Weisz explains. In this way, the researchers could learn more about the properties of the brain that are relevant for rehabilitation after hearing damage.
Nathan Weisz is Professor of Physiological Psychology at the University of Salzburg. He previously studied at the University of Eichstätt and held positions at the University of Konstanz in Germany, the Institut national de la santé et de la recherche médicale in Lyon, France, and the University of Trento in Italy. His research focuses on audio-visual signal processing in the brain, including conditions such as tinnitus and hearing loss.
Anne Hauswald is a Senior Scientist at the University of Salzburg. After studying psychology at the University of Konstanz in Germany, she held a postdoctoral position at the University of Trento in Italy before coming to Salzburg. Hauswald’s research focuses on how visual signal processing in the brain supports a person’s auditory perception.
Suess, N., Hartmann, T., & Weisz, N.: Differential attention-dependent adjustment of frequency, power and phase in primary sensory and frontoparietal areas, 2020 (preprint)
Hauswald, A., Keitel, A., Chen, Y., Rösch, S., & Weisz, N.: Degradation levels of continuous speech affect neural speech tracking and alpha power differently, in: European Journal of Neuroscience, ejn.14912, 2020
Hauswald, A., Lithari, C., Collignon, O., Leonardelli, E., & Weisz, N.: A visual cortical network for deriving phonological information from intelligible lip movements, in: Current Biology, 28(9), 1453–1459, 2018