I have always been interested in the intersection of language and cognition, and especially in the applied potential of research in this field.
For my master’s thesis, I researched linguistic humor as well as the audiovisual integration of speech, using methods from computational linguistics and fMRI.
In my PhD, I am researching multimodal communication in challenging listening conditions. Using a data-driven machine learning approach, I am investigating acoustic and linguistic features of speech as well as kinematic features of gestures, head movements, facial expressions, and body posture. My target group is individuals who are hard of hearing; in lab experiments, they will engage in free and task-based naturalistic dialogue.
With this research, we hope to provide insights into how to make communication easier and more successful for hard-of-hearing individuals, especially in noisy environments.