My research lies at the intersection of language and cognition, always with real-world applications in mind.
For my master’s theses, I researched linguistic humor as well as audiovisual integration of speech, using methods from computational linguistics and neurolinguistics.
In my PhD, I am investigating multimodal communication in challenging listening conditions.
I examine acoustic and linguistic features of speech as well as kinematic features (gestures, head movements, facial expressions, body posture) using video-based motion capture and a data-driven machine learning approach.
For this, I invite people who are neurodivergent or hard of hearing to the lab to have a conversation with each other. I am particularly interested in the aspects of multimodal communication that are associated with communicative success and enjoyment, and I hope to contribute to making public spaces and conversational settings more inclusive from a communicative point of view.