I am a PhD student in the Computational Linguistics Group at the University of Groningen and a member of the InDeep consortium, currently working on user-centric interpretability of large language models. My supervisors are Arianna Bisazza, Malvina Nissim and Grzegorz Chrupała.
Previously, I was a research intern at Amazon Translate NYC, a research scientist at Aindo, a Data Science MSc student at the University of Trieste and a co-founder of the AI Student Society.
My research focuses on bridging the gap between advances in interpretability for generative language models and their downstream benefits for model users, with a particular emphasis on understanding how contextual information is integrated into predictions to improve model trustworthiness. I am also very interested in parallels between human and artificial learning and reasoning, especially in work involving human behavioral signals.
I am the main developer of the Inseq library for LLM interpretability, and I am enthusiastic about open-source projects that make interpretability tools and techniques more accessible to the broader AI community.