Jelle Zuidema
University of Amsterdam
Large Language Models
Lecture 1: Opening the Black Box of Large Language Models
Large Language Models, and Foundation Models more generally, have become extremely useful tools in a diverse set of applications, ranging from automatic translation on hotel booking sites and writing assistance in financial services, to automatic summarization of legal documents and automated customer service in retail. However, these models remain largely black boxes: it is often difficult or even impossible to explain how they arrive at their predictions. This is problematic in high-stakes applications, where there are moral, legal or commercial reasons to provide end users with (approximate) explanations. In this lecture, I will dive into “The Black Box Problem”, discuss why providing explanations is difficult, and why returning to older “explainable-by-design” frameworks is often not an option. I will sketch the main approaches to “post-hoc interpretability” instead: opening the black box of LLMs. I will show some recent successes in this domain from my own and other research groups.
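To make “post-hoc interpretability” concrete, here is a minimal sketch of one widely used technique, gradient-times-input saliency, applied to a masked-language-model prediction. The model name, example sentence, and choice of attribution method are illustrative assumptions, not the specific methods covered in the lecture.

```python
# A minimal sketch of post-hoc feature attribution: gradient-times-input
# saliency over the input embeddings of a masked language model.
# Model and sentence are illustrative assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # assumption: any Hugging Face masked LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors="pt")

# Embed the tokens ourselves so we can take gradients w.r.t. the embeddings.
embeddings = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeddings.requires_grad_(True)

outputs = model(inputs_embeds=embeddings, attention_mask=inputs["attention_mask"])

# Find the [MASK] position and the model's top prediction for it.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
pred_id = outputs.logits[0, mask_pos].argmax()

# Backpropagate the predicted token's logit to the input embeddings.
outputs.logits[0, mask_pos, pred_id].backward()

# Gradient-times-input: one relevance score per input token.
saliency = (embeddings.grad[0] * embeddings[0]).sum(dim=-1)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print("prediction:", tokenizer.decode(pred_id))
for tok, score in zip(tokens, saliency):
    print(f"{tok:>12s}  {score.item():+.4f}")
```

The printed scores give an approximate, per-token explanation of which input words drove the prediction; the limitations of such approximations are exactly the kind of issue the lecture examines.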
Lecture 2: Correcting factual information and factual reasoning in Large Language Models
Large Language Models, and other Generative AI models, are powerful, but certainly not error-free. They have learned from datasets containing errors, biases and other undesirable content, and they often make mistakes when generalizing to novel factual statements. One consequence of their black-box nature is that it is often difficult to correct these factual errors and undesirable behaviors. In this lecture, I will discuss recent advances, from my own and other research groups, in mitigating biases, in localizing and adapting factual information in these models, and in training them on complex reasoning tasks. I will go into “The Hallucination Problem” and assess the best strategies currently available for dealing with it.
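As a concrete illustration of what “localizing factual information” can look like, below is a minimal sketch of one simple diagnostic, the “logit lens”: decoding each layer's hidden state through the model's output head to see at which depth a factual completion emerges. The model choice and prompt are illustrative assumptions, not the specific methods discussed in the lecture.

```python
# A minimal "logit lens" sketch: project every layer's last-position hidden
# state through the final layer norm and unembedding matrix, and inspect the
# top candidate token per layer. Model and prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: a Hugging Face causal LM with an accessible lm_head
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "The Eiffel Tower is located in the city of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states holds the embedding output plus one tensor per layer.
final_ln = model.transformer.ln_f
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(final_ln(h[0, -1]))
    top_id = logits.argmax()
    print(f"layer {layer:2d}: {tokenizer.decode(top_id)!r}")
```

If the correct completion only stabilizes in the upper layers, that is one (coarse) hint about where the fact is represented; more careful localization and editing methods, of the kind the lecture surveys, build on such layer-wise analyses.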
Jelle Zuidema is Associate Professor of Natural Language Processing, Explainable AI and Cognitive Modelling at the University of Amsterdam. He holds an MSc in Artificial Intelligence from Utrecht University and a PhD from the University of Edinburgh, and has previously worked at the Sony Computer Science Lab and at Leiden University. His research group focuses on interpreting deep learning models, mostly in NLP, but also with applications to logic, speech, vision and cognitive neuroscience. He leads the National Research Agenda project “InDeep: Interpreting deep learning models for text and sound”.