Spring 2025
| Date | Event | Speaker | Abstract/Details |
| --- | --- | --- | --- |
| 01/08/2025 | Perspectives on Prompting | Denis Peskoff | Abstract: Natural language processing is in a state of flux. I will talk about three recent papers appearing in ACL and EMNLP conferences that are a zeitgeist of the current uncertainty of direction. First, I will talk about a paper that evaluated the responses of large language models to domain questions. Then, I will talk about a paper that used prompting to study the language of the Federal Reserve Board. Last, I will discuss a new paper on identifying generated content in Wikipedia. In addition, I will highlight a mega-paper I was involved in about prompting. Bio: Denis Peskoff just finished a postdoc at Princeton University working with Professor Brandon Stewart. He completed his PhD in computer science at the University of Maryland with Professor Jordan Boyd-Graber and a bachelor's degree at the Georgetown School of Foreign Service. His research has incorporated domain experts (leading board game players, Federal Reserve Board members, doctors, and scientists) to solve natural language processing challenges. |
| 01/15/2025 | Planning, introductions, welcome! |  |  |
| 01/22/2025 | Linguistic Society of America Keynote Watchparty | Chris Potts |  |
| 01/23/25 | Alignment Beyond Human Preferences: Use Human Goals to Guide AI towards Complementary AI | Chenhao Tan (CS Colloquium) | Abstract: A lot of recent work has been dedicated to guiding pretrained AI with human preferences. In this talk, I argue that human preferences are often insufficient for complementing human intelligence and demonstrate the key role of human goals with two examples. First, hypothesis generation is critical for scientific discoveries. Instead of removing hallucinations, I will leverage data and labels as a guide to lead hallucination towards effective hypotheses. Second, I will use human perception as a guide for developing case-based explanations to support AI-assisted decision making. In both cases, faithfulness is "compromised" for achieving human goals. I will conclude with future directions towards complementary AI. Bio: Chenhao Tan is an associate professor of computer science and data science at the University of Chicago, and is also a visiting scientist at Abridge. He obtained his PhD degree in the Department of Computer Science at Cornell University and bachelor's degrees in computer science and in economics from Tsinghua University. Prior to joining the University of Chicago, he was an assistant professor at the University of Colorado Boulder and a postdoc at the University of Washington. His research interests include human-centered AI, natural language processing, and computational social science. His work has been covered by many news media outlets, such as the New York Times and the Washington Post. He also won a Sloan research fellowship, an NSF CAREER award, an NSF CRII award, a Google research scholar award, research awards from Amazon, IBM, JP Morgan, and Salesforce, a Facebook fellowship, and a Yahoo! Key Scientific Challenges award. |
| 01/29/25 | Similarity through creation and consumption / Collective Memory expression in LLMs | Laurie Jones | Laurie is coming to seek feedback on two projects she has been working on: 1) identifying the Arab Spring article's ecosystem to inform relationships between articles through the lens of creation and consumption, and 2) investigating two widely used large language models, ChatGPT and Gemini, with regard to collective memory. |
| 02/05/25 | From Relevance to Reasoning - Evaluation Paradigms for Retrieval Augmented Generation | Bhargav Shandilya (Area Exam) | Retrieval Augmented Generation (RAG) has emerged as a cost-effective alternative to fine-tuning Large Language Models (LLMs), enabling models to access external knowledge for improved performance on domain-specific tasks. While RAG architectures are well-studied, developing robust evaluation frameworks remains challenging due to the complexity of assessing both retrieval and generation components. This survey examines the evolution of RAG evaluation methods, from early metrics like KILT scores to sophisticated frameworks such as RAGAS and ARES, which assess multiple dimensions including context relevance, answer faithfulness, and information integration. Through the lens of documentary linguistics, this survey analyzes how these evaluation paradigms can be adapted for low-resource language applications, where challenges like noisy data and inconsistent document structures necessitate specialized evaluation approaches. By synthesizing insights from foundational studies, this study provides a systematic analysis of evaluation strategies and their implications for developing more robust, adaptable RAG systems across diverse linguistic contexts. |
| 02/12/25 | Extracting Automata from Modern Neural Networks | Michael Ginn (Area Exam) | It may be desirable to extract an approximation of a trained neural network as a finite-state automaton, for reasons including interpretability, efficiency, and predictability. Early research on recurrent neural networks (RNNs) proposed methods to convert trained RNNs into finite-state automata by quantizing the continuous hidden state space of the RNN into a discrete state space. However, these methods depend on the assumption of a rough equivalence between these state spaces, which is less straightforward for modern recurrent networks and transformers. In this survey, we review methods for automaton extraction, specifically highlighting the challenges and proposed methods for extraction with modern neural networks. |
| 02/19/25 | AI and NLP in Education: Research, Implementation, and Lessons from Industry | Amy Burkhardt (Cambium Assessment) | This talk will provide a behind-the-scenes look at conducting research on AI in education within an industry setting. First, I'll offer a broader context of working on a machine learning team, highlighting the diverse skill sets and projects involved. Then, through a case study of an NLP-based writing feedback tool, I'll walk through how we built and evaluated the tool, sharing key lessons learned from its implementation. |
| 03/05/25 | Multi-Dialectical NLP Tools for Quechua | Benet Post, Yanjuan Gao (CU Anschutz) | Benet: This preliminary study introduces a multi-dialectical NLP approach for Quechua dialects that combines neural architectures with symbolic linguistic knowledge, specifically leveraging lexical markers and polypersonal verbal agreement to tackle low-resource and morphologically complex data. By embedding rule-based morphological cues into a transformer-based classifier, this work significantly outperforms purely data-driven or statistical baselines. In addition to boosting classification accuracy across more than twenty Quechuan varieties, the method exposes previously undocumented polypersonal verbal agreement phenomena. The findings highlight how neurosymbolic models can advance both language technology and linguistic research by respecting the dialectal diversity within an under-resourced language family, ultimately raising the bar for dialect-sensitive NLP tools designed to empower speakers of these languages digitally. CU Anschutz: Recent advances in LLMs have shown potential in clinical text summarization, but their ability to handle long patient trajectories with multi-modal data spread across time remains underexplored. This study systematically evaluates several state-of-the-art open-source LLMs and their Retrieval Augmented Generation (RAG) variants on long-context clinical summarization. We examine their ability to synthesize structured and unstructured Electronic Health Record (EHR) data while reasoning over temporal coherence, by re-engineering existing tasks, including discharge summarization and diagnosis prediction, from two publicly available EHR datasets. Our results indicate that long context windows improve input integration but do not consistently enhance clinical reasoning, and LLMs still struggle with temporal progression and rare disease prediction. While RAG reduces hallucination in some cases, it does not fully address these limitations. |
| 03/12/25 | CLASIC Industry Day |  |  |
| 03/19/25 | Assessing progress in Natural Language Inference in the age of Neural Networks | Dananjay Srinivas (Area Exam) | Over the last decade, the space of natural language inference (NLI) has seen a lot of progress, primarily through novel constructions of inference tasks that benefit from neural approaches. This has led to claims of neural models' abilities to understand and reason over natural language. Simultaneously, subsequent works have also empirically found limitations with NLI methods and tasks, challenging previous claims of neural networks' ability to operate on logical semantics. In this talk, we synthesize NLI task formulations and relevant empirical findings from prior scholarship to qualitatively assess the soundness and limitations of neural approaches to NLI. We find from our synthesis that, though neural approaches to NLI are a well-explored space, certain foundational questions remain unanswered, affecting the fidelity of neural inference. We share key findings for future research on NLI, and discuss how we believe the space of NLI should be transformed in order to build language technology that can robustly operate on logical semantics. |
| 04/02/25 | Generalizing Low-Resource Morphology: Cognitive and Neural Perspectives on Inflection | Adam Wiemerslage (Defense) | State-of-the-art NLP methods that leverage enormous amounts of digital text are transforming the experience of working with computers and accessing the internet for many people. However, for most of the world's languages, there is insufficient digital data to make recently popular technology like large language models (LLMs) possible. New technology like LLMs is typically not well-suited for underrepresented languages (often referred to as low-resource languages in NLP) without sufficient digital data. In this case, simpler language technologies like dictionaries, morphological analyzers, and text normalizers are useful. This is especially apparent for language documentary life-cycles, building educational tools, and the development of language typology databases. With this in mind, we propose techniques for automatically expanding coverage of morphological databases and develop methods for building morphological tools for the large set of languages with few available resources. We then study the generation capabilities of neural network models that learn from these resources. Finally, we propose methods for training neural networks when only small amounts of data are available, taking inspiration from the recent successes of self-supervised pretraining in high-resource NLP. |
| 04/09/25 | Meditations on the Available Resources for Low-Resource NMT | Ali Marashian (Area Exam) | In spite of the progress in neural machine translation (NMT) in the last decade, most languages in the world do not have sufficient digitized data to train neural models on. Different approaches to remedy the problems of low-resource languages utilize different resources. In this presentation, we will look into the available categories of resources through the lens of practicality: parallel data, monolingual data, pretrained multilingual models, grammar books and morphological information, and automatic evaluation metrics. We conclude by highlighting the importance of more focus on data collection as well as on the interpretability of some of the available tools. |
| 04/16/25 | The Meaning of Agency and Patiency to Machines and People | Elizabeth Spaulding (Defense) | This thesis establishes the capabilities and limitations of various language modeling technologies on the task of semantic proto-role labeling (SPRL), which assigns relational properties such as volition, awareness, and change of state to event participants in sentences. First, we demonstrate the feasibility and best practices of SPRL learned and inferred jointly with other information extraction tasks. We also show that language model output categorizes entities in sentences consistently across verb-invariant and verb-specific linguistic theories of agency, adding to the growing body of evidence of language model event reasoning capabilities. Further, we introduce a method for adopting semantic proto-role labeling systems and proto-role theory as a tool for analyzing events and participants by using it to quantify implicit human perceptions of agency and experience in text. We discuss the implications of our findings as a whole and identify multiple paths for future work, including deeper annotator involvement in future annotation of SPRL, SPRL analysis on machine-generated text, and cross-lingual studies of SPRL. Pursuing these future directions could improve both the theoretical frameworks and the computational methods, and help uncover how both people and machines structure and process events. |
| 04/23/25 | Bringing Everyone In: The Future of Collaboration with Conversational AI | Maggie Perkoff (Defense) | Collaborative learning enables students to build rapport with their peers while building upon their own knowledge. Teachers can weave collaborative learning opportunities into the classroom by having students work together in small groups. However, these collaborations can break down when students are confused by the material, when one person dominates the conversation, or when some of the participants struggle to connect with their peers. Unfortunately, a single teacher cannot attend to the needs of all groups at the same time. In these cases, pedagogical conversational agents (PCAs) have the potential to support teachers and students alike by taking on a collaboration facilitator role. These agents engage students in productive dialog by providing appropriate interventions in a learning setting. With the rapid improvement of large language models (LLMs), these agents can easily be backed by a generative model that can adapt to new domains and variations in communication styles. Integrating LLMs into PCAs requires understanding the desired teacher behavior in different scenarios and constraining the outputs of the model to match it. This dissertation explores how to design, develop, and evaluate PCAs that incorporate LLMs to support students collaborating in small groups. One of the products of this research is the Jigsaw Interactive Agent (JIA), a multi-modal PCA that provides real-time support to students via a chat interface. In this work, we describe the multi-modal system that JIA relies on to analyze students' discourse, test different methods for constraining JIA's outputs in a lab setting, and evaluate the use of a retrieval-augmented generation approach to enhance the outputs with curriculum materials. Furthermore, we propose a framework for expanding JIA's capabilities to support neurodivergent students. Ultimately, this dissertation aims to align advancements in LLM-based conversational agents with the perspectives and expertise of the teachers and students who can greatly benefit from their usage. |