Trustworthy language models

Date: June 12, 2024
Time: 3 to 4 p.m. PT
Place: via Zoom

Tatsu Hashimoto, PhD
Assistant Professor of Computer Science, Stanford University

Language models (LMs) work well, but they are far from trustworthy. Major questions remain on high-stakes issues such as detecting benchmark contamination, identifying LM-generated text (watermarking), and reliably generating factually correct outputs. Addressing these challenges will require us to build more precise, reliable algorithms and evaluations that provide guarantees we can trust. Hashimoto will discuss how, despite the complexity of these problems and the black-box nature of modern LLMs, all three problems (benchmark contamination, watermarking, and factual correctness) reveal surprising connections between classic statistical techniques and language modeling. These connections can lead to precise guarantees for identifying contamination, watermarking LM-generated text, and ensuring the correctness of LM outputs.

This is a Zoom-only event; registration is required.

Event Type: Biostatistics and Bioinformatics Seminar