
Speaker: Nicholas Tomlin (UC Berkeley)

Abstract: Language models are primarily trained via imitation on massive amounts of human data; as a result, they’re capable of performing a wide range of tasks but often lack the deep reasoning capabilities of classic AI systems like Deep Blue and AlphaGo. In this talk, I’ll first present core technical challenges related to “reasoning with language,” using my work on computer crossword solvers as a running example. Then, I’ll show how methods for “interactive reasoning” can enable human-AI teams to solve complex problems jointly. Finally, I’ll discuss my work on “explainable reasoning,” where the goal is to explain the decisions made by expert AI systems like AlphaGo in human-interpretable terms. I will conclude by sharing my views on the future of language model reasoning, agents, and interactive systems.

Bio: Nicholas Tomlin is a final-year PhD student in the Berkeley NLP Group, where he is advised by Dan Klein. His work focuses primarily on reasoning and multi-agent interaction with language models. He has co-created systems including The Berkeley Crossword Solver, the first superhuman computer crossword solver, and Ghostbuster, a state-of-the-art method for detecting AI-generated text. His work has been supported by grants from the NSF and FAR AI and has received media coverage from outlets such as Discover, Wired, and the BBC.

In-person event