I'm a fifth-year PhD student in the Princeton NLP group, working with Danqi Chen. In the summer and fall of 2023, I was a student researcher at Google Research, advised by Asma Ghandeharioun and Andrew Lampinen. Before that, I was a software engineer at Google and IBM, and before that I was an undergraduate at Yale, where I worked with Bob Frank and Dragomir Radev and received a BA in English.
I'm currently interested in making large neural language models easier to understand. One direction I'm especially drawn to is designing models that are inherently interpretable, so that we can automatically convert them into formats that are easier to inspect and understand, such as discrete computer programs. I'm also interested in approaches that take a more behavioral view, to better characterize the strengths and limitations of large language models (example 1; example 2). My more general interests include unsupervised structure learning, formal languages, probabilistic models, and inductive bias. I also work on applications of NLP to humanities research and am involved with the Princeton Center for Digital Humanities.