Dan Friedman
Research Interests
I'm currently interested in making large neural language models easier to understand.
One direction is to design models that are inherently interpretable, so that they can be automatically converted into formats that are easier to inspect and understand, such as discrete computer programs.
I'm also interested in approaches that take a more behavioral view, to better characterize the strengths and limitations of large language models (example 1; example 2).
Some of my more general interests include unsupervised structure learning, formal languages,
probabilistic models, and inductive bias.
I'm also interested in applications of NLP to humanities research and am involved with the Princeton Center for Digital Humanities.
Contact
You can find me on GitHub and Twitter, or reach me by email.
Publications
- Learning Transformer Programs.
  Dan Friedman, Alexander Wettig, Danqi Chen
  NeurIPS 2023 Oral (to appear)
- Embers of Autoregression: Understanding Large Language Models Through the Problem They Are Trained to Solve.
  R. Thomas McCoy, Shunyu Yao, Dan Friedman, Matthew Hardy, Thomas L. Griffiths
  arXiv 2023
- Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations.
  Chenglei Si*, Dan Friedman*, Nitish Joshi, Shi Feng, Danqi Chen, He He
  ACL 2023
- The Vendi Score: A Diversity Evaluation Metric for Machine Learning.
  Dan Friedman, Adji Bousso Dieng
  Transactions on Machine Learning Research (TMLR) 2023
- Finding Dataset Shortcuts with Grammar Induction.
  Dan Friedman, Alexander Wettig, Danqi Chen
  EMNLP 2022
- Single-dataset Experts for Multi-dataset Question Answering.
  Dan Friedman, Ben Dodge, Danqi Chen
  EMNLP 2021
- Factual Probing Is [MASK]: Learning vs. Learning to Recall.
  Zexuan Zhong*, Dan Friedman*, Danqi Chen
  NAACL 2021
- Syntax-aware Neural Semantic Role Labeling with Supertags.
  Jungo Kasai, Dan Friedman, Robert Frank, Dragomir Radev, Owen Rambow
  NAACL 2019
- ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks.
  Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander R. Fabbri, Irene Li, Dan Friedman, Dragomir Radev
  AAAI 2019
- Linguistically Rich Vector Representations of Supertags for TAG Parsing.
  Dan Friedman*, Jungo Kasai*, R. Thomas McCoy*, Robert Frank, Forrest Davis, Owen Rambow
  Proceedings of the 13th International Workshop on Tree Adjoining Grammars and Related Formalisms (2017)