Research
I'm interested in scaling multi-agent approaches to improve downstream task performance and expanding the real-world use cases of agentic workflows, as well as exploring critical areas of LLM safety through interpretability.
|
|
VMDT: Decoding the Trustworthiness of Video Foundation Models
Yujin Potter*, Zhun Wang*, Nicholas Crispino*, Kyle Montgomery*, Alexander Xiong*, Ethan Y. Chang, Francesco Pinto, Yuqi Chen, Rahul Gupta, Morteza Ziyadi, Christos Christodoulopoulos, Bo Li, Chenguang Wang, Dawn Song
in Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS), 2025
arxiv
/
code
/
dataset
VMDT is a benchmark for evaluating the trustworthiness of text-to-video (T2V) and video-to-text (V2T) models across five key dimensions: safety, hallucination, fairness, privacy, and adversarial robustness.
|
|
SteeringControl: Holistic Evaluation of Alignment Steering in LLMs
Vincent Siu*, Nicholas Crispino*, David Park, Nathan W. Henry, Zhun Wang, Yang Liu, Dawn Song, Chenguang Wang
in arXiv:2509.13450, 2025
arxiv
/
code
/
dataset
SteeringControl is a benchmark that measures the behavioral entanglement arising on out-of-distribution datasets when current representation steering methods are used to improve a single target behavior. It also provides a modular code framework that standardizes training-free steering methods.
|
|
RepIt: Representing Isolated Targets to Steer Language Models
Vincent Siu, Nathan W. Henry, Nicholas Crispino, Yang Liu, Dawn Song, Chenguang Wang
in arXiv:2509.13281, 2025
arxiv
/
code
RepIt isolates concept-specific representations of refusal that can be used, via activation steering, to suppress refusal on targeted concepts while preserving the base model's refusal behavior on all other concepts.
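The paper's actual isolation procedure is more involved; purely as a minimal sketch under stated assumptions (that per-concept refusal directions have already been extracted, which is not shown here), the snippet below illustrates one way a target concept's direction can be separated from the refusal signal it shares with other concepts.

```python
# Hedged illustration, not the RepIt algorithm: remove from a target concept's
# refusal direction the component it shares with other concepts' directions.
import numpy as np

def isolate_target_direction(target_dir, other_dirs):
    """target_dir: [d_model] refusal direction for the target concept;
    other_dirs: [n_other, d_model] refusal directions for unrelated concepts."""
    other = np.asarray(other_dirs, dtype=float).T        # columns = other concepts
    Q, _ = np.linalg.qr(other)                           # orthonormal basis of the shared subspace
    specific = target_dir - Q @ (Q.T @ target_dir)       # project out the shared refusal component
    return specific / (np.linalg.norm(specific) + 1e-8)  # unit-norm, target-specific vector
```

Steering with the resulting vector would then, in principle, affect refusal on the target concept while leaving the shared refusal signal untouched.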
|
|
COSMIC: Generalized Refusal Identification in LLM Activations
Vincent Siu, Nicholas Crispino, Zihao Yu, Sam Pan, Zhun Wang, Yang Liu, Dawn Song, Chenguang Wang
in Findings of the Association for Computational Linguistics (ACL), 2025
arxiv
/
code
COSMIC improves the direction-selection step of refusal steering in LLMs by choosing the direction that maximizes the cosine similarity between paired harmless and harmful behavior, enabling more robust application of activation steering.
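The paper's exact criterion and candidate construction differ in detail; the following is a minimal sketch, assuming cached per-layer activations and difference-of-means candidate directions (both assumptions, not the official implementation), of how a cosine-similarity score can select a refusal direction and layer.

```python
# Hedged sketch: score each per-layer candidate direction by how closely
# ablated harmful activations align with their paired harmless activations.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def select_direction(harmful_acts, harmless_acts):
    """harmful_acts, harmless_acts: arrays of shape [n_layers, n_examples, d_model]."""
    best = None
    for layer in range(harmful_acts.shape[0]):
        # Candidate direction at this layer: difference of mean activations (an assumption).
        direction = harmful_acts[layer].mean(axis=0) - harmless_acts[layer].mean(axis=0)
        direction /= np.linalg.norm(direction) + 1e-8
        # Ablate the candidate from the harmful representations, then measure
        # alignment between each steered example and its harmless counterpart.
        steered = harmful_acts[layer] - np.outer(harmful_acts[layer] @ direction, direction)
        score = np.mean([cosine(s, h) for s, h in zip(steered, harmless_acts[layer])])
        if best is None or score > best[0]:
            best = (score, layer, direction)
    return best  # (score, selected layer index, unit-norm steering direction)
```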
|
|
MLAN: Language-Based Instruction Tuning Improves Zero-Shot Generalization of Multimodal Large Language Models
Jianhong Tu, Zhuohao Ni, Nicholas Crispino, Zihao Yu, Michael Bendersky, Beliz Gunel, Ruoxi Jia, Xin Liu, Lingjuan Lyu, Dawn Song, Chenguang Wang
in Proceedings of the 3rd Workshop on Towards Knowledgeable Foundation Models (KnowFM @ ACL), 2025
arxiv
/
code
MLAN proposes focusing on text-only instances during instruction tuning to improve instruction following across both the vision and language modalities of multimodal large language models, at a lower cost than traditional vision-only or vision-based instruction tuning.
|
|
Humanity's Last Exam
Long Phan, ..., Wenjin Zhang, Nick Crispino, Chenguang Wang, Daofeng Li, Jiawei Shen, Kyle Montgomery, Hannah Szlyk, Ting Wang, Summer Yue, Alexandr Wang, Dan Hendrycks, and many others
in arXiv:2501.14249, 2025
arxiv
/
code
/
dataset
HLE is a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind, with broad subject coverage.
|
|
Agent Instructs Large Language Models to be General Zero-Shot Reasoners
Nicholas Crispino, Kyle Montgomery, Fankun Zeng, Dawn Song, Chenguang Wang
in Forty-first International Conference on Machine Learning (ICML), 2024
arxiv
/
code
Zero-shot AgentInstruct uses an agent to generate dataset-specific instructions to improve the zero-shot performance of instruction-following large language models.
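The actual agent relies on the paper's own prompts and tooling; as a rough sketch of the two-stage idea only, with `complete`, `build_instructions`, and `zero_shot_answer` all hypothetical names, the flow looks roughly like this.

```python
# Hypothetical helper standing in for any chat-completion API call.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API of choice here")

def build_instructions(task_name: str, task_description: str, sample_inputs: list[str]) -> str:
    # Stage 1: an agent drafts dataset-specific, step-by-step instructions
    # from lightweight task metadata and a few unlabeled example inputs.
    preview = "\n".join(sample_inputs[:3])
    return complete(
        f"Task: {task_name}\nDescription: {task_description}\n"
        f"Example inputs:\n{preview}\n\n"
        "Write clear, step-by-step instructions for solving this task."
    )

def zero_shot_answer(instructions: str, input_text: str) -> str:
    # Stage 2: the same dataset-specific instructions are reused for every
    # test instance in a zero-shot prompt.
    return complete(f"{instructions}\n\nInput: {input_text}\nAnswer:")
```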
|
Open Source Projects
Contributing to open-source projects that advance LLM capabilities and allow research ideas to be shared with the broader community.
|
|
MassGen - Multi-Agent LLM Scaling Framework
Core Contributor
GitHub
I contribute to MassGen, an open-source framework for scaling multi-agent LLM systems. My work focuses on designing and implementing new features that enhance the framework's capabilities for agent collaboration and scalability.
|
|