ALLeGRo Lab

The AI, Language, Learning, Generalization, and Robustness (ALLeGRo) Lab is part of the Thomas Lord Department of Computer Science at the University of Southern California, led by Robin Jia. We study natural language processing and machine learning.

News

Feb 2026 Johnny gave a talk at the Stanford NLP Seminar! Title: The Shape of AI Accountability and Its Contours in Copyright.
Jan 2026 Hubble and FoNE have been accepted to ICLR 2026!
May 2025 Deqing gave a talk at the Stanford NLP Seminar! Title: Closing the Modality Gap: Benchmarking and Improving Visual Understanding in Multimodal LLMs.

People

Faculty

Robin Jia
Assistant Professor

PhD Students

Johnny Tian-Zheng Wei
PhD Student
Hubble: A Model Suite to Advance the Study of LLM Memorization. ICLR 2026. [paper]
Wang Bill Zhu
PhD Student
Deqing Fu
PhD Student
Transformers Learn to Achieve Second-Order Convergence Rates for In-Context Linear Regression. NeurIPS 2024. [paper]

MS Students

Jerry Li
MS Student
Are LLMs Reliable Rankers? Rank Manipulation via Two-Stage Token Optimization. arXiv preprint. [paper]

Undergraduate Students

Alumni

Ryan Wang
Undergraduate → PhD at UC Berkeley
Proving membership in LLM pretraining data via data watermarks. Findings of ACL 2024. [paper]
Lorena Yan
Undergraduate → PhD at Columbia University
Promote, Suppress, Iterate: How Language Models Answer One-to-Many Factual Queries. EMNLP 2025. [paper]
Qilin Ye
Undergraduate → MS at Duke University
When Do Transformers Learn Heuristics for Graph Connectivity? [paper]
Harvey Yiyun Fu
Undergraduate → PhD at UChicago
Estimating Large Language Model Capabilities without Labeled Test Data. Findings of EMNLP 2023. [paper]

Projects