
VAISHNAVH NAGARAJAN
Research Scientist
Google, New York
E-Mail: vaishnavh at google.com
He/Him/His
ABOUT ME
I am a research scientist at Google. I am interested in the theoretical foundations of machine learning. I'm particularly excited about understanding when and why modern machine learning algorithms work (or do not work). While my PhD thesis was on explaining why deep networks generalize well, my other work includes when/why models fail out of distribution, and when/why GANs converge to the desired saddle point.
Prior to this, I graduated with a PhD from the Computer Science Department of Carnegie Mellon University (CMU), where I was extremely fortunate to be advised by Zico Kolter. I completed my undergraduate studies in the Department of Computer Science and Engineering at the Indian Institute of Technology, Chennai, India. There I was advised by Balaraman Ravindran, with whom I worked on reinforcement learning.
UPDATES
- Jan 2022: Spotlight paper at ICLR ‘22 on a technique that uses unlabeled data to predict the generalization error of deep networks with remarkable precision!
- Oct 2021: Excited to join as a research scientist at Google NYC!
- Jan 2021: Two papers accepted at ICLR ‘21, one on out-of-distribution generalization and the other on local explainability.
PUBLICATIONS (Google Scholar)
Full Conference Papers
- Assessing Generalization via Disagreement,
International Conference on Learning Representations (ICLR) 2022,
(Double first author) Yiding Jiang*, Vaishnavh Nagarajan*, Christina Baek, J. Zico Kolter
Accepted for Spotlight presentation
[arxiv] [Poster]
Also accepted at ICML ‘21 Workshop on Overparameterization: Pitfalls & Opportunities
- Understanding the failure modes of out-of-distribution generalization,
International Conference on Learning Representations (ICLR) 2021,
Vaishnavh Nagarajan, Anders Andreassen and Behnam Neyshabur
[arxiv] [Poster]
Invited poster presentation at Conceptual Understanding of Deep Learning Workshop, Google Algorithms Workshop Series, 2021.
- A Learning Theoretic Perspective on Local Explainability,
International Conference on Learning Representations (ICLR) 2021,
(Double first author) Jeffrey Li*, Vaishnavh Nagarajan*, Gregory Plumb and Ameet Talwalkar
[arxiv] [Poster]
- Provably Safe PAC-MDP exploration using analogies,
International Conference on Artificial Intelligence and Statistics (AISTATS) 2021
Melrose Roderick, Vaishnavh Nagarajan and J. Zico Kolter
[arxiv]
- Uniform convergence may be unable to explain generalization in deep learning,
Neural Information Processing Systems (NeurIPS) 2019
Vaishnavh Nagarajan and J. Zico Kolter
Winner of The Outstanding New Directions Paper Award
Accepted for Oral presentation, 0.54% acceptance
[arxiv] [NeurIPS 19 oral slides] [Poster] [Blogpost] [Code] [Errata]
Also accepted for spotlight talk at:
- ICML ‘19 Workshop on Understanding and Improving Generalization in Deep Learning
- IAS/Princeton Workshop on Theory of Deep Learning. [Video]
- Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise-resilience,
International Conference on Learning Representations (ICLR) 2019
Vaishnavh Nagarajan and J. Zico Kolter
[Openreview] [Poster] [Errata]
- Gradient descent GAN optimization is locally stable,
Neural Information Processing Systems (NeurIPS) 2017
Vaishnavh Nagarajan and J. Zico Kolter
Accepted for Oral presentation, 1.2% acceptance
[arxiv] [1hr talk - slides] [NeurIPS Oral - Slides] [Poster] [3 min video] [Code]
- Lifelong Learning in Costly Feature Spaces,
Algorithmic Learning Theory (ALT) 2017
with Maria-Florina Balcan and Avrim Blum
Also an invited journal publication in Theoretical Computer Science (TCS)
[arxiv] [Slides]
- Learning-Theoretic Foundations of Algorithm Configuration for Combinatorial Partitioning Problems,
Conference On Learning Theory (COLT), 2017
with Maria-Florina Balcan, Ellen Vitercik and Colin White
[arxiv] [Slides] [Talk]
- Every team deserves a second chance: Identifying when things go wrong,
Autonomous Agents and Multiagent Systems (AAMAS) 2015
(Double first author) Vaishnavh Nagarajan*, Leandro S. Marcolino* and Milind Tambe
[PDF] [Appendix]
Workshop Papers
- Theoretical Insights into Memorization in GANs,
Neural Information Processing Systems (NeurIPS) 2017 - Integration of Deep Learning Theories Workshop
Vaishnavh Nagarajan, Colin Raffel, Ian Goodfellow.
[PDF]
- Generalization in Deep Networks: The Role of Distance from Initialization,
Neural Information Processing Systems (NeurIPS) 2017 - Deep Learning: Bridging Theory and Practice
Vaishnavh Nagarajan and J. Zico Kolter.
Accepted for Spotlight talk
[arxiv] [Poster]
- A Reinforcement Learning Approach to Online Learning of Decision Trees,
European Workshop on Reinforcement Learning (EWRL) 2015, at ICML
(Triple first author) Abhinav Garlapati, Aditi Raghunathan, Vaishnavh Nagarajan and Balaraman Ravindran.
[arxiv]
- Knows-What-It-Knows Inverse Reinforcement Learning,
Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM) 2015
Vaishnavh Nagarajan and Balaraman Ravindran
[PDF]
Thesis
- Explaining generalization in deep learning: progress and fundamental limits,
Vaishnavh Nagarajan
[arxiv]
Peer Review
- ICLR 2021 (outstanding reviewer award)
- NeurIPS 2020 (top 10%), 2019 (top 50%), 2018 (top 30%)
- ICML 2022, 2021 (Expert reviewer, top 10%), 2020 (top 33%), 2019 (top 5%)
- ALT 2021
- COLT 2019
- AISTATS 2019
- ICML 21 OPPO Workshop
- JMLR
Last Updated: May 20th, 2022