Ajay Jain


Synthesizing visual worlds

I'm an AI researcher at UC Berkeley, where I work on generative models, including diffusion models, text-to-3D synthesis with NeRFs, and scalable ML systems.

Previously, I was at Google Brain, NVIDIA Research, Uber ATG, and Facebook AI. I graduated from MIT with an S.B. in Computer Science and was a director of the nonprofit Machine Intelligence Community. At MIT, I did research at CSAIL and the Media Lab. My research is supported by the NSF Graduate Research Fellowship.

Email: ajayj at berkeley dot edu

News

New! July 2022: Dream Fields wins the Best Poster award at AI4CC
New! May 2022: AdaCat accepted to UAI 2022
March 2022: Dream Fields accepted to CVPR 2022
Aug 2021: ContraCode accepted to EMNLP 2021
July 2021: DietNeRF accepted to ICCV 2021

Featured Publications
* indicates equal contribution

More publications

Journey to the BAOAB-limit: finding effective MCMC samplers for score-based models

Ajay Jain*, Ben Poole*
Score-based Methods @ NeurIPS 2022

Sample diffusion models at a single noise level while retaining high sample diversity.
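
A minimal sketch of the setting (standard overdamped Langevin dynamics with a score network at one fixed noise level; the paper studies more effective updates derived from the BAOAB limit of underdamped Langevin dynamics, which this sketch does not implement, and the names here are illustrative):

import torch

@torch.no_grad()
def langevin_sample(score_model, x, sigma, step_size=1e-4, n_steps=200):
    """Langevin-style MCMC at a single noise level sigma (illustrative only).

    Assumes `score_model(x, sigma)` returns the score (gradient of the
    log-density) of the sigma-smoothed data distribution.
    """
    for _ in range(n_steps):
        noise = torch.randn_like(x)
        x = x + step_size * score_model(x, sigma) + (2 * step_size) ** 0.5 * noise
    return x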

Adaptive Categorical Discretization for Autoregressive Models

Colin Li, Ajay Jain, Pieter Abbeel
UAI 2022 38th Conference on Uncertainty in Artificial Intelligence

AdaCat learns expressive autoregressive models by capturing fine-grained variation in continuous distributions with discrete density estimators.

Optimizing a flexible 1D AdaCat distribution.
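
For intuition, here is a hedged sketch of evaluating a 1D piecewise-uniform, AdaCat-style density on [0, 1): bin masses and widths are normalized, and the density inside a bin is mass / width. Function and parameter names are illustrative, not the released code.

import torch

def adacat_log_prob(x, bin_logits, width_logits):
    """Log-density of a 1D piecewise-uniform distribution with learned bins (a sketch)."""
    probs = torch.softmax(bin_logits, dim=-1)      # bin masses, sum to 1
    widths = torch.softmax(width_logits, dim=-1)   # bin widths, sum to 1
    edges = torch.cumsum(widths, dim=-1)           # right edges of each bin
    # Index of the bin containing each x (clamped in case x == 1.0 exactly).
    idx = torch.searchsorted(edges, x.unsqueeze(-1)).clamp(max=probs.shape[-1] - 1)
    mass = probs.gather(-1, idx).squeeze(-1)
    width = widths.gather(-1, idx).squeeze(-1)
    return torch.log(mass) - torch.log(width)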

Contrastive Code Representation Learning

Paras Jain*, Ajay Jain*, Tianjun Zhang, Pieter Abbeel, Joseph E. Gonzalez, Ion Stoica
EMNLP 2021 Conference on Empirical Methods in Natural Language Processing

Learn to represent software functionality for automated software engineering tasks like type inference, clone detection, and summarization, improving the robustness of ML4Code models.

Conceptual diagram illustrating Contrastive Code Representation Learning.
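
A minimal sketch of a contrastive objective of this kind, assuming two batches of embeddings computed from augmented views of the same programs (e.g. compiler-style transforms); other programs in the batch serve as negatives. Illustrative only, not the released implementation.

import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.07):
    """InfoNCE-style loss over paired program embeddings z1[i], z2[i] (a sketch)."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                      # (B, B) cosine similarities
    labels = torch.arange(z1.shape[0], device=z1.device)    # positives on the diagonal
    return F.cross_entropy(logits, labels)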

Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis

Ajay Jain, Matthew Tancik, Pieter Abbeel
ICCV 2021 International Conference on Computer Vision

CLIP + NeRF: Given only a few images of an object or scene, we reconstruct its 3D structure & render novel views using prior knowledge contained in large image encoders.

Overview of DietNeRF's semantic consistency loss.
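
Roughly, the auxiliary loss compares image-encoder embeddings of a rendered view and an observed photo. A hedged sketch, assuming a generic clip_encoder callable that maps image batches to embeddings (not the released code):

import torch.nn.functional as F

def semantic_consistency_loss(clip_encoder, rendered_view, observed_view):
    """Encourage renders from unobserved poses to share semantics with observed photos (a sketch)."""
    e_render = F.normalize(clip_encoder(rendered_view), dim=-1)
    e_obs = F.normalize(clip_encoder(observed_view), dim=-1)
    # Maximize cosine similarity between the two embeddings.
    return 1.0 - (e_render * e_obs).sum(dim=-1).mean()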

Sparse Graphical Memory for Robust Planning

Scott Emmons*, Ajay Jain*, Michael Laskin*, Thanard Kurutach, Pieter Abbeel, Deepak Pathak
NeurIPS 2020 34th Conference on Neural Information Processing Systems

Provably robust and efficient long-horizon monocular navigation combining sparse graph planning and RL. We propose a two-way consistency criterion to select landmark memories and build a topological map.

Cleaning up errors in the sparse graphical memory data structure.
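
As a rough illustration, two-way consistency can be checked from a matrix of goal-conditioned distance estimates: two states may be merged into one landmark only if they are interchangeable both as start states and as goals, up to a tolerance. Names and the precomputed-matrix setup are assumptions; the paper applies the criterion with learned goal-conditioned value estimates.

import numpy as np

def two_way_consistent(d, i, j, tau):
    """Check two-way consistency between states i and j (a sketch).

    d[s, g] ~ estimated cost to reach goal state g from state s.
    """
    outgoing = np.max(np.abs(d[i, :] - d[j, :])) < tau   # interchangeable as start states
    incoming = np.max(np.abs(d[:, i] - d[:, j])) < tau   # interchangeable as goals
    return outgoing and incoming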

Locally Masked Convolution for Autoregressive Models

Ajay Jain, Pieter Abbeel, Deepak Pathak
UAI 2020 36th Conference on Uncertainty in Artificial Intelligence

Outpainting with PixelCNNs. Our efficient op allows PixelCNNs to generate images in arbitrary orders.

Locally Masked Convolution applies customizable masks to patches of an image
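
A hedged sketch of the core operation: extract patches (im2col), zero out "future" pixels with a different binary mask at every spatial location, then apply the flattened filters. Names and shapes are illustrative, not the released kernel.

import torch.nn.functional as F

def locally_masked_conv2d(x, weight, masks, kernel_size=3, padding=1):
    """Convolution with a per-location binary mask (a sketch).

    x:      (B, C_in, H, W) input image
    weight: (C_out, C_in * k * k) flattened filters
    masks:  (B, C_in * k * k, H * W) per-patch masks encoding the generation order
    """
    B, _, H, W = x.shape
    patches = F.unfold(x, kernel_size, padding=padding)   # (B, C_in*k*k, H*W)
    patches = patches * masks                             # mask each patch individually
    out = weight @ patches                                 # (B, C_out, H*W)
    return out.view(B, weight.shape[0], H, W)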

Checkmate: Breaking the Memory Wall with Optimal Tensor Rematerialization

Paras Jain*, Ajay Jain*, Aniruddha Nrusimha, Amir Gholami, Pieter Abbeel, Kurt Keutzer, Ion Stoica, Joseph E. Gonzalez
MLSys 2020 3rd Conference on Machine Learning and Systems

Use up to 5x less memory when training DNNs by recomputing activations
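
The underlying trade-off, illustrated with PyTorch's built-in uniform checkpointing: drop some activations in the forward pass and recompute them during the backward pass. Checkmate instead chooses which activations to rematerialize by solving an integer program; this sketch only shows the generic recompute-to-save-memory pattern.

import torch
from torch.utils.checkpoint import checkpoint_sequential

# Toy model: 32 linear layers, checkpointed in 4 segments so only segment
# boundaries are stored; intermediate activations are recomputed on backward.
model = torch.nn.Sequential(*[torch.nn.Linear(1024, 1024) for _ in range(32)])
x = torch.randn(64, 1024, requires_grad=True)
out = checkpoint_sequential(model, 4, x)
out.sum().backward()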

Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction

Ajay Jain*, Sergio Casas Romero*, Renjie Liao*, Yuwen Xiong*, Song Feng, Sean Segal, Raquel Urtasun
CoRL 2019 3rd Conference on Robot Learning, Spotlight talk

Multimodal, long-range behavior forecasts by predicting state marginals

DRF-Net 10-second forecast
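
A rough sketch of a per-timestep state-marginal head: a feature vector is decoded into one categorical distribution over discretized map cells per future timestep. Shapes and names are illustrative; DRF-Net additionally refines these marginals with discrete residual updates, which this sketch omits.

import torch.nn as nn
import torch.nn.functional as F

class MarginalHead(nn.Module):
    """Predict occupancy marginals over an H x W grid for T future timesteps (a sketch)."""
    def __init__(self, feat_dim, horizon, grid_h, grid_w):
        super().__init__()
        self.horizon, self.n_cells = horizon, grid_h * grid_w
        self.fc = nn.Linear(feat_dim, horizon * self.n_cells)

    def forward(self, feats):                                   # feats: (B, feat_dim)
        logits = self.fc(feats).view(-1, self.horizon, self.n_cells)
        return F.log_softmax(logits, dim=-1)                    # (B, T, H*W) log-marginals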

Revec: Program Rejuvenation through Revectorization

Charith Mendis*, Ajay Jain*, Paras Jain, Saman Amarasinghe
CC 2019 28th International Conference on Compiler Construction

Achieve performance portability for hand-vectorized programs, with up to 1.88x speedup

Autonomy for Surface Ship Interception

C. Mirabito, D.N. Subramani, T. Lolla, P.J. Haley Jr., A. Jain, P.F.J. Lermusiaux, C. Li, D. Yue, Y. Liu, F. Hover, N. Pulsone, J. Edwards, K. Railey, and G. Shaw
OCEANS 2017 60th OCEANS Conference, MTS/IEEE Aberdeen

Time-optimal path planning for underwater robots

Short papers

Learning Automatic Schedulers with Projective Reparameterization

Ajay Jain, Saman Amarasinghe
ISCA 2019 46th International Symposium on Computer Architecture
Workshop on Machine Learning for Systems, Jun 2019, Talk

Supervised learning of schedulers, with correctness constraints

Using effective dimension to analyze feature transformations in deep neural networks

Kavya Ravichandran, Ajay Jain, Alexander Rakhlin
ICML 2019 36th International Conference on Machine Learning
Workshop on Identifying and Understanding Deep Learning Phenomena, Jun 2019

The Case for GPU Multitenancy

Paras Jain, Xiangxi Mo, Ajay Jain, Alexey Tumanov, Joseph E. Gonzalez, Ion Stoica
arXiv 2019 arXiv:1910.02653, Jan 2019

Dynamic Space-Time Scheduling for GPU Inference

Paras Jain, Xiangxi Mo, Ajay Jain, Harikaran Subbaraj, Rehan Sohail Durrani, Alexey Tumanov, Joseph E. Gonzalez, and Ion Stoica
NeurIPS 2018 32nd Annual Conference on Neural Information Processing Systems
Workshop on Systems for Machine Learning, Dec 2018

Demonstrate 2.5x-4.9x speedups for deep learning inference workloads via GPU multitenancy

Invited talks

Service and teaching

Machine Learning Software

More Software