Ajay Jain


Doctoral student (Ph.D.)

Berkeley Artificial Intelligence Research (BAIR)
Department of Electrical Engineering and Computer Science (EECS)
UC Berkeley

Email: ajayj at berkeley dot edu

Biography

I'm a second-year PhD student at UC Berkeley, affiliated with Berkeley Artificial Intelligence Research and a recipient of the NSF Graduate Research Fellowship. My research studies the problem of unsupervised machine learning: How can machines automatically recognize, reconstruct, and generate the structure within raw data?

I've been fortunate to work with really great people along the way. At Berkeley, I am advised by Pieter Abbeel, and often collaborate with Joseph Gonzalez and Ion Stoica. I graduated from MIT with an S.B. in Computer Science, where I was advised by Saman Amarasinghe in the MIT COMMIT lab. At Uber ATG, I worked with Raquel Urtasun.

Previously, I led the MIT Machine Intelligence Community, helped organize HackMIT, and wrote open-source software for the hackathon. These days, I enjoy hiking, board games, and the ukulele.

News

New! September 2020: Two papers on generative models and RL accepted to NeurIPS 2020
May 2020: LMConv paper on image generation accepted to UAI 2020
January 2020: Checkmate paper on memory-efficient DL accepted to MLSys 2020
September 2019: DRF-Net paper on behavior forecasting accepted to CoRL 2019
June 2019: Presented a talk and paper at the ISCA 2019 Workshop on Machine Learning for Systems
June 2019: Presented a poster at the ICML 2019 Workshop on Identifying and Understanding Deep Learning Phenomena
June 2019: Started my PhD at UC Berkeley
December 2018: Revec accepted to Compiler Construction 2019
June 2018: Headed to Toronto for a research internship at Uber Advanced Technologies Group
June 2017: Joining Facebook's Applied Machine Learning team for the summer

Preprints (* indicates equal contribution)

Conceptual diagram illustrating Contrastive Code Representation Learning.

Contrastive Code Representation Learning

Paras Jain*, Ajay Jain*, Tianjun Zhang, Pieter Abbeel, Joseph E. Gonzalez, Ion Stoica
arXiv:2007.04973, July 2020

Publications

Sampling four 256x256 px photos of faces from a Denoising Diffusion Probabilistic Model.

Denoising Diffusion Probabilistic Models

Jonathan Ho, Ajay Jain, Pieter Abbeel
34th Conference on Neural Information Processing Systems (NeurIPS) 2020

High-quality likelihood-based image generation; connect diffusion models to denoising score matching and Langevin dynamics; compression, reconstruction and interpolation
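
As a rough illustration of the forward noising process these models learn to invert, here is a minimal NumPy sketch; the linear beta schedule, number of steps, and image shape are illustrative assumptions rather than details taken from the paper.

    # Sketch of the diffusion forward process: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps.
    # The schedule and shapes below are illustrative assumptions.
    import numpy as np

    T = 1000
    betas = np.linspace(1e-4, 0.02, T)       # assumed linear variance schedule
    alpha_bars = np.cumprod(1.0 - betas)     # cumulative product of (1 - beta_t)

    def q_sample(x0, t):
        """Draw x_t ~ q(x_t | x_0) by mixing the clean image with Gaussian noise."""
        eps = np.random.standard_normal(x0.shape)
        return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

    x0 = np.random.rand(64, 64, 3) * 2 - 1   # hypothetical clean image scaled to [-1, 1]
    xt = q_sample(x0, t=500)                 # partially noised sample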

Cleaning up errors in the sparse graphical memory data structure.

Sparse Graphical Memory for Robust Planning

Scott Emmons*, Ajay Jain*, Michael Laskin*, Thanard Kurutach, Pieter Abbeel, Deepak Pathak
34th Conference on Neural Information Processing Systems (NeurIPS) 2020

Provably robust and efficient long-horizon monocular navigation combining sparse graph planning and RL. Propose two-way consistency to find landmark memories and create a topological map.
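
A hedged sketch of how a two-way consistency check could look, given a pairwise distance (or goal-conditioned value) matrix; the matrix, threshold, and greedy sparsification loop are my own illustrative assumptions, not the paper's exact procedure.

    # Sketch: aggregate states i, j only when they are interchangeable both as
    # starting states and as goals under a learned distance d (two-way consistency).
    # `dists` is a hypothetical n x n matrix with dists[i, j] approximating d(s_i, s_j).
    import numpy as np

    def two_way_consistent(dists, i, j, tau=3.0):
        out_gap = np.max(np.abs(dists[i, :] - dists[j, :]))  # interchangeable as start states
        in_gap = np.max(np.abs(dists[:, i] - dists[:, j]))   # interchangeable as goals
        return out_gap <= tau and in_gap <= tau

    def sparsify(dists, tau=3.0):
        """Greedily keep a state as a landmark unless it is redundant with one already kept."""
        landmarks = []
        for i in range(dists.shape[0]):
            if not any(two_way_consistent(dists, i, j, tau) for j in landmarks):
                landmarks.append(i)
        return landmarks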

Locally Masked Convolution applies customizable masks to patches of an image

Locally Masked Convolution for Autoregressive Models

Ajay Jain, Pieter Abbeel, Deepak Pathak
36th Conference on Uncertainty in Artificial Intelligence (UAI), August 2020

Efficient op allows PixelCNNs to generate images in arbitrary orders
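
The core mechanism, as I understand it, is masking the unfolded image patches rather than the convolution weights, so each spatial location can use its own mask and hence its own generation order. A minimal PyTorch sketch, with illustrative tensor names and shapes:

    # Sketch: locally masked convolution via im2col (F.unfold).
    # Masking the unfolded patches (instead of the weights, as in PixelCNN)
    # lets every spatial location apply a different mask.
    import torch
    import torch.nn.functional as F

    def locally_masked_conv2d(x, weight, mask, kernel_size=3, padding=1):
        # x:      (N, C_in, H, W) input
        # weight: (C_out, C_in * k * k) flattened convolution weights
        # mask:   (N or 1, C_in * k * k, H * W) binary mask per spatial location
        N, _, H, W = x.shape
        patches = F.unfold(x, kernel_size, padding=padding)  # (N, C_in*k*k, H*W)
        patches = patches * mask                             # hide "future" pixels per location
        out = weight @ patches                               # (N, C_out, H*W) via broadcasting
        return out.view(N, weight.shape[0], H, W)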

Checkmate: Breaking the Memory Wall with Optimal Tensor Rematerialization

Paras Jain*, Ajay Jain*, Aniruddha Nrusimha, Amir Gholami, Pieter Abbeel, Kurt Keutzer, Ion Stoica, Joseph E. Gonzalez
3rd Conference on Machine Learning and Systems (MLSys), March 2020

Use up to 5x less memory when training DNNs by recomputing activations
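
The underlying trade-off (recompute activations during the backward pass instead of storing them) can be illustrated with stock PyTorch activation checkpointing; this is not the paper's ILP-based rematerialization planner, just a sketch of the idea on a hypothetical model.

    # Sketch: trade compute for memory by recomputing activations in the backward pass.
    # Uses plain PyTorch checkpointing, not Checkmate's optimal schedule.
    import torch
    import torch.nn as nn
    from torch.utils.checkpoint import checkpoint_sequential

    model = nn.Sequential(*[nn.Sequential(nn.Linear(1024, 1024), nn.ReLU()) for _ in range(16)])
    x = torch.randn(64, 1024, requires_grad=True)

    # Split the network into 4 segments: only segment boundaries are stored,
    # and activations inside each segment are recomputed during backward.
    out = checkpoint_sequential(model, 4, x)
    out.sum().backward()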

DRF-Net 10-second forecast

Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction

Ajay Jain*, Sergio Casas Romero*, Renjie Liao*, Yuwen Xiong*, Song Feng, Sean Segal, Raquel Urtasun
3rd Conference on Robot Learning (CoRL), October 2019

Multimodal, long-range behavior forecasts by predicting state marginals

Revec: Program Rejuvenation through Revectorization

Charith Mendis*, Ajay Jain*, Paras Jain, Saman Amarasinghe
28th International Conference on Compiler Construction (CC), February 2019

Achieve performance portability for hand-vectorized programs, with up to 1.88x speedup

Autonomy for Surface Ship Interception

C. Mirabito, D.N. Subramani, T. Lolla, P.J. Haley Jr., A. Jain, P.F.J. Lermusiaux, C. Li, D. Yue, Y. Liu, F. Hover, N. Pulsone, J. Edwards, K. Railey, and G. Shaw
60th OCEANS Conference (OCEANS MTS/IEEE Aberdeen), June 2017

Time-optimal path planning for an underwater robot

Short papers

Learning Automatic Schedulers with Projective Reparameterization

Ajay Jain, Saman Amarasinghe
The 46th International Symposium on Computer Architecture (ISCA)
Workshop on Machine Learning for Systems, June 2019 (talk)

Supervised learning of schedulers, with correctness constraints

Using effective dimension to analyze feature transformations in deep neural networks

Kavya Ravichandran, Ajay Jain, Alexander Rakhlin
The 36th International Conference on Machine Learning (ICML)
Workshop on Identifying and Understanding Deep Learning Phenomena, June 2019

The Case for GPU Multitenancy

Paras Jain, Xiangxi Mo, Ajay Jain, Alexey Tumanov, Joseph E. Gonzalez, Ion Stoica
arXiv:1910.02653, January 2019

Dynamic Space-Time Scheduling for GPU Inference

Paras Jain, Xiangxi Mo, Ajay Jain, Harikaran Subbaraj, Rehan Sohail Durrani, Alexey Tumanov, Joseph E. Gonzalez, and Ion Stoica
The 32nd Annual Conference on Neural Information Processing Systems (NeurIPS)
Workshop on Systems for Machine Learning, December 2018

Demonstrate 2.5x-4.9x speedups for deep learning inference workloads via GPU multitenancy