I am a research scientist at Google DeepMind. I work on machine learning, with an emphasis on:
- Generative modeling: using probabilistic methods to capture the structure and uncertainty inherent in data
- Algorithmic modeling: building models that efficiently encode dependencies and extract meaningful representations
I’m particularly interested in addressing challenging inferential questions at the intersection of these two areas. Some of the topics I investigate include variational inference, sampling, gradient estimation, and score-based modeling.
- Check out MultiresConv, our new state-of-the-art convolutional sequence modeling architecture.
- I am an area chair for NeurIPS 2023 and AISTATS 2023.
- Our work on gradient estimation for discrete distributions won a NeurIPS 2022 Outstanding Paper Award!
- I was recognized as a top reviewer for NeurIPS 2022.
Sequence Modeling
Sequence Modeling with Multiresolution Convolutional Memory
Jiaxin Shi, Ke Alexander Wang, Emily B. Fox.
Probabilistic Inference and Gradient Estimation
A Finite-Particle Convergence Rate for Stein Variational Gradient Descent
Jiaxin Shi, Lester Mackey.
Gradient Estimation with Discrete Stein Operators
Jiaxin Shi, Yuhao Zhou, Jessica Hwang, Michalis K. Titsias, Lester Mackey.
NeurIPS 2022 Outstanding Paper Award.
NeurIPS 2022.
Double Control Variates for Gradient Estimation in Discrete Latent Variable Models
Michalis K. Titsias, Jiaxin Shi.
Sampling with Mirrored Stein Operators
Jiaxin Shi, Chang Liu, Lester Mackey.
Spotlight Presentation (top 5.1%).
A Spectral Approach to Gradient Estimation for Implicit Distributions
Jiaxin Shi, Shengyang Sun, Jun Zhu.
Nonparametric Score Estimators
Yuhao Zhou, Jiaxin Shi, Jun Zhu.
Sliced Score Matching: A Scalable Approach to Density and Score Estimation
Yang Song*, Sahaj Garg*, Jiaxin Shi, Stefano Ermon.
Oral Presentation (top 8.7%).
NeuralEF: Deconstructing Kernels by Deep Neural Networks
Zhijie Deng, Jiaxin Shi, Jun Zhu.
Neural Eigenfunctions Are Structured Representation Learners
Zhijie Deng*, Jiaxin Shi*, Hao Zhang, Peng Cui, Cewu Lu, Jun Zhu.
Neural Networks as Inter-domain Inducing Points
Shengyang Sun*, Jiaxin Shi*, Roger Grosse.
Predictive Uncertainty Estimation
Sparse Orthogonal Variational Inference for Gaussian Processes
Jiaxin Shi, Michalis K. Titsias, Andriy Mnih.
Best Student Paper Runner-Up at AABI Symposium, 2019.
Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition
Shengyang Sun, Jiaxin Shi, Andrew Gordon Wilson, Roger Grosse.
Functional Variational Bayesian Neural Networks
Shengyang Sun*, Guodong Zhang*, Jiaxin Shi*, Roger Grosse.