Dehua Cheng

Research Scientist

Facebook AI Applied Research

Email: chengdehua7197_AT_gmail_DOT_com

My name is Dehua Cheng (程德华). I am currently a research scientist at Facebook AI Applied Research, working on developing cutting-edge machine learning solutions for industry-level personalization problems. Our work spans content understanding, large-scale model development, model understanding, AutoML, and more.

I received my Ph.D. from the Computer Science Department at the University of Southern California, where I worked on machine learning under the supervision of Prof. Yan Liu. My primary research interest lies in large-scale machine learning. More specifically, I am interested in developing linear- or sublinear-time numerical routines for machine learning algorithms by exploiting randomization and the structure of both the model and the data. I have also worked on parallel inference for probabilistic graphical models, including topic models and Bayesian nonparametrics. In general, I am interested in (1) designing computationally efficient models and (2) improving the computational efficiency of existing machine learning algorithms. My CV can be found here.

Experience

Jan. 2018 - present

Research Scientist
Facebook AI Applied Research

May 2017 - Aug. 2017

Software Engineer Intern
Feed Machine Learning @ Facebook
Advisor: Qichao Que

May 2016 - Aug. 2016

Research Intern
Thomas J. Watson Research Center
IBM Research
Advisor: Jie Chen

Education

Aug. 2012 - present

Ph.D.
Computer Science Department
University of Southern California
Advisor: Yan Liu

Aug. 2008 - Jul. 2012

B.S.
Mathematics and Physics
Tsinghua University
Thesis advisor: Changshui Zhang

Publications

Refereed Publications

Michael Tsang, Dehua Cheng, Hanpeng Liu, Xue Feng, Eric Zhou, and Yan Liu, Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection, ICLR 2020.

Geng Ji, Dehua Cheng, Huazhong Ning, Changhe Yuan, Hanning Zhou, Liang Xiong, and Erik B. Sudderth, Variational Training for Large-Scale Noisy-OR Bayesian Networks, UAI 2019.

Michael Tsang, Dehua Cheng, and Yan Liu, Detecting Statistical Interactions from Neural Network Weights, ICLR 2018. [PDF]

Dehua Cheng*, Natali Ruchansky*, and Yan Liu (*Equal Contributions), Matrix Completability Analysis via Graph k-Connectivity, AISTATS 2018. [Code]

Dehua Cheng, Richard Peng, Ioakeim Perros, and Yan Liu, SPALS: Fast Alternating Least Squares via Implicit Leverage Scores Sampling, NIPS 2016. [PDF] [Code]

Dehua Cheng, Yu Cheng, Yan Liu, Richard Peng, and Shang-Hua Teng, Efficient Sampling for Gaussian Graphical Models via Spectral Sparsification, COLT 2015. [PDF]

Qi Yu, Dehua Cheng, and Yan Liu, Accelerated Online Low Rank Tensor Learning for Multivariate Spatiotemporal Streams, ICML 2015. [PDF]

Dehua Cheng*, Xinran He*, and Yan Liu (*Equal Contributions), Model Selection for Topic Models via Spectral Decomposition, AISTATS 2015. [PDF]

Dehua Cheng, Mohammad Taha Bahadori, and Yan Liu, FBLG: A Simple and Effective Approach for Temporal Dependence Discovery from Time Series Data, KDD 2014. [PDF]

Dehua Cheng and Yan Liu, Parallel Gibbs Sampling for Hierarchical Dirichlet Processes via Gamma Processes Equivalence, KDD 2014. [PDF]

Preprints

Jie Chen, Dehua Cheng, and Yan Liu, On Bochner's and Polya's Characterizations of Positive-Definite Kernels and the Respective Random Feature Maps, arXiv:1610.08861.

Dehua Cheng, Yu Cheng, Yan Liu, Richard Peng, and Shang-Hua Teng, Spectral Sparsification of Random-Walk Matrix Polynomials, arXiv:1502.03496.

Dehua Cheng, Yu Cheng, Yan Liu, Richard Peng, and Shang-Hua Teng, Scalable Parallel Factorizations of SDD Matrices and Efficient Sampling for Gaussian Graphical Models, arXiv:1410.5392.

Services

Student volunteer, NIPS ’16

Workshop organizer, MiLeTs @ KDD ’16

Student volunteer, ICML ’15

Student volunteer, KDD ’14