Tianhao Wang (王天浩)

Ph.D. student, Department of Statistics and Data Science, Yale University.

Department of S&DS
24 Hillhouse Avenue
New Haven, CT 06511
tianhao.wang@yale.edu

I am a final-year Ph.D. student in the Department of Statistics and Data Science at Yale University. I am very fortunate to be advised by Prof. Zhou Fan. I am broadly interested in statistics and machine learning theory, with a recent focus on Approximate Message Passing (AMP) algorithms and the implicit bias of optimization algorithms.

Prior to Yale, I obtained my Bachelor's degree in mathematics, with a dual degree in computer science, from the University of Science and Technology of China.

CV


Selected recent papers by topic (*: equal contribution)

Approximate Message Passing algorithms
  1. Approximate Message Passing for orthogonally invariant ensembles: Multivariate non-linearities and spectral initialization
    Xinyi Zhong*, Tianhao Wang*, and Zhou Fan
    Submitted to Information and Inference (minor revision). arXiv:2110.02318, 2021
  2. Universality of Approximate Message Passing algorithms and tensor networks
    Tianhao Wang, Xinyi Zhong, and Zhou Fan
    The Annals of Applied Probability, to appear
Implicit bias of optimization algorithms
  1. The Marginal Value of Momentum for Small Learning Rate SGD
    Runzhe Wang, Sadhika Malladi, Tianhao Wang, Kaifeng Lyu, and Zhiyuan Li
    In International Conference on Learning Representations (ICLR), 2024
  2. Fast mixing of stochastic gradient descent with normalization and weight decay
    Zhiyuan Li, Tianhao Wang, and Dingli Yu
    In Advances in Neural Information Processing Systems (NeurIPS), 2022
  3. Implicit bias of gradient descent on reparametrized models: On equivalence to mirror descent
    Zhiyuan Li*, Tianhao Wang*, Jason D. Lee, and Sanjeev Arora
    In Advances in Neural Information Processing Systems (NeurIPS), 2022
    Abridged version accepted as a contributed talk at the ICML 2022 Workshop on Continuous Time Methods for Machine Learning
  4. What happens after SGD reaches zero loss? A mathematical framework
    Zhiyuan Li, Tianhao Wang, and Sanjeev Arora
    In International Conference on Learning Representations (ICLR), 2022  (Spotlight)
Data-driven decision-making problems
  1. Noise-adaptive Thompson sampling for linear contextual bandits
    Ruitu Xu, Yifei Min, and Tianhao Wang
    In Advances in Neural Information Processing Systems (NeurIPS), 2023
  2. Learn to match with no regret: Reinforcement learning in Markov matching markets
    Yifei Min, Tianhao Wang, Ruitu Xu, Zhaoran Wang, Michael I. Jordan, and Zhuoran Yang
    In Advances in Neural Information Processing Systems (NeurIPS), 2022  (Oral)
  3. A simple and provably efficient algorithm for asynchronous federated contextual linear bandits
    Jiafan He*, Tianhao Wang*, Yifei Min*, and Quanquan Gu
    In Advances in Neural Information Processing Systems (NeurIPS), 2022
  4. Variance-aware off-policy evaluation with linear function approximation
    Yifei Min*, Tianhao Wang*, Dongruo Zhou, and Quanquan Gu
    In Advances in Neural Information Processing Systems (NeurIPS), 2021
  5. Provably efficient reinforcement learning with linear function approximation under adaptivity constraints
    Tianhao Wang*, Dongruo Zhou*, and Quanquan Gu
    In Advances in Neural Information Processing Systems (NeurIPS), 2021
Orbit recovery model
  1. Maximum likelihood for high-noise group orbit estimation and single-particle cryo-EM
    Zhou Fan, Roy R. Lederman, Yi Sun, Tianhao Wang, and Sheng Xu
    The Annals of Statistics, to appear
  2. Likelihood landscape and maximum likelihood estimation for the discrete orbit recovery model
    Zhou Fan, Yi Sun, Tianhao Wang, and Yihong Wu
    Communications on Pure and Applied Mathematics, 2022