Welcome to Fanghui Liu (刘方辉)'s Homepage!


Fanghui Liu

Assistant Professor at the Department of Computer Science, University of Warwick, UK

Email: x@y with x=fanghui.liu and y=warwick.ac.uk

Member of the Centre for Discrete Mathematics and its Applications (DIMAP), Warwick

Visiting research scientist at EPFL, Switzerland
Email: x@y with x=fanghui.liu and y=epfl.ch

[Google Scholar] [homepage at Warwick] [speaker bio]

at Zermatt, Switzerland (Aug. 2022)

About me

I'm currently an assistant professor at the Department of Computer Science, a member of the Centre for Discrete Mathematics and its Applications (DIMAP), and an affiliated member of the Division of Theory and Foundations (FoCS) at the University of Warwick, UK. I'm also a visiting research scientist at EPFL, Switzerland, an ELLIS member, and a co-organizer of the Warwick Foundation of AI seminar.

Previously, I was a postdoctoral researcher at EPFL, Switzerland, hosted by Prof. Volkan Cevher, from 2021 to 2023. Before that, I spent two years as a postdoctoral researcher at ESAT-STADIUS, KU Leuven, Belgium, hosted by Prof. Johan A.K. Suykens.

I obtained my Ph.D. degree from the Institute of Image Processing and Pattern Recognition in the Department of Automation, Shanghai Jiao Tong University (SJTU), in June 2019, supervised by Prof. Jie Yang, and my B.E. degree in Automation from Harbin Institute of Technology (HIT) in 2014.

Jobs

I am looking for motivated Ph.D. students to work with me on machine learning theory or theory-oriented application topics. See Open positions for details.

Due to the large number of requests, I unfortunately may not be able to reply to every email regarding PhD applications. However, I do read all such emails and will contact you soon if your enquiry is indeed in line with my research.

Research Interests

I'm generally interested in the mathematical foundations of machine learning, e.g., statistical learning theory and deep learning theory of under-/over-parameterized models. Currently, I'm also interested in reinforcement learning theory, especially function approximation, from classical statistical learning to sequential decision making.

My research has moved from kernel methods to large-scale computational methodologies in algorithms, and mainly focuses on theoretically understanding the generalization properties of machine learning algorithms, especially over-parameterized models (motivated by neural networks). My research line can be understood from a function-space perspective, moving from RKHS to hyper-RKHS, Barron spaces, and Besov spaces, with the aim of understanding the role of over-parameterization from kernel methods to neural networks.

Refer to my Research Statement if you're interested.

  • reinforcement learning theory, e.g., function approximation [NeurIPS22, ICML23].

Besides, I also spend time on student projects in theory-oriented application topics, e.g., robustness and verification [NeurIPS22, NeurIPS22], neural architecture search [NeurIPS22], and out-of-distribution generalization [NeurIPS22], which aim to build trustworthy machine learning systems.

News

  • [2024-04] We will give a tutorial entitled Scaling and Reliability Foundations in Machine Learning at the 2024 IEEE International Symposium on Information Theory (ISIT) in Athens, Greece, in July.

  • [2024-04] Awarded the DAAD AInet Fellowship, given to outstanding early-career researchers. Topic: Safety and Security in AI.

  • [2024-01] Three papers were accepted by ICLR 2024: generalization of ResNets; robust NAS from benchmark to theory; local linearity for catastrophic overfitting. I will be in Vienna. Feel free to chat.

  • [2023-10] Joined the University of Warwick as an assistant professor.

  • [2023-09] Two papers were accepted by NeurIPS 2023: one is about the global convergence of Transformers; the other is about how over-parameterization affects differential privacy. I will be in New Orleans (again and again). Feel free to chat.

  • [2023-04] Two papers were accepted by ICML 2023: one is about function approximation in online RL; the other is related to benign overfitting. I will be in Hawaii! Feel free to chat.

  • [2023-02] We will give a tutorial entitled Deep learning theory for computer vision at IEEE CVPR 2023 in Vancouver, Canada.

  • [2022-12] We will give a tutorial entitled Neural networks: the good, the bad, the ugly at IEEE ICASSP 2023 on the Greek island of Rhodes.

  • [2022-09] Six papers were accepted by NeurIPS 2022.

  • [2021-10] One paper on double descent of RFF with SGD was posted on arXiv.

  • [2021-10] One paper was accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence.

  • [2021-07] One paper was accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence.

  • [2021-06] One paper was accepted by Journal of Machine Learning Research.

  • [2021-02] One paper was accepted by Machine Learning.

  • [2021-01] Two papers were accepted by AISTATS 2021.

  • [2020-10] One paper was accepted by Journal of Machine Learning Research.
