Welcome to Fanghui Liu (刘方辉)'s Homepage!


Fanghui Liu

Assistant Professor at Department of Computer Science, University of Warwick, UK

Email: x@y with x=fanghui.liu and y=warwick.ac.uk

Member of Centre for Discrete Mathematics and its Applications (DIMAP), Warwick

Visiting research scientist at EPFL, Switzerland
Email: x@y with x=fanghui.liu and y=epfl.ch

[Google Scholar] [homepage at Warwick] [speaker bio]

at Zermatt, Switzerland (Aug. 2022)

About me

I'm currently an assistant professor in the Department of Computer Science, a member of the Centre for Discrete Mathematics and its Applications (DIMAP), and an affiliated member of the Division of Theory and Foundations (FoCS) at the University of Warwick, UK. I'm also a visiting research scientist at EPFL, Switzerland. I am an ELLIS Member and an IEEE Senior Member.

We're organising the Warwick Foundation of AI seminar!

We're organising the workshop Fine-Tuning in Modern Machine Learning: Principles and Scalability at NeurIPS 2024!

Previously, I was a postdoctoral researcher at EPFL, Switzerland, hosted by Prof. Volkan Cevher, from 2021 to 2023. Before that, I spent two years as a postdoctoral researcher at ESAT-STADIUS, KU Leuven, Belgium, hosted by Prof. Johan A.K. Suykens.

I obtained the Ph.D. degree from the Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University (SJTU), in June 2019, supervised by Prof. Jie Yang, and the B.E. degree in Automation from Harbin Institute of Technology (HIT) in 2014.

Jobs

I am looking for motivated Ph.D. students to work with me on machine learning theory or trustworthy machine learning. See Open positions for details.

Due to the large number of requests, I unfortunately may not be able to reply to every email regarding PhD applications. However, I do look at all applicants who have emailed me, and I will contact you soon if your enquiry is indeed in line with my research.

Research Interests

I'm generally interested in the mathematical foundations of machine learning, e.g., statistical learning theory and deep learning theory. My research spans from kernel methods to large-scale computational methodologies and algorithms, and mainly focuses on theoretically understanding the generalization properties of machine learning algorithms, especially over-parameterized models (motivated by neural networks). My research line can be understood from the perspective of function spaces, from RKHS to hyper-RKHS and Barron spaces. Besides, I also work on trustworthy machine learning, both theoretically and empirically. My research (past, ongoing, and future) focuses on the following directions:

  • machine learning theory: what is the largest function space that neural networks can learn, both statistically and computationally efficiently (computational-statistical gaps)?

  • fine-tuning: expanding the frontiers of empirical and theoretical knowledge on when and where to fine-tune, and how much we can fine-tune, precisely, efficiently, and robustly

  • trustworthy machine learning

News

  • [2024-08] I will serve as an Area Chair for AAMAS 2025, ICLR 2025, and AISTATS 2025.

  • [2024-05] Two papers were accepted by ICML 2024: one on high-dimensional kernel methods under distribution shift; the other on adversarial attacks on foundation models.

  • [2024-04] One paper on the separation between kernel methods and neural networks, from the perspective of function spaces, was accepted by JMLR.

  • [2024-04] We will give a tutorial entitled Scaling and Reliability Foundations in Machine Learning at the 2024 IEEE International Symposium on Information Theory (ISIT) in Athens, Greece, in July.

  • [2024-04] Awarded the DAAD AInet Fellowship, which recognises outstanding early-career researchers. Topic: Safety and Security in AI.

  • [2024-01] Three papers were accepted by ICLR 2024: generalization of ResNets; robust NAS from benchmark to theory; local linearity for catastrophic overfitting. I will be in Vienna. Feel free to chat.

  • [2023-09] Two papers were accepted by NeurIPS 2023: one on the global convergence of Transformers; the other on how over-parameterization affects differential privacy. I will be in New Orleans (again and again). Feel free to chat.

  • [2023-04] Two papers were accepted by ICML 2023: one on function approximation in online RL; the other related to benign overfitting. I will be in Hawaii! Feel free to chat.

  • [2023-02] We will give a tutorial entitled Deep learning theory for computer vision at IEEE CVPR 2023 in Vancouver, Canada.

  • [2022-12] We will give a tutorial entitled Neural networks: the good, the bad, the ugly at IEEE ICASSP 2023 on the Greek island of Rhodes.

  • [2022-09] Six papers were accepted by NeurIPS 2022.

  • [2021-10] One paper on double descent of RFF with SGD was posted on arXiv.

  • [2021-10] One paper was accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence.

  • [2021-07] One paper was accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence.

  • [2021-06] One paper was accepted by Journal of Machine Learning Research.

  • [2021-02] One paper was accepted by Machine Learning.

  • [2021-01] Two papers were accepted by AISTATS 2021.

  • [2020-10] One paper was accepted by Journal of Machine Learning Research.
