Publications
For the complete list, please refer to my Google Scholar Profile.
* indicates equal contribution
Benchmark
The adversarially trained NAS benchmark (NAS-RobBench-201) from our ICLR 2024 paper has been released! See [project website] for details.
Preprints
Can overfitted deep neural networks in adversarial training generalize? – An approximation viewpoint. [arXiv].
Zhongjie Shi, Fanghui Liu, Yuan Cao, Johan A.K. Suykens.
Benign overfitting in fixed dimension via physics-informed learning with smooth inductive bias. [arXiv].
Honam Wang, Wendao Wu, Fanghui Liu, Yiping Lu.
Accepted Papers
Learning with norm constrained, over-parameterised, two-layer neural networks. [arXiv]
Fanghui Liu, Leello Dadi, and Volkan Cevher.
Journal of Machine Learning Research (JMLR), 2024.
[TLDR: We provide the "best" trade-off between \(\epsilon\)-covering and the input dimension in metric entropy. Our results can also be extended to the interpolation setting.]
Scalable learned model soup on a single GPU: an efficient subspace training strategy. [arXiv], [code]
Tao Li*, Weisen Jiang*, Fanghui Liu, Xiaolin Huang, James Kwok.
in the European Conference on Computer Vision (ECCV), 2024.
High-dimensional kernel methods under covariate shift: data-dependent implicit regularization. [arXiv]
Yihang Chen, Fanghui Liu, Taiji Suzuki, Volkan Cevher.
in the 41st International Conference on Machine Learning (ICML), 2024.
Revisiting character-level adversarial attacks for language models. [paper], [code].
Elias Abad Rocamora, Yongtao Wu, Fanghui Liu, Grigorios Chrysos, Volkan Cevher.
in the 41st International Conference on Machine Learning (ICML), 2024.
Presented at the ICLR 2024 Workshop on Secure and Trustworthy Large Language Models.
[TLDR: We introduce an efficient algorithm for character-level attacks and show that typo-correctors do not defend against them!]
Generalization of Deep ResNets in the mean-field regime. [link].
Yihang Chen, Fanghui Liu, Yiping Lu, Grigorios Chrysos, Volkan Cevher.
in the 12th International Conference on Learning Representations (ICLR), 2024. [Spotlight]
Robust NAS benchmark under adversarial training: assessment, theory, and beyond. [paper], [project website].
Yongtao Wu, Fanghui Liu, Carl-Johann Simon-Gabriel, Grigorios Chrysos, Volkan Cevher.
in the 12th International Conference on Learning Representations (ICLR), 2024.
Efficient local linearity regularization to overcome catastrophic overfitting. [paper], [code].
Elias Abad Rocamora, Fanghui Liu, Grigorios Chrysos, Pablo M. Olmos, Volkan Cevher.
in the 12th International Conference on Learning Representations (ICLR), 2024.
On the convergence of encoder-only shallow Transformers. [arXiv].
Yongtao Wu, Fanghui Liu, Grigorios Chrysos, Volkan Cevher.
in the 37th Conference on Neural Information Processing Systems (NeurIPS), 2023.
Initialization matters: Privacy-utility analysis of overparameterized neural networks. [arXiv].
Jiayuan Ye, Zhenyu Zhu, Fanghui Liu, Reza Shokri, Volkan Cevher.
in the 37th Conference on Neural Information Processing Systems (NeurIPS), 2023.
What can online reinforcement learning with function approximation benefit from general coverage conditions? [arXiv].
Fanghui Liu, Luca Viano, Volkan Cevher.
in the 40th International Conference on Machine Learning (ICML), 2023.
Benign overfitting in deep neural networks under lazy training. [arXiv].
Zhenyu Zhu, Fanghui Liu, Grigorios Chrysos, Francesco Locatello, Volkan Cevher.
in the 40th International Conference on Machine Learning (ICML), 2023.
On the double descent of random features models trained by SGD. [arXiv], [code], [slides].
Fanghui Liu, Johan A.K. Suykens, Volkan Cevher.
in the 36th Conference on Neural Information Processing Systems (NeurIPS), 2022.
Presented at Workshop on the Theory of Overparameterized Machine Learning (TOPML) 2022.
[TLDR: We study double descent and its interplay with the data, parameter, and compute budgets (scaling law), allowing for dimension-free results.]
Understanding deep neural function approximation in reinforcement learning via \(\epsilon\)-greedy exploration. [arXiv].
Fanghui Liu, Luca Viano, Volkan Cevher.
in the 36th Conference on Neural Information Processing Systems (NeurIPS), 2022.
[TLDR: This is an attempt at nonlinear function approximation in online RL via neural networks beyond the lazy training regime.]
Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization). [arXiv], [slides].
Zhenyu Zhu, Fanghui Liu, Grigorios Chrysos, Volkan Cevher.
in the 36th Conference on Neural Information Processing Systems (NeurIPS), 2022.
[TLDR: We aim to close the gap on the question: will over-parameterisation help or hurt robustness?]
Generalization properties of NAS under activation and skip connection search. [arXiv].
Zhenyu Zhu, Fanghui Liu, Grigorios Chrysos, Volkan Cevher.
in the 36th Conference on Neural Information Processing Systems (NeurIPS), 2022.
Extrapolation and spectral bias of neural nets with Hadamard product: a polynomial net study. [arXiv].
Yongtao Wu, Zhenyu Zhu, Fanghui Liu, Grigorios Chrysos, Volkan Cevher.
in the 36th Conference on Neural Information Processing Systems (NeurIPS), 2022.
Sound and complete verification of polynomial networks. [arXiv].
Elias Abad Rocamora, Mehmet Fatih Sahin, Fanghui Liu, Grigorios Chrysos, Volkan Cevher.
in the 36th Conference on Neural Information Processing Systems (NeurIPS), 2022.
Random features for kernel approximation: a survey on algorithms, theory, and beyond. [arXiv], [code].
Fanghui Liu, Xiaolin Huang, Yudong Chen, and Johan A.K. Suykens.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.
[TLDR: This is a comprehensive survey of random features, from algorithms to theory. Over-parameterisation is only briefly covered.]
Generalization properties of hyper-RKHS and its applications. [arXiv], [link], [code].
Fanghui Liu*, Lei Shi*, Xiaolin Huang, Jie Yang, and Johan A.K. Suykens.
Journal of Machine Learning Research (JMLR), 2021.
[TLDR: This work analyses learning beyond RKHS, relying on a non-trivial concentration inequality for dependent data.]
Towards a unified quadrature framework for large scale kernel methods. [arXiv], [code].
Fanghui Liu, Xiaolin Huang, Yudong Chen, and Johan A.K. Suykens.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.
Kernel regression in high dimensions: Refined analysis beyond double descent. [link], [code], [slides].
Fanghui Liu, Zhenyu Liao, and Johan A.K. Suykens.
in the 24th International Conference on Artificial Intelligence and Statistics (AISTATS), 2021.
[TLDR: This work provides a refined analysis of high-dimensional kernel regression, extending double descent theory.]
Fast learning in reproducing kernel Krein spaces via signed measures. [link], [poster], [code].
Fanghui Liu, Xiaolin Huang, Yingyi Chen, and Johan A.K. Suykens.
in the 24th International Conference on Artificial Intelligence and Statistics (AISTATS), 2021.
Analysis of least squares regularized regression in reproducing kernel Krein spaces. [arXiv].
Fanghui Liu*, Lei Shi*, Xiaolin Huang, Jie Yang, and Johan A.K. Suykens.
Machine Learning, 2021.
[TLDR: This work develops a new proof framework to handle non-positive (indefinite) kernels in learning theory.]
Learning data-adaptive nonparametric kernels. [link] [code].
Fanghui Liu, Xiaolin Huang, Chen Gong, Jie Yang, and Li Li.
Journal of Machine Learning Research (JMLR), 2020.
Random Fourier features via fast surrogate leverage weighted sampling. [arXiv], [code].
Fanghui Liu, Xiaolin Huang, Yudong Chen, Jie Yang, and Johan A.K. Suykens.
in the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), 2020.
[TLDR: This work presents an efficient leverage-score sampling algorithm for large-scale kernel approximation, motivated by kernel alignment.]
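For readers new to this line of work, here is a minimal, self-contained sketch of vanilla random Fourier features for the Gaussian kernel (Rahimi and Recht), as background for the random-features papers above. It only illustrates the general idea and is not the leverage-weighted sampling scheme of the AAAI 2020 paper; the function name rff_features and all parameter values are assumptions made for this example.

import numpy as np

def rff_features(X, num_features=200, sigma=1.0, seed=0):
    """Random Fourier features approximating the Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density N(0, sigma^{-2} I),
    # i.e. plain Monte Carlo sampling (no leverage weighting).
    W = rng.normal(scale=1.0 / sigma, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# Z Z^T approximates the exact Gaussian kernel matrix as num_features grows.
X = np.random.default_rng(1).normal(size=(5, 3))
Z = rff_features(X, num_features=5000)
K_approx = Z @ Z.T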