About

I am a PhD student in machine learning and optimization in Grenoble. I have the honour of having the amazing trio Franck Iutzeler, Jérôme Malick and Panayotis Mertikopoulos as advisors. More precisely, I am at the LJK lab, which is part of UGA. I had the chance to study at ENS Paris and to graduate from the MVA master.

Contact

I am in charge of the team’s seminar; please get in touch if you would like to present!

Research

My current interests are robust optimization, non-convex stochastic optimization and deep learning optimizers.

See arXiv, Google Scholar, DBLP, GitHub and my resume.

Stochastic optimization in deep learning

In our latest work, we seek to answer a simple question: what is the asymptotic distribution of SGD on general non-convex objectives? Leveraging large deviations theory, we obtain a description of the invariant measure of SGD (ICML 2024, poster). This work was presented at the Thoth team seminar (slides) and at the Séminaire de Statistique of the LPSM lab in Paris (slides).
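
For concreteness, and with notation chosen here for illustration: SGD with step size $\eta$ on an objective $f$ iterates

$$x_{t+1} = x_t - \eta\, \hat g(x_t), \qquad \mathbb{E}\big[\hat g(x)\big] = \nabla f(x),$$

and the question is which probability measure the iterates $x_t$ settle into in the long run, without any convexity assumption on $f$.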

Wasserstein Distributionally Robust Optimization

Inspired by the success of entropic regularization in optimal transport, we study the regularization of WDRO (ESAIM COCV). We also show that these estimators enjoy attractive generalization guarantees (NeurIPS 23, slides).
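
Schematically, and with illustrative notation: WDRO replaces empirical risk minimization over the data distribution $\hat P_n$ by a worst case over a Wasserstein ball of radius $\rho$,

$$\min_\theta\ \sup_{Q \,:\, W(Q,\hat P_n) \le \rho}\ \mathbb{E}_Q\big[\ell_\theta\big],$$

and the regularization smooths the inner supremum, in the same spirit as entropic regularization in optimal transport.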

I presented early versions of these works at a workshop in Erice in May 2022 (slides), and the second part at FOCM 2023 in Paris (poster) as well as at NeurIPS@Paris 2023 (slides).

Last-iterate convergence of mirror methods

We characterize the last-iterate convergence rate of mirror methods in variational inequalities as a function of the local geometry of the Bregman divergence near the solution, both in the deterministic (to be published in SIOPT) and stochastic settings (COLT 21).
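
In symbols, with notation chosen for illustration: a mirror step for a variational inequality with operator $v$, step size $\gamma_t$ and Bregman divergence $D_h$ reads

$$x_{t+1} = \operatorname{arg\,min}_{x}\ \big\{ \gamma_t \langle v(x_t), x \rangle + D_h(x, x_t) \big\},$$

and the speed at which $x_t$ itself approaches the solution $x^*$ is governed by the local behaviour of $D_h(x^*, \cdot)$ around $x^*$.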

The latter was presented at COLT 21 (slides, poster) and at ICCOPT 22 (slides) while the former was presented at SMAI MODE 2024 (slides).

Graph Neural Networks

With Marc Lelarge, we precisely describe the approximation capabilities of invariant and equivariant graph neural networks (ICLR 21). This work was presented at a MIPT-UGA workshop and at the Thoth team seminar (slides).
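
To fix the terminology: writing $\sigma \cdot G$ for the action of a node permutation $\sigma$ on a graph $G$, a network $f$ is invariant when $f(\sigma \cdot G) = f(G)$ and equivariant when $f(\sigma \cdot G) = \sigma \cdot f(G)$; the results describe which such functions these architectures can approximate.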

Smooth game optimization for Machine Learning

With Gauthier Gidel, Ioannis Mitliagkas and Simon Lacoste-Julien, we propose a tight and unified analysis of gradient-based methods in games (AISTATS 20, slides) and leverage matrix iteration theory to study accelerated methods in games (AISTATS 20).
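
As a rough common template (notation mine): such methods iterate $x_{t+1} = x_t - \eta\, V(x_t)$ on the game's vector field $V$, obtained by stacking the players' gradients, and the spectrum of the Jacobian of $V$ governs their convergence, which is where matrix iteration theory enters.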

Teaching

2023-2024

2022-2023