Waïss Azizian

PhD student in machine learning and optimization at Université Grenoble Alpes


Office 143 · LJK lab (IMAG)

Université Grenoble Alpes

Grenoble, France

I am a final-year PhD student in machine learning and optimization at the LJK lab within Université Grenoble Alpes. I am fortunate to be advised by Franck Iutzeler, Jérôme Malick, and Panayotis Mertikopoulos. Before starting my PhD, I studied at ENS Paris and graduated from the MVA master’s program.

The aim of my research is two-fold: (i) advancing our understanding of the intricate phenomena at play in deep learning, using tools from optimization, dynamical systems, probability and statistics; (ii) leveraging this knowledge to deliver more reliable and efficient machine learning systems.

Keywords: stochastic optimization, deep learning, reliable ML, LLMs

Resume: PDF · webpage


research

My research spans four main areas that contribute to a principled understanding of deep learning systems and their optimization dynamics. You can find a list of my publications on the publications page.

You can browse my research activity on arXiv, Google Scholar, DBLP, GitHub, and LinkedIn.

news

Dec 15, 2025 Presented our work on the long-run behaviour of SGD on non-convex landscapes to the Inria Argo team in Paris (slides).
Dec 10, 2025 Delivered an invited seminar on stochastic optimization in deep learning at Morgan Stanley Machine Learning Research, New York (slides).
Oct 14, 2025 Wrapped up my internship at Morgan Stanley ML Research, New York, where I investigated the in-context learning capabilities of LLMs; see our preprint.
May 31, 2025 Completed my PhD internship at Apple Machine Learning Research in Paris, working on uncertainty quantification methods for large language models in Marco Cuturi’s team; see our preprint.
Apr 20, 2025 Our paper “The global convergence time of stochastic gradient descent in non-convex landscapes” was accepted at ICML 2025! Preprint available.

publications

2025

  1. How does the pretraining distribution shape in-context learning? Task selection, generalization, and robustness
    Waïss Azizian and Ali Hasan
    arXiv: 2510.01163, 2025
  2. The geometries of truth are orthogonal across tasks
    Waïss Azizian, Michael Kirchhof, Eugene Ndiaye, and 4 more authors
    In ICML 2025 Workshop on Reliable and Responsible Foundation Models, 2025
  3. The global convergence time of stochastic gradient descent in non-convex landscapes: sharp estimates via large deviations
    Waïss Azizian, Franck Iutzeler, Jérôme Malick, and 1 more author
    In ICML, 2025
  4. Almost sure convergence of stochastic gradient methods under gradient domination
    Simon Weissmann, Sara Klein, Waïss Azizian, and 1 more author
    Transactions on Machine Learning Research, 2025

2024

  1. The rate of convergence of Bregman proximal methods: local geometry versus regularity versus sharpness
    Waïss Azizian, Franck Iutzeler, Jérôme Malick, and 1 more author
    SIAM Journal on Optimization, 2024
  2. What is the long-run distribution of stochastic gradient descent? A large deviations analysis
    Waïss Azizian, Franck Iutzeler, Jérôme Malick, and 1 more author
    In ICML, 2024
  3. skwdro: a library for Wasserstein distributionally robust machine learning
    Florian Vincent, Waïss Azizian, Franck Iutzeler, and 1 more author
    arXiv: 2410.21231, 2024

2023

  1. Regularization for Wasserstein distributionally robust optimization
    Waïss Azizian, Franck Iutzeler, and Jérôme Malick
    ESAIM: Control, Optimisation and Calculus of Variations, 2023
  2. Exact generalization guarantees for (regularized) Wasserstein distributionally robust models
    Waïss Azizian, Franck Iutzeler, and Jérôme Malick
    In NeurIPS, 2023
  3. Automatic Rao-Blackwellization for sequential Monte Carlo with belief propagation
    Waïss Azizian, Guillaume Baudart, and Marc Lelarge
    In ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling, 2023

2021

  1. Expressive power of invariant and equivariant graph neural networks
    Waïss Azizian and Marc Lelarge
    In ICLR, 2021
  2. The last-iterate convergence rate of optimistic mirror descent in stochastic variational inequalities
    Waïss Azizian, Franck Iutzeler, Jérôme Malick, and 1 more author
    In COLT, 2021

2020

  1. Accelerating smooth games by manipulating spectral shapes
    Waïss Azizian, Damien Scieur, Ioannis Mitliagkas, and 2 more authors
    In AISTATS, 2020
  2. A tight and unified analysis of gradient-based methods for a whole spectrum of differentiable games
    Waïss Azizian, Ioannis Mitliagkas, Simon Lacoste-Julien, and 1 more author
    In AISTATS, 2020
  3. Linear lower bounds and conditioning of differentiable games
    Adam Ibrahim, Waïss Azizian, Gauthier Gidel, and 1 more author
    In ICML, 2020