Last-iterate convergence of mirror methods
Determining how Bregman geometry impacts last-iterate guarantees in variational inequalities.
Multi-agent formulations have delivered critical advances across deep learning, from generative modeling to reinforcement learning, as well as in robust optimization problems such as adversarial training. These successes have renewed interest in the behavior of first-order methods for solving differentiable multi-player games, which are notoriously more challenging than single-objective optimization.
We characterize the last-iterate convergence rate of mirror methods in variational inequalities as a function of the local geometry near the solution, in both deterministic (to be published in SIOPT) and stochastic (COLT 2021) settings. Our work shows how the design of the algorithm, namely its regularization and step-size schedule, interacts with the geometry of the constraints to determine the convergence properties of these methods.
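To make the setting concrete, below is a minimal sketch of one mirror method for a constrained variational inequality: a mirror-prox (extra-gradient) update with an entropic regularizer on a random two-player zero-sum matrix game. The payoff matrix, constant step size, and horizon are illustrative choices, not the setting or the exact algorithm analyzed in the papers.

```python
import numpy as np

def md_step(x, grad, eta):
    """Mirror step with the entropic regularizer (KL Bregman divergence):
    a multiplicative-weights update that keeps the iterate on the simplex."""
    z = x * np.exp(-eta * grad)
    return z / z.sum()

# Illustrative monotone problem: the saddle point of x^T A y over two simplices.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x = np.ones(3) / 3
y = np.ones(3) / 3

eta = 0.1  # illustrative constant step size
for t in range(2000):
    # Extrapolation (leading) step from the current state...
    x_lead = md_step(x, A @ y, eta)
    y_lead = md_step(y, -(A.T @ x), eta)
    # ...then update the state using gradients taken at the leading point.
    x = md_step(x, A @ y_lead, eta)
    y = md_step(y, -(A.T @ x_lead), eta)

print("last iterates:", x, y)
```

Swapping the regularizer (e.g., entropic vs. Euclidean) changes the Bregman geometry of the updates, which is precisely the kind of design choice whose effect on last-iterate rates the analysis quantifies.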
Talks: COLT 2021 (slides, poster), ICCOPT 2022 (slides), and SMAI MODE 2024 (slides).