Abstract
In this work we aim to solve a convex-concave saddle point problem in which the coupling function is smooth in one variable and nonsmooth in the other, and is not assumed to be linear in either. The problem is augmented by a nonsmooth regulariser in the smooth component. We propose and investigate a novel algorithm, named OGAProx, consisting of an optimistic gradient ascent step in the smooth variable coupled with a proximal step of the regulariser, alternated with a proximal step in the nonsmooth component of the coupling function. We consider three settings for the saddle point problem under investigation: convex-concave, convex-strongly concave, and strongly convex-strongly concave. For the iterates we obtain (weak) convergence, a convergence rate of order O(1/K), and linear convergence like O(θ^K) with θ < 1, respectively. In terms of function values we obtain ergodic convergence rates of order O(1/K), O(1/K^2) and O(θ^K) with θ < 1, respectively. We validate our theoretical considerations on a nonsmooth-linear saddle point problem, the training of multi-kernel support vector machines, and a classification problem incorporating minimax group fairness.
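To illustrate the alternating structure described above, the following is a minimal sketch of an OGAProx-style iteration on a toy strongly convex-strongly concave problem. The problem, step sizes, and parameter names are my own illustrative choices, not taken from the paper: min_x max_y x·y − (μ/2)y², whose saddle point is (0, 0). Here the coupling Φ(x, y) = x·y is smooth in y, and g(y) = (μ/2)y² plays the role of the regulariser, handled via its proximal mapping.

```python
def ogaprox_sketch(x0=1.0, y0=1.0, mu=1.0, sigma=0.2, tau=0.2, theta=1.0, iters=1000):
    """Illustrative OGAProx-style iteration on min_x max_y x*y - (mu/2)*y**2.

    sigma, tau: step sizes; theta: optimism (extrapolation) weight.
    Returns the final iterate (x, y), which should approach the saddle (0, 0).
    """
    x_prev, x, y = x0, x0, y0
    for _ in range(iters):
        # Optimistic gradient ascent in y: extrapolated gradient of Phi w.r.t. y
        # (here grad_y Phi(x, y) = x), followed by the prox of g; for
        # g(y) = (mu/2) y^2 the prox is v -> v / (1 + sigma * mu).
        v = y + sigma * ((1 + theta) * x - theta * x_prev)
        y_new = v / (1 + sigma * mu)
        # Proximal step in x for the coupling Phi(., y_new) = (.) * y_new:
        # argmin_x { x * y_new + (1/(2*tau)) * (x - x_k)^2 } = x_k - tau * y_new.
        x_prev, x, y = x, x - tau * y_new, y_new
    return x, y
```

With these (assumed) parameters the iterates contract linearly toward (0, 0), in line with the O(θ^K) behaviour stated for the strongly convex-strongly concave case.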
| Original language | English |
| --- | --- |
| Pages (from-to) | 925-966 |
| Number of pages | 42 |
| Journal | Computational Optimization and Applications |
| Volume | 86 |
| Issue number | 3 |
| Early online date | 4 Jun 2022 |
| DOIs | |
| Publication status | Published - 2023 |
Austrian Fields of Science 2012
- 101016 Optimisation
Keywords
- Acceleration
- Convergence rate
- Convex-concave
- Linear convergence
- Minimax algorithm
- Saddle point problem