## Abstract

Minimax problems have attracted increased interest, largely due to advances in machine learning and in particular generative adversarial networks, which are trained using variants of stochastic gradient descent for the two players.

Although convex-concave problems are well understood with many efficient solution methods to choose from, theoretical guarantees in the nonconvex setting are sometimes lacking even for the simplest algorithms.

To partially close this gap we prove the first global convergence rate for stochastic alternating gradient descent ascent in a setting which is not convex-concave.
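To make the algorithm concrete, below is a minimal sketch of alternating gradient descent ascent on a toy strongly-convex-strongly-concave objective. The objective, step size, and iteration count are illustrative assumptions, not the setting analyzed in the paper, and exact gradients are used for clarity; the stochastic variant would replace them with noisy estimates.

```python
# Alternating gradient descent ascent (GDA) on a toy saddle-point problem:
#   f(x, y) = x**2 / 2 + x * y - y**2 / 2
# (hypothetical example objective; not the problem class from the paper)

def alternating_gda(x, y, step=0.1, iters=300):
    for _ in range(iters):
        # descent step for the min player; grad_x f(x, y) = x + y
        x = x - step * (x + y)
        # ascent step for the max player, using the *updated* x
        # (this is what makes the scheme "alternating" rather than
        # simultaneous); grad_y f(x, y) = x - y
        y = y + step * (x - y)
    return x, y

x, y = alternating_gda(1.0, 1.0)
# both iterates approach the saddle point (0, 0)
```

The key design point is that the second player reacts to the first player's freshly updated iterate within each round, in contrast to simultaneous updates where both players use the same stale iterates.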


| Original language | English |
| --- | --- |
| Publisher | arXiv.org |
| Publication status | Published - 27 Jul 2020 |

## Austrian Fields of Science 2012

- 101016 Optimisation
- 101017 Game theory