Alternating proximal-gradient steps for (stochastic) nonconvex-concave minimax problems

Radu Ioan Boţ, Axel Böhm (Corresponding author)

Publications: Contribution to journal · Article · Peer-reviewed


Minimax problems of the form min_x max_y Ψ(x, y) have attracted increased interest, largely due to advances in machine learning, in particular generative adversarial networks and adversarial learning. These are typically trained using variants of stochastic gradient descent for the two players. Although convex-concave problems are well understood, with many efficient solution methods to choose from, theoretical guarantees outside of this setting are sometimes lacking even for the simplest algorithms. In particular, this is the case for alternating gradient descent ascent, where the two agents take turns updating their strategies. To partially close this gap in the literature, we prove a novel global convergence rate for the stochastic version of this method for finding a critical point of ψ(·) := max_y Ψ(·, y) in a setting which is not convex-concave.
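The alternating scheme described above can be sketched in a few lines: the x-player takes a (proximal-)gradient descent step, and the y-player then takes an ascent step using the already-updated x. The function name, step sizes, and the toy problem below are illustrative assumptions, not taken from the paper (the toy Ψ is in fact convex-concave, chosen only so convergence is easy to verify).

```python
# Hedged sketch of alternating gradient descent ascent (GDA).
# Step sizes tau/sigma and the test problem are illustrative choices.

def alternating_gda(grad_x, grad_y, x0, y0, tau=0.01, sigma=0.1, iters=2000):
    x, y = x0, y0
    for _ in range(iters):
        x = x - tau * grad_x(x, y)    # descent step for the x-player
        y = y + sigma * grad_y(x, y)  # ascent step for the y-player, using the NEW x
    return x, y

# Toy smooth problem: Psi(x, y) = x*y - y**2 / 2 (concave in y).
# The inner maximum is attained at y = x, so psi(x) = x**2 / 2,
# whose unique critical point is x = 0.
gx = lambda x, y: y        # d Psi / dx
gy = lambda x, y: x - y    # d Psi / dy
x_star, y_star = alternating_gda(gx, gy, 1.0, 1.0)
# both iterates approach the saddle point (0, 0)
```

The key detail distinguishing the alternating variant from simultaneous GDA is that `grad_y` is evaluated at the freshly updated `x`, which is exactly the "taking turns" behavior analyzed in the paper.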

Original language: English
Pages (from-to): 1884–1913
Number of pages: 30
Journal: SIAM Journal on Optimization
Issue number: 3
Publication status: Published - 2023

Austrian Fields of Science 2012

  • 102019 Machine learning
  • 101016 Optimisation


Keywords

  • complexity
  • minimax
  • nonconvex-concave
  • prox-gradient method
  • saddle point
  • stochastic gradient descent


