On the convergence of adaptive first order methods: proximal gradient and alternating minimization algorithms
Puya Latafat
2024-01-01
Abstract
Building upon recent works on linesearch-free adaptive proximal gradient methods, this paper proposes adaPG$^{q,r}$, a framework that unifies and extends existing results by providing larger stepsize policies and improved lower bounds. Different choices of the parameters $q$ and $r$ are discussed, and the efficacy of the resulting methods is demonstrated through numerical simulations. To better understand the underlying theory, convergence is established in a more general setting that allows for time-varying parameters. Finally, an adaptive alternating minimization algorithm is derived by exploring the dual setting. This algorithm not only incorporates additional adaptivity, but also extends applicability beyond standard strongly convex settings.
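To make the kind of method the abstract refers to concrete, the following is a minimal, self-contained Python sketch of a linesearch-free adaptive proximal gradient iteration. It is an illustration only: the stepsize update shown here (geometric growth capped by a local Lipschitz estimate of the gradient along the last step) is a generic heuristic in the spirit of adaptive proximal gradient methods, not the paper's adaPG$^{q,r}$ policy, and the names `adaptive_prox_grad`, `grad_f`, and `prox_g` are hypothetical. The exact stepsize rule, the roles of $q$ and $r$, and the improved lower bounds are given in the paper.

```python
import numpy as np

def adaptive_prox_grad(grad_f, prox_g, x0, gamma0=1.0, max_iter=500, tol=1e-8):
    # Generic linesearch-free adaptive proximal gradient sketch (illustrative only):
    # the stepsize grows geometrically but is capped by a local curvature estimate
    # along the last step. This is NOT the adaPG^{q,r} policy from the paper.
    x_prev, gamma_prev = x0, gamma0
    g_prev = grad_f(x_prev)
    x = prox_g(x_prev - gamma_prev * g_prev, gamma_prev)
    gamma = gamma_prev
    for _ in range(max_iter):
        g = grad_f(x)
        dx, dg = x - x_prev, g - g_prev
        # Local Lipschitz estimate of grad_f along the most recent step.
        L_loc = np.linalg.norm(dg) / max(np.linalg.norm(dx), 1e-16)
        # Allow geometric growth, but never exceed half the inverse local estimate.
        gamma_new = min(gamma * np.sqrt(1.0 + gamma / gamma_prev),
                        0.5 / max(L_loc, 1e-16))
        x_prev, g_prev, gamma_prev = x, g, gamma
        gamma = gamma_new
        x = prox_g(x_prev - gamma * g, gamma)
        if np.linalg.norm(x - x_prev) <= tol * max(1.0, np.linalg.norm(x_prev)):
            break
    return x


# Example calling convention: LASSO with f(x) = 0.5*||Ax - b||^2 and g(x) = lam*||x||_1,
# whose proximal mapping is soft-thresholding.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda z, gamma: np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)
x_star = adaptive_prox_grad(grad_f, prox_g, np.zeros(100))
```

The LASSO snippet at the end only demonstrates the calling convention with a smooth least-squares term and an $\ell_1$ proximal operator; any smooth gradient and proximable regularizer can be plugged in the same way.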
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| 2311.18431v2.pdf | open access | Post-print | Creative Commons | 624.09 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.