A Weighted Mirror Descent Algorithm for Nonsmooth Convex Optimization Problem

Duy V.N. Luong, Panos Parpas, Daniel Rueckert, Berç Rustem

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Large-scale nonsmooth convex optimization arises across a range of computational areas, including machine learning and computer vision. Problems in these areas often have special domain structures, and treatment that exploits these structures can significantly reduce the computational burden. In this paper, we consider a Mirror Descent method with a special choice of distance function for solving nonsmooth optimization problems over a Cartesian product of convex sets. We propose to use a nonlinear weighted distance in the projection step. The convergence analysis identifies optimal weighting parameters, which in turn yield an optimally weighted step-size strategy for the projection onto each convex set. We show that the optimality bound of the Mirror Descent algorithm with the weighted distance is either an improvement on, or in the worst case as good as, the bound obtained with unweighted distances. We demonstrate the efficiency of the algorithm on the Markov Random Fields optimization problem, where we exploit the problem domain through a weighted log-entropy distance and a weighted Euclidean distance. Promising experimental results demonstrate the effectiveness of the proposed method.
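
For illustration only, the sketch below shows what one weighted Mirror Descent step with a weighted log-entropy distance might look like over a Cartesian product of probability simplices. The block weights, step size, and function names are placeholders introduced here, not the optimal parameters derived in the paper.

```python
# Minimal sketch (not the authors' implementation): one weighted Mirror Descent
# step over a Cartesian product of probability simplices, using a weighted
# log-entropy (KL) distance. Weights and step size are illustrative placeholders.
import numpy as np

def weighted_entropy_md_step(blocks, subgrads, weights, step_size):
    """Exponentiated-gradient update applied block-wise.

    blocks    : list of 1-D arrays, each a point on a probability simplex
    subgrads  : list of subgradient blocks of the objective at `blocks`
    weights   : per-block weights scaling the entropy distance (assumed)
    step_size : global step size (assumed constant here)
    """
    new_blocks = []
    for x, g, w in zip(blocks, subgrads, weights):
        # With the entropy distance scaled by w, the prox-mapping reduces to a
        # multiplicative update followed by re-normalization onto the simplex.
        y = x * np.exp(-(step_size / w) * g)
        new_blocks.append(y / y.sum())
    return new_blocks

# Toy usage: two simplex blocks with dummy subgradients.
blocks = [np.ones(3) / 3, np.ones(4) / 4]
subgrads = [np.array([1.0, 0.5, -0.2]), np.array([0.3, -0.1, 0.0, 0.4])]
blocks = weighted_entropy_md_step(blocks, subgrads, weights=[1.0, 2.0], step_size=0.1)
```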

Original language: English
Pages (from-to): 900-915
Number of pages: 16
Journal: Journal of Optimization Theory and Applications
Volume: 170
Issue number: 3
DOIs
State: Published - 1 Sep 2016
Externally published: Yes

Keywords

  • Markov Random Fields
  • Mirror Descent
  • Subgradient projection
  • Weighted distance

