A gradient descent akin method for constrained optimization: algorithms and applications

Long Chen, Kai Uwe Bletzinger, Nicolas R. Gauger, Yinyu Ye

Research output: Contribution to journal › Article › peer-review

1 Scopus citation


We present a ‘gradient descent akin’ method (GDAM) for constrained optimization problems, in which the search direction is computed as a linear combination of the negative, normalized objective and constraint gradients. We give fundamental theoretical guarantees on the global convergence of the method. This work focuses on the algorithms and applications of GDAM. We present computational algorithms that adapt common strategies from the gradient descent method. We demonstrate the potential of the method in two engineering applications: shape optimization and sensor network localization. When practically implemented, GDAM is robust and very competitive in solving the considered large and challenging optimization problems.
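As a rough illustration of the idea stated in the abstract, the sketch below combines the negative, normalized objective and constraint gradients into a single search direction and applies it to a hypothetical toy problem. The problem, the log-barrier constraint handling, the mixing weight `gamma`, and the step-size rule are all assumptions for illustration; the paper's actual update and parameter rules may differ.

```python
import numpy as np

def gdam_direction(grad_f, grad_c, gamma=0.9):
    """Sketch of a GDAM-style search direction: a linear combination of
    the negative, normalized objective gradient and the normalized
    constraint gradient, weighted by gamma in (0, 1)."""
    n_f = grad_f / np.linalg.norm(grad_f)
    n_c = grad_c / np.linalg.norm(grad_c)
    return -(n_f + gamma * n_c)

# Hypothetical toy problem (not from the paper):
#   minimize f(x) = ||x - (2, 0)||^2   subject to   x0 - 1 <= 0,
# handled here via the log-barrier phi(x) = -log(1 - x0).
target = np.array([2.0, 0.0])
grad_f = lambda x: 2.0 * (x - target)
grad_phi = lambda x: np.array([1.0 / (1.0 - x[0]), 0.0])

x = np.array([0.0, 0.5])  # strictly feasible starting point
for _ in range(500):
    d = gdam_direction(grad_f(x), grad_phi(x), gamma=0.9)
    step = 1e-2
    # halve the step until the iterate stays strictly feasible
    while x[0] + step * d[0] >= 1.0:
        step *= 0.5
    x = x + step * d
```

With `gamma < 1` the combined direction always has a negative inner product with the objective gradient, so it remains a descent direction for the objective while the normalized barrier term steers the iterates away from the constraint boundary.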

Original language: English
Journal: Optimization Methods and Software
State: Accepted/In press - 2024


  • 65K05
  • 90C22
  • 90C30
  • 90C51
  • 90C90
  • negative and normalized gradients
  • gradient descent
  • inequality constrained optimization
  • interior-point method
  • sensor network localization
  • shape optimization


