Abstract
The prevalence of uncertainty in models of engineering and the natural sciences necessitates the inclusion of random parameters in the underlying partial differential equations (PDEs). The resulting decision problems governed by the solution of such random PDEs are infinite-dimensional stochastic optimization problems. In order to obtain risk-averse optimal decisions in the face of such uncertainty, it is common to employ risk measures in the objective function. This leads to risk-averse PDE-constrained optimization problems. We propose a method for solving such problems in which the risk measures are convex combinations of the mean and conditional value-at-risk (CVaR). Since these risk measures can be evaluated by solving a related inequality-constrained optimization problem, we suggest a log-barrier technique to approximate the risk measure. This leads to a new continuously differentiable convex risk measure: the log-barrier risk measure. We show that the log-barrier risk measure fits into the setting of optimized certainty equivalents of Ben-Tal and Teboulle and the expectation quadrangle of Rockafellar and Uryasev. Using the differentiability of the log-barrier risk measure, we derive first-order optimality conditions reminiscent of classical primal and primal-dual interior-point approaches in nonlinear programming. We derive the associated Newton system, propose a reduced symmetric system to calculate the steps, and provide a sufficient condition for local superlinear convergence in the continuous setting. Furthermore, we provide a Γ-convergence result for the log-barrier risk measures to prove convergence of the minimizers to the original nonsmooth problem. The results are illustrated by a numerical study.
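To illustrate the idea behind the smoothing, the following is a minimal scalar sketch of the Rockafellar–Uryasev formulation of CVaR, CVaR_β(X) = min_t { t + E[(X − t)_+]/(1 − β) }, together with a smoothed variant in which the nonsmooth plus-function is replaced by a softplus approximation. The softplus is a stand-in for the paper's log-barrier construction (which differs in detail but plays the same role); the loss sample, level β, and smoothing parameter μ are all illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.normal(size=1000)  # synthetic scalar loss samples (illustrative assumption)
beta = 0.9

# Rockafellar-Uryasev objective: t + E[(X - t)_+] / (1 - beta).
def ru_objective(t, x, beta):
    return t + np.maximum(x - t, 0.0).mean() / (1.0 - beta)

# The minimizer is attained at the beta-quantile, an order statistic of the
# sample, so scanning the sample points recovers the exact sample CVaR.
cvar_exact = min(ru_objective(t, losses, beta) for t in losses)

# Smoothed variant: replace (x)_+ by the softplus mu*log(1 + exp(x/mu)),
# a stand-in for the paper's log-barrier smoothing.
def smoothed_objective(t, x, beta, mu):
    return t + (mu * np.logaddexp(0.0, (x - t) / mu)).mean() / (1.0 - beta)

mu = 1e-3
cvar_smooth = min(smoothed_objective(t, losses, beta, mu) for t in losses)
```

As μ → 0 the smoothed value approaches the exact sample CVaR (here the gap is bounded by μ·log 2/(1 − β)), mirroring the Γ-convergence statement in the abstract, while the smoothed objective is continuously differentiable in t and hence amenable to Newton-type methods.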
Original language | English
---|---
Pages (from–to) | 1-29
Number of pages | 29
Journal | SIAM Journal on Optimization
Volume | 31
Issue number | 1
DOIs |
Publication status | Published - Feb. 2021