Abstract

Many modern optimization problems have objective functions that involve solution mappings or optimal-value functions of other optimization problems. In many cases those solution mappings and optimal-value functions are non-smooth, and the optimal-value function may in addition be non-convex (even when the defining data is smooth and convex). Moreover, because they stem from solving optimization problems, these solution mappings and value functions are usually not known explicitly through any closed formula. Hence there is no formula to differentiate (even in the sense of generalized derivatives). This presents an obvious challenge for solving the “upper” optimization problem, as the required derivatives cannot be computed.

We present an approach that regularizes and approximates solution mappings of parametric convex optimization problems by combining an interior penalty (log-barrier) with Tikhonov regularization. Because the regularized solution mappings are single-valued and smooth under reasonable conditions, they can also be used to build a computationally practical smoothing of the associated optimal-value function.
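As a minimal illustration of this kind of smoothing (a toy sketch, not the actual algorithm of this work), consider the nonsmooth value function v(x) = min_{-1 ≤ y ≤ 1} x·y = -|x|. Adding a log-barrier for the bounds and a Tikhonov term to the inner problem makes the inner solution unique and the regularized value function smooth in x; by an envelope-theorem argument, its derivative in x equals the regularized inner solution. The function name `smoothed_value` and the parameters `mu` (barrier weight) and `eps` (Tikhonov weight) are illustrative choices, not notation from the text.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def smoothed_value(x, mu=1e-3, eps=1e-3):
    """Log-barrier + Tikhonov regularization of v(x) = min_{|y|<=1} x*y.

    The barrier -mu*(log(1-y) + log(1+y)) keeps y strictly inside (-1, 1);
    the Tikhonov term (eps/2)*y**2 makes the inner minimizer unique.
    Returns the regularized optimal value and the single-valued solution.
    """
    obj = lambda y: (x * y
                     - mu * (np.log(1.0 - y) + np.log(1.0 + y))
                     + 0.5 * eps * y ** 2)
    res = minimize_scalar(obj, bounds=(-1 + 1e-12, 1 - 1e-12),
                          method="bounded")
    return res.fun, res.x

# The smoothed value approximates v(x) = -|x| for small mu, eps, and a
# central finite difference of the value matches the inner solution y*(x),
# as the envelope theorem predicts for this regularized problem.
v, y_star = smoothed_value(0.5)
h = 1e-6
fd = (smoothed_value(0.5 + h)[0] - smoothed_value(0.5 - h)[0]) / (2 * h)
```

Driving `mu` and `eps` to zero recovers the original (nonsmooth) value function in the limit, which is what makes such regularized values usable as smooth surrogates in an outer optimization loop.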

One motivating application is two-stage (possibly nonconvex) stochastic programming. We show that our approach, being computationally implementable, provides locally bounded upper bounds on the subdifferential of the optimal-value function of qualified convex problems. As a by-product of our development, we also recover the fact that, in this setting, the value function is locally Lipschitz continuous. Numerical experiments are presented for two-stage convex stochastic programming problems, comparing our approach with a bundle method for nonsmooth optimization.

Another application is a class of hierarchical decision problems that can be viewed as single-leader multi-follower games. The leader's objective function involves the decisions of the followers (agents), which are taken independently by solving their own convex optimization problems. A traditional approach in this setting is via mixed complementarity formulations. However, that approach can become impractical when the number of agents and/or scenarios (in the stochastic setting) grows large. We show how our approach can be used to derive both agent-wise and scenario-wise decomposition algorithms. Numerical experiments and comparisons with the complementarity solver PATH are presented for the two-stage stochastic Walrasian equilibrium problem.

Contributors/co-authors: Pedro Borges, Claudia Sagastizábal