Journal Article
The authors introduce a machine-learning framework to warm-start fixed-point optimization algorithms. The architecture consists of a neural network that maps problem parameters to warm starts, followed by a predefined number of fixed-point iterations.
The authors propose two loss functions, designed to minimize either the fixed-point residual or the distance to a ground-truth solution. In this way, the neural network predicts warm starts with the end-to-end goal of minimizing the downstream loss.
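To make the pipeline concrete, the following is a minimal sketch, not the authors' implementation: the one-layer network and the names warm_start_net, run_fixed_point, residual_loss, and regression_loss are illustrative assumptions, and T stands for a generic fixed-point operator.

```python
import jax
import jax.numpy as jnp

def warm_start_net(params, theta):
    # Hypothetical one-layer network mapping problem parameters to a warm start.
    W, b = params
    return jnp.tanh(W @ theta + b)

def run_fixed_point(T, z0, k):
    # Run a predefined number k of fixed-point iterations from z0.
    z = z0
    for _ in range(k):
        z = T(z)
    return z

def residual_loss(params, T, theta, k):
    # Loss 1: the fixed-point residual ||T(z_k) - z_k|| after k iterations.
    zk = run_fixed_point(T, warm_start_net(params, theta), k)
    return jnp.linalg.norm(T(zk) - zk)

def regression_loss(params, T, theta, k, z_star):
    # Loss 2: the distance to a ground-truth solution z_star.
    zk = run_fixed_point(T, warm_start_net(params, theta), k)
    return jnp.linalg.norm(zk - z_star)

# Toy example with a contractive operator T(z) = 0.5 z + c (fixed point 2c);
# the gradient flows end-to-end through the network and the unrolled iterations.
c = jnp.ones(3)
T = lambda z: 0.5 * z + c
theta = jnp.ones(5)
params = (0.1 * jnp.ones((3, 5)), jnp.zeros(3))
grads = jax.grad(residual_loss)(params, T, theta, 10)
```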
An important feature of the architecture is its flexibility: it can predict a warm start for a fixed-point algorithm run for any number of steps, without being limited to the number of steps used during training. The authors provide PAC-Bayes generalization bounds on unseen data for common classes of fixed-point operators: contractive, linearly convergent, and averaged.
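Continuing the hypothetical sketch above, this flexibility comes from the fact that the network's output does not depend on the iteration budget; k_train and k_test below are illustrative names.

```python
# The warm start z0 is independent of how many iterations follow it, so a
# network trained by unrolling k_train steps can be deployed with any
# test-time budget k_test.
k_train, k_test = 10, 100
z0 = warm_start_net(params, theta)
z_deploy = run_fixed_point(T, z0, k_test)  # not limited to k_train
```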
Applying this framework to well-known applications in control, statistics, and signal processing, the authors observe that learned warm starts significantly reduce the number of iterations and the solution time required to solve these problems.