Bandit-Based Multi-Start Strategies for Global Continuous Optimization

P. Guo, M. C. Fu. 2022 Winter Simulation Conference (WSC), 2022. ieeexplore.ieee.org
Global continuous optimization problems are often characterized by the existence of multiple local optima. For minimization problems, to avoid settling in suboptimal local minima, optimization algorithms can start multiple instances of gradient descent in different initial positions, known as a multi-start strategy. One key aspect in a multi-start strategy is the allocation of gradient descent steps as resources to promising instances. We propose new strategies for allocating computational resources, developed for parallel computing but applicable in single-processor optimization. Specifically, we formulate multi-start as a Multi-Armed Bandit (MAB) problem, viewing different instances to be searched as different arms to be pulled. We present reward models that make multi-start compatible with existing MAB and Ranking and Selection (R&S) procedures for allocating gradient descent steps. We conduct simulation experiments on synthetic functions in multiple dimensions and find that our allocation strategies outperform other strategies in the literature for deterministic and stochastic functions.
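As an illustration of the formulation described above, the following is a minimal sketch of a bandit-based multi-start loop. It treats each gradient-descent instance as an arm, selects arms with the standard UCB1 index, and uses the per-step decrease in the objective as the reward. Note that the UCB1 rule and the improvement-based reward here are illustrative assumptions; the paper proposes its own reward models and allocation procedures, which are not reproduced here.

```python
import numpy as np

def multistart_ucb(f, grad, starts, total_steps, lr=0.01, c=2.0):
    """Allocate gradient-descent steps across restarts via UCB1.

    Each start is an 'arm'; pulling an arm runs one descent step on
    that instance, and the reward is the resulting decrease in f.
    (This reward model is an assumption for illustration only.)
    """
    xs = [np.array(x, dtype=float) for x in starts]
    vals = [f(x) for x in xs]
    pulls = np.ones(len(xs))      # start each arm with one virtual pull
    means = np.zeros(len(xs))     # running mean reward per arm
    for t in range(1, total_steps + 1):
        # UCB1 index: exploit high mean improvement, explore rarely-pulled arms
        ucb = means + c * np.sqrt(np.log(t + len(xs)) / pulls)
        i = int(np.argmax(ucb))
        xs[i] = xs[i] - lr * grad(xs[i])   # one gradient-descent step
        new_val = f(xs[i])
        reward = vals[i] - new_val         # objective improvement
        vals[i] = new_val
        means[i] += (reward - means[i]) / (pulls[i] + 1.0)
        pulls[i] += 1.0
    best = int(np.argmin(vals))
    return xs[best], vals[best]
```

On a function with several basins, arms that keep improving attract more descent steps, while stalled arms are revisited only occasionally through the exploration term; the same loop runs unchanged on a single processor or with arms evaluated in parallel.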