Deep Optimisation: Transitioning the Scale of Evolutionary Search by Inducing and Searching in Deep Representations

J Caldwell, J Knowles, C Thies, F Kubacki… - SN Computer Science, 2022 - Springer
Abstract
We investigate the optimisation capabilities of an algorithm inspired by the Evolutionary Transitions in Individuality. In these transitions, the natural evolutionary process is repeatedly rescaled through successive levels of biological organisation. Each transition creates new higher-level evolutionary units that combine multiple units from the level below. We call the algorithm Deep Optimisation (DO) to recognise both its use of deep learning methods and the multi-level rescaling of biological evolutionary processes. The evolutionary model used in DO is a simple hill-climber, but, as higher-level representations are learned, the hill-climbing process is repeatedly rescaled to operate in successively higher-level representations. The transition process is based on a deep neural network (NN), specifically a deep auto-encoder. Our experiments with DO start with a study of the NP-hard multiple knapsack problem (MKP). In comparison with state-of-the-art model-building optimisation algorithms (MBOAs), we show that DO finds better solutions to MKP instances, and does so without using a problem-specific repair operator. A second, much more in-depth investigation uses a class of configurable problems to understand more precisely which problem characteristics DO can handle that other MBOAs cannot. Specifically, we observe a polynomial-versus-exponential scaling distinction in which DO is the only algorithm to show polynomial scaling on all problems. We also demonstrate that some problem characteristics require a deep network in DO. In sum, our findings suggest that the use of deep learning principles has significant untapped potential in combinatorial optimisation. Moreover, we argue that natural evolution could be implementing something like DO, and that the evolutionary transitions in individuality are the observable result.
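The core loop the abstract describes, a low-level hill-climber whose search moves are rescaled by an auto-encoder trained on good solutions, can be sketched as follows. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation: the toy block objective, the single hidden layer (the paper uses a deep auto-encoder), the training scheme, and all hyperparameters are choices made here for brevity.

```python
# Minimal sketch of a Deep Optimisation-style loop (illustrative only).
# Assumptions: a toy modular objective, a one-hidden-layer auto-encoder
# (the paper uses a deep auto-encoder), and arbitrary hyperparameters.
import numpy as np

rng = np.random.default_rng(0)

N_BITS, N_BLOCKS = 20, 5          # 5 building blocks of 4 bits each (assumed toy problem)
BLOCK = N_BITS // N_BLOCKS

def fitness(x):
    """Toy modular objective: a block scores 1 only when all its bits agree."""
    return float(sum(b.min() == b.max() for b in x.reshape(N_BLOCKS, BLOCK)))

def hill_climb(x, propose, steps=300):
    """First-improvement hill-climber; the move operator sets the search scale."""
    fx = fitness(x)
    for _ in range(steps):
        y = propose(x)
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x, fx

def bit_flip(x):
    """Level-0 move: flip a single bit in solution space."""
    y = x.copy()
    y[rng.integers(len(y))] ^= 1
    return y

# Level 0: gather locally optimal solutions with the low-level hill-climber.
data = np.array([hill_climb(rng.integers(0, 2, N_BITS), bit_flip)[0]
                 for _ in range(100)], dtype=float)

# Induce a higher-level representation: train a one-hidden-layer auto-encoder
# on the collected solutions by plain gradient descent on squared error.
H = 8                              # latent width (assumption)
W1 = rng.normal(0, 0.1, (N_BITS, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, N_BITS)); b2 = np.zeros(N_BITS)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(3000):
    h = np.tanh(data @ W1 + b1)                        # encode
    out = sigmoid(h @ W2 + b2)                         # decode
    d2 = (out - data) * out * (1 - out) / len(data)    # backprop through decoder
    d1 = (d2 @ W2.T) * (1 - h ** 2)                    # backprop through encoder
    W2 -= lr * (h.T @ d2);    b2 -= lr * d2.sum(0)
    W1 -= lr * (data.T @ d1); b1 -= lr * d1.sum(0)

def latent_move(x):
    """Level-1 move: perturb one learned variable, then decode back to bits."""
    h = np.tanh(x @ W1 + b1)
    h[rng.integers(H)] += rng.normal(0, 1.0)
    return (sigmoid(h @ W2 + b2) > 0.5).astype(int)

# Level 1: the same hill-climber, rescaled to search in the learned space.
best, f = hill_climb(rng.integers(0, 2, N_BITS), latent_move)
print(f"level-1 best fitness: {f} / {N_BLOCKS}")
```

In the full algorithm this induce-then-search cycle repeats, with each new layer of the auto-encoder supplying successively higher-level moves; the sketch above shows only a single transition.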