Optimizing echo state network through a novel Fisher maximization based stochastic gradient descent

MM Öztürk, İA Cankaya, D İpekçi - Neurocomputing, 2020 - Elsevier
Abstract
Hyperparameter optimization is a challenging process that has the potential to improve machine learning algorithms. Because it imposes a considerable computational burden on machine learning tasks, few works have addressed tuning strategies for a specific algorithm. In this paper, an improved Stochastic Gradient Descent (SGD) based on Fisher Maximization is developed for tuning the hyperparameters of an Echo State Network (ESN), which has a wide range of applications. The results of the method are then compared with those of traditional Gradient Descent and Grid Search. According to the obtained results: 1) the scale of the data sets greatly affects the reliability of hyperparameter optimization results; 2) feature selection is critical for the mean training error when hyperparameter optimization is applied to methods such as ESN; 3) SGD settles into a good local minimum when Fisher Maximization is performed to find a good starting point.
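The abstract does not spell out how the Fisher-maximization step interacts with SGD, so the following is only a minimal sketch of the general idea in Python: score a few candidate hyperparameter settings (spectral radius and leaking rate, two standard ESN hyperparameters) with an empirical-Fisher proxy, namely the squared gradient norm of the training loss, and then run plain SGD from the highest-scoring candidate using finite-difference gradients. All names here (`esn_train_error`, `finite_diff_grad`, `fisher_seeded_sgd`), the choice of proxy, and the toy task are illustrative assumptions, not the authors' method.

```python
import numpy as np

def esn_train_error(spectral_radius, leak_rate, data,
                    n_reservoir=100, washout=50, seed=0):
    """Train a minimal leaky ESN on 1-step-ahead prediction and
    return the mean squared training error for these hyperparameters.
    (Illustrative stand-in for the paper's ESN, not its actual setup.)"""
    rng = np.random.default_rng(seed)
    u, y = data[:-1], data[1:]                        # inputs and targets
    W_in = rng.uniform(-0.5, 0.5, n_reservoir)
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        pre = np.tanh(W_in * u_t + W @ x)
        x = (1.0 - leak_rate) * x + leak_rate * pre   # leaky-integrator update
        states.append(x.copy())
    X, Y = np.array(states)[washout:], y[washout:]
    W_out, *_ = np.linalg.lstsq(X, Y, rcond=None)     # linear readout
    return float(np.mean((X @ W_out - Y) ** 2))

def finite_diff_grad(loss, theta, eps=1e-3):
    """Central-difference gradient of a scalar loss over hyperparameters."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        t_hi, t_lo = theta.copy(), theta.copy()
        t_hi[i] += eps
        t_lo[i] -= eps
        g[i] = (loss(t_hi) - loss(t_lo)) / (2.0 * eps)
    return g

def fisher_seeded_sgd(loss, candidates, lr=0.05, steps=30):
    """Score each candidate start by an empirical-Fisher proxy (squared
    gradient norm) and run plain SGD from the highest-scoring one."""
    theta = max(candidates,
                key=lambda c: float(np.sum(finite_diff_grad(loss, c) ** 2))).copy()
    for _ in range(steps):
        theta -= lr * finite_diff_grad(loss, theta)
        theta = np.clip(theta, [0.05, 0.05], [1.5, 0.99])  # keep values sane
    return theta

# Toy usage: tune spectral radius and leaking rate on a sine series.
data = np.sin(np.linspace(0.0, 60.0, 600))
loss = lambda th: esn_train_error(th[0], th[1], data)
candidates = [np.array([r, a]) for r in (0.5, 0.9, 1.2) for a in (0.2, 0.5, 0.9)]
print(fisher_seeded_sgd(loss, candidates))
```

The squared-gradient proxy is used because, under a Gaussian observation model, it is proportional to the observed Fisher information; starting where this quantity is large corresponds loosely to the paper's finding that Fisher Maximization supplies SGD with a good starting point.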