better performance than the Adam optimizer adopted in neural networks. To support AdaSwarm, we propose a novel Exponentially weighted Momentum Particle Swarm Optimizer (EMPSO). The ability of AdaSwarm to tackle optimization problems is
attributed to its capability to perform good gradient approximations. We show that the gradient of any function, differentiable or not, can be approximated using the parameters …