continuously interpolates between logarithmic, linear, and exponential functions. This
activation function is simple, differentiable, and parameterized so that it can be trained along
with the rest of the network. We hypothesize that soft exponential has the potential to
improve neural network learning, as it can exactly calculate many natural operations that
typical neural networks can only approximate, including addition, multiplication, inner …
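The interpolation described above can be sketched in plain Python. This follows the standard soft exponential parameterization with a scalar parameter α, which reduces to ln(x) at α = −1, the identity at α = 0, and eˣ at α = 1; the function name and scalar form here are illustrative, not taken from the excerpt:

```python
import math

def soft_exponential(alpha, x):
    """Soft exponential activation: continuously interpolates between
    logarithmic (alpha < 0), linear (alpha = 0), and exponential
    (alpha > 0) behavior."""
    if alpha < 0:
        # Logarithmic regime; at alpha = -1 this is ln(x).
        return -math.log(1 - alpha * (x + alpha)) / alpha
    if alpha == 0:
        # Linear regime: the identity function.
        return x
    # Exponential regime; at alpha = 1 this is exp(x).
    return (math.exp(alpha * x) - 1) / alpha + alpha
```

Because α appears as an ordinary differentiable parameter, it can be learned by gradient descent alongside the network's weights, which is what allows a single unit to move smoothly between the three regimes during training.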