Recent advances in explainable artificial intelligence (XAI), the field concerned with turning deep network architectures from black boxes into comprehensible structures, have made it possible to understand what goes on inside a network as it predicts an output. Many researchers have successfully exposed the ‘thought process’ behind a network’s decision making. However, this rich and interesting information has not been utilized beyond visualization once training finishes. In this work, a novel idea is proposed: to use this insight into the network as a training parameter. Layer-wise Relevance Propagation (LRP), which quantifies the contribution of each neuron to the output of the whole network, is used as a parameter, alongside the learning rate and network weights, to optimize training. Several intuitive formulations are proposed, and the results of experiments on the MNIST and CIFAR-10 datasets are reported in this paper. Our proposed methodologies show better or comparable performance against conventional optimization algorithms. This opens a new dimension of research: exploring the use of XAI in optimizing the training of neural networks.
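To make the mechanism referenced above concrete, the following is a minimal sketch of one LRP backward step, using the commonly cited epsilon rule for a fully connected layer. The function name `lrp_epsilon` and the toy two-layer network are illustrative assumptions, not the paper's implementation; they only show how relevance scores per neuron can be obtained for later use as a training signal.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute the output relevance R_out of one linear layer
    (z = a @ W + b) back onto its input neurons via the LRP-epsilon rule:
    R_i = a_i * sum_j w_ij * R_j / (z_j + eps * sign(z_j))."""
    z = a @ W + b                                    # forward pre-activations
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized ratio
    return a * (W @ s)                               # relevance per input neuron

# Toy two-layer network: propagate relevance from the output to the input.
rng = np.random.default_rng(0)
a0 = rng.random(4)                                   # input activations
W1, b1 = rng.standard_normal((4, 3)), np.zeros(3)
a1 = np.maximum(a0 @ W1 + b1, 0)                     # ReLU hidden activations
W2, b2 = rng.standard_normal((3, 2)), np.zeros(2)
out = a1 @ W2 + b2                                   # network output scores

R2 = out                                             # start: relevance = output
R1 = lrp_epsilon(a1, W2, b2, R2)                     # hidden-layer relevance
R0 = lrp_epsilon(a0, W1, b1, R1)                     # input-layer relevance
```

With zero biases and a small epsilon, the total relevance is approximately conserved from layer to layer (the sums of `R2`, `R1`, and `R0` nearly coincide), which is the property that makes per-neuron relevance a meaningful quantity to feed back into training.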