With technology advancing at such a fast pace, nearly every piece of information is available on the Internet. Yet certain sections of society, such as deaf-mute persons, still struggle for their communication rights. They communicate with one another through facial expressions, body movements (kinesics), and gestures. Moreover, owing to the paucity of linguistically annotated and documented material on Sign Language, research on the grammatical and phonetic aspects of the language is limited. Despite the well-established role of Sign Language, the use of technology to assist deaf persons remains under-explored in research studies. Consequently, the design of an efficient and automatic Indian Sign Language Translation System (ISLTS) is essential. This paper proposes a deep learning-based seven-layered two-dimensional Convolutional Neural Network (2D-CNN) with the swish activation function for efficient translation of Indian Sign Language digits. The proposed framework uses max pooling, batch normalization, dropout regularization, and the Adam optimizer. An open-access customized numeric dataset of Indian Sign Language comprising approximately 12K images has been utilized. Our model achieves a highest validation accuracy of 99.55% and an average validation accuracy of 99.22%. The results show that the swish activation function outperforms the traditional ReLU and Leaky ReLU activation functions.
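For reference, the swish activation that the abstract compares against ReLU and Leaky ReLU can be sketched in NumPy as below. This is an illustrative definition only (using the common default of β = 1 and a Leaky ReLU slope of 0.01), not the paper's exact implementation:

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x).
    Smooth and non-monotonic, unlike ReLU."""
    return x * (1.0 / (1.0 + np.exp(-beta * x)))

def relu(x):
    """Standard ReLU: max(0, x)."""
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: small negative slope alpha for x < 0."""
    return np.where(x > 0, x, alpha * x)

# Compare the three activations on a few sample inputs.
x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print("swish     :", np.round(swish(x), 4))
print("relu      :", relu(x))
print("leaky_relu:", leaky_relu(x))
```

Unlike ReLU, swish passes small negative values through with a small negative output and approaches the identity for large positive inputs, which is often cited as a reason it can train deeper networks more effectively.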