With the growing availability of affordable data services and the ubiquity of social media, few aspects of daily life remain untouched by 'cyber,' i.e., electronic, technology. This brings a range of challenges, the most sensitive of which is cyberbullying: 'abusive,' 'offensive,' 'inappropriate,' and 'toxic' comments posted on online platforms. Fearing online abuse and bullying, many people give up on engaging with differing opinions and stop expressing themselves altogether. Platforms such as Quora, Wikipedia, Twitter, and Facebook have become part and parcel of everyday life, yet they struggle to facilitate conversations effectively, leading many communities to limit or shut down user comments. Toxic comments fuel online harassment, bullying, and personal attacks, and the toxic comment classification problem has therefore attracted the attention of many organizations over the past few years. In this paper, we present a hybrid Deep Learning model that detects such toxic comments and classifies them according to the type of toxicity. Our best model achieves an accuracy of 98.39% and an F1 score of 79.91%.
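The abstract does not spell out the hybrid architecture itself, so the following is a minimal illustrative sketch only, assuming a common CNN + BiLSTM hybrid for multi-label toxic comment classification in Keras. The six-label taxonomy (toxic, severe_toxic, obscene, threat, insult, identity_hate), the vocabulary size, sequence length, and layer widths are placeholder assumptions, not the configuration reported in this paper.

```python
# Illustrative sketch (not the paper's reported model): a hybrid CNN + BiLSTM
# multi-label classifier for toxic comments, assuming pre-tokenized integer
# sequences and the six-label Jigsaw-style toxicity taxonomy.
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN = 200        # assumed maximum comment length in tokens
VOCAB_SIZE = 20000   # assumed vocabulary size
NUM_LABELS = 6       # e.g. toxic, severe_toxic, obscene, threat, insult, identity_hate

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, 128)(inputs)
# Convolutional branch extracts local n-gram features from the embeddings.
x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(x)
x = layers.MaxPooling1D(pool_size=2)(x)
# Recurrent branch models longer-range context over the pooled features.
x = layers.Bidirectional(layers.LSTM(64))(x)
x = layers.Dropout(0.5)(x)
# Sigmoid outputs: a single comment can carry several toxicity types at once.
outputs = layers.Dense(NUM_LABELS, activation="sigmoid")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Because the task is multi-label rather than multi-class, each output unit uses an independent sigmoid with binary cross-entropy loss, so a comment can be flagged as, say, both "obscene" and "insult" at the same time.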