Deep neural networks (DNNs), while powerful, often suffer from a lack of interpretability and from vulnerability to adversarial attacks. Concept bottleneck models (CBMs), which incorporate …
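The snippet above is cut off, but for orientation, the generic concept-bottleneck structure it refers to (predict human-interpretable concepts first, then predict the label from those concepts alone) can be sketched as follows. The module names, dimensions, and loss weighting are illustrative assumptions, not taken from the cited paper.

```python
# Minimal sketch of a generic concept bottleneck model (CBM).
# All dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptBottleneckModel(nn.Module):
    def __init__(self, in_dim=2048, n_concepts=112, n_classes=200):
        super().__init__()
        # x -> c: predict human-interpretable concepts from input features
        self.concept_predictor = nn.Linear(in_dim, n_concepts)
        # c -> y: predict the task label from the concepts only
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concept_logits = self.concept_predictor(x)
        concepts = torch.sigmoid(concept_logits)       # concept activations in [0, 1]
        class_logits = self.label_predictor(concepts)  # label depends only on the concepts
        return concept_logits, class_logits

def cbm_joint_loss(concept_logits, class_logits, c_true, y_true, lam=1.0):
    """Joint training objective: task supervision plus concept supervision."""
    concept_loss = F.binary_cross_entropy_with_logits(concept_logits, c_true)
    task_loss = F.cross_entropy(class_logits, y_true)
    return task_loss + lam * concept_loss
```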
In this paper, we propose an adversarial training method that leverages the underlying structure of adversarial perturbation distributions. Unlike …
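The distribution-aware method itself is not visible in the truncated snippet, so the sketch below shows only the standard PGD adversarial-training loop (inner maximization over perturbations, outer minimization over the model weights) that such methods typically build on; `model`, `loader`, and the hyperparameters are assumed placeholders, not details from the cited paper.

```python
# Baseline PGD adversarial training (Madry et al.); NOT the distribution-aware
# method proposed in the snippet above. `model` and `loader` are assumed to exist.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent inside an L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()            # ascent step on the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)                                # keep a valid input
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)             # inner maximization
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)     # outer minimization on adversarial examples
        loss.backward()
        optimizer.step()
```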
B Rasheed, A Khan - Russian Law Journal, 2023 - cyberleninka.ru
Deep learning models have been found to be susceptible to adversarial attacks, which limits their use in security-sensitive applications. One way to enhance the resilience of these …
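The snippet breaks off before describing the resilience technique, so as context the sketch below only illustrates how a basic adversarial example is generated with the fast gradient sign method (FGSM), a single-step variant of the PGD sketch above; the classifier interface and the epsilon budget are assumptions.

```python
# FGSM adversarial example, illustrating the kind of attack the snippet refers to.
# `model` is assumed to map inputs in [0, 1] to class logits.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8/255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    # One signed-gradient step maximizes the loss under an L-infinity budget eps.
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```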
M Shah, K Gandhi, S Joshi, MD Nagar, V Patel… - … on Advanced Computing …, 2023 - Springer
Capsule Networks (CapsNets) have gained significant attention in recent years due to their potential for improved representation learning and robustness. However, their …
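The truncated snippet names CapsNets without detail; as a reference point, here is a minimal sketch of the two standard CapsNet ingredients, the squash nonlinearity and routing-by-agreement (Sabour et al., 2017). The tensor shapes are illustrative assumptions and are not taken from the cited paper.

```python
# Squash nonlinearity and dynamic routing-by-agreement, sketched after
# Sabour et al. (2017); shapes are illustrative assumptions.
import torch

def squash(s, dim=-1, eps=1e-8):
    """Shrink vector length into [0, 1) while preserving direction."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    """u_hat: prediction vectors of shape (batch, n_in, n_out, dim_out)."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)    # routing logits
    for _ in range(n_iters):
        c = torch.softmax(b, dim=2)                           # coupling coefficients over output capsules
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)              # weighted sum over input capsules
        v = squash(s)                                         # output capsules: (batch, n_out, dim_out)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)          # agreement updates the routing logits
    return v
```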