Exploring misclassifications of robust neural networks to enhance adversarial attacks

L Schwinn, R Raab, A Nguyen, D Zanca, B Eskofier - Applied Intelligence, 2023 - Springer
Progress in making neural networks more robust against adversarial attacks is mostly
marginal, despite the great efforts of the research community. Moreover, the robustness …

One less reason for filter pruning: Gaining free adversarial robustness with structured grouped kernel pruning

SH Zhong, Z You, J Zhang, S Zhao… - Advances in …, 2023 - proceedings.neurips.cc
Densely structured pruning methods utilizing simple pruning heuristics can deliver
immediate compression and acceleration benefits with acceptable benign performances …

From Attack to Defense: Insights into Deep Learning Security Measures in Black-Box Settings

F Juraev, M Abuhamad, E Chan-Tin… - arXiv preprint arXiv …, 2024 - arxiv.org
Deep Learning (DL) is rapidly maturing to the point that it can be used in safety- and
security-crucial applications. However, adversarial samples, which are undetectable to the human …

Detection, Quantification, and Mitigation of Robustness Vulnerabilities in Deep Neural Networks

L Schwinn - 2023 - search.proquest.com
Machine learning (ML) has made enormous progress in the last two decades.
Specifically, Deep Neural Networks (DNNs) have led to several breakthroughs. The …

Detektion, Quantifikation und Mitigation von Robustheitsanfälligkeiten in Tiefen Neuronalen Netzen [Detection, Quantification, and Mitigation of Robustness Vulnerabilities in Deep Neural Networks]

L Schwinn - 2023 - Dissertation, Erlangen, Friedrich …