Adversarial attacks on genotype sequences

DM Montserrat, AG Ioannidis - ICASSP 2023-2023 IEEE …, 2023 - ieeexplore.ieee.org
Adversarial attacks can drastically change the output of a method by small alterations to its
input. While this can be a useful framework to analyze worst-case robustness, it can also be …
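
The snippet above describes the defining property of an adversarial attack: a small alteration of the input that drastically changes a model's output. As a purely illustrative aside (not the method of the paper above), the following minimal sketch shows an FGSM-style sign-of-gradient perturbation; the model and the step size epsilon are placeholders.

```python
# Hedged sketch: FGSM-style adversarial perturbation (illustrative only,
# not the method of the paper above). Model and epsilon are placeholders.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.01) -> torch.Tensor:
    """Return x plus a small sign-of-gradient perturbation that tends to
    increase the classification loss, pushing the prediction away from y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Small, bounded alteration of the input: epsilon * sign of the gradient.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```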

Mean Aggregator Is More Robust Than Robust Aggregators Under Label Poisoning Attacks

J Peng, W Li, Q Ling - arXiv preprint arXiv:2404.13647, 2024 - arxiv.org
Robustness to malicious attacks is of paramount importance for distributed learning. Existing
works often consider the classical Byzantine attack model, which assumes that some …
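
To make the two aggregator families mentioned in this entry concrete, here is a minimal sketch (assuming NumPy and toy worker gradients) contrasting a plain mean aggregator with a coordinate-wise trimmed mean, a common robust aggregator in Byzantine-robust distributed learning. It only illustrates the aggregators themselves; it does not reproduce the paper's analysis or its finding about label poisoning.

```python
# Hedged sketch: mean vs. coordinate-wise trimmed-mean aggregation of worker
# gradients. Illustrative only; does not implement the paper's analysis.
import numpy as np

def mean_aggregate(grads: np.ndarray) -> np.ndarray:
    """grads: (num_workers, dim) array of worker gradients."""
    return grads.mean(axis=0)

def trimmed_mean_aggregate(grads: np.ndarray, trim: int = 1) -> np.ndarray:
    """Drop the `trim` largest and smallest values per coordinate, then average."""
    sorted_grads = np.sort(grads, axis=0)
    return sorted_grads[trim:-trim].mean(axis=0)

# Toy example: 5 honest workers plus 1 worker sending a corrupted gradient.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(5, 3))
corrupted = np.full((1, 3), 100.0)
grads = np.vstack([honest, corrupted])
print(mean_aggregate(grads))          # pulled far from the honest average
print(trimmed_mean_aggregate(grads))  # stays close to the honest average of ~1.0
```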

Defense Frameworks Against Adversarial Attacks on Deep Learning Models

Z He - 2024 - search.proquest.com
Deep learning has made remarkable progress over the past decade across various fields,
such as Computer Vision, Natural Language Processing (NLP), and Speech Recognition …