Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …
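
To make the attack family these surveys cover concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the canonical attacks they discuss. The model and inputs are placeholders; `epsilon` is the perturbation budget.

```python
# Minimal FGSM sketch: perturb the input one signed-gradient step in the
# direction that increases the loss, then clamp back to valid pixel range.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```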

Developing future human-centered smart cities: Critical analysis of smart city security, data management, and ethical challenges

K Ahmad, M Maabreh, M Ghaly, K Khan, J Qadir… - Computer Science Review, 2022 - Elsevier
As the globally increasing population drives rapid urbanization in various parts of the world,
there is a great need to deliberate on the future of cities worth living in. In particular, as …

The false promise of imitating proprietary LLMs

A Gudibande, E Wallace, C Snell, X Geng, H Liu… - arXiv preprint arXiv …, 2023 - arxiv.org
An emerging method to cheaply improve a weaker language model is to finetune it on
outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self …
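
A minimal sketch of the imitation fine-tuning this snippet describes: a weaker causal LM is trained on (prompt, response) pairs collected from a stronger model. The `imitation_pairs` data and the choice of gpt2 as the student are placeholder assumptions; a real pipeline would mask prompt tokens and train for many epochs.

```python
# Imitation fine-tuning sketch: train a student LM with the standard
# language-modeling loss on text produced by a stronger teacher model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
student = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)

# Hypothetical pairs harvested from the stronger model.
imitation_pairs = [("Explain overfitting.", "Overfitting is when a model ...")]

student.train()
for prompt, response in imitation_pairs:
    batch = tokenizer(prompt + " " + response, return_tensors="pt")
    loss = student(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```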

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

A survey of privacy attacks in machine learning

M Rigaki, S Garcia - ACM Computing Surveys, 2023 - dl.acm.org
As machine learning becomes more widely used, the need to study its implications in
security and privacy becomes more urgent. Although the body of work in privacy has been …
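
Among the privacy attacks such surveys taxonomize, membership inference is perhaps the simplest baseline. Below is a sketch of a loss-threshold attack (in the style of Yeom et al., which this literature commonly cites): samples whose loss under the target model falls below a threshold are guessed to be training members. The threshold calibration is an assumption of the attacker's side knowledge.

```python
# Loss-threshold membership-inference sketch: low loss on a sample is taken
# as evidence that the model was trained on it.
import torch
import torch.nn as nn

def infer_membership(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                     threshold: float) -> torch.Tensor:
    """Boolean 'member' guess per sample; `threshold` would typically be
    calibrated on data the attacker knows to be non-members."""
    with torch.no_grad():
        losses = nn.functional.cross_entropy(model(x), y, reduction="none")
    return losses < threshold
```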

APMSA: Adversarial perturbation against model stealing attacks

J Zhang, S Peng, Y Gao, Z Zhang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Training a Deep Learning (DL) model requires proprietary data and compute-intensive
resources. To recoup their training costs, a model provider can monetize DL models through …
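
As a rough illustration of the defense direction this paper pursues (not its exact construction), an output-perturbation defense adds noise to the probability vector returned by the API while keeping the top-1 label intact, so benign clients see the same prediction but the scores leak less to a model thief.

```python
# Simplified output-perturbation sketch in the spirit of anti-stealing
# defenses; this is an illustration, not the APMSA algorithm itself.
import torch

def perturb_posteriors(probs: torch.Tensor, noise_scale: float = 0.1) -> torch.Tensor:
    top1 = probs.argmax(dim=-1)
    noisy = (probs + noise_scale * torch.rand_like(probs)).clamp_min(1e-8)
    noisy = noisy / noisy.sum(dim=-1, keepdim=True)
    # If the noise flipped the predicted class, fall back to the original row.
    flipped = noisy.argmax(dim=-1) != top1
    noisy[flipped] = probs[flipped]
    return noisy
```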

Towards data-free model stealing in a hard label setting

S Sanyal, S Addepalli, RV Babu - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022 - openaccess.thecvf.com
Machine learning models deployed as a service (MLaaS) are susceptible to model
stealing attacks, where an adversary attempts to steal the model within a restricted access …
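
A minimal sketch of the hard-label threat model this paper studies: the attacker sees only the victim's argmax label and trains a surrogate on those labels. Here `victim` is a black-box callable and the queries are random tensors; the paper's contribution is synthesizing informative queries when no real data is available.

```python
# Hard-label model-stealing sketch: fit a surrogate to the victim's top-1
# labels using only black-box query access.
import torch
import torch.nn as nn

def steal_hard_label(victim, surrogate: nn.Module, steps: int = 1000) -> nn.Module:
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    for _ in range(steps):
        x = torch.rand(64, 3, 32, 32)        # attacker-chosen queries
        with torch.no_grad():
            y = victim(x).argmax(dim=-1)     # hard labels only
        loss = nn.functional.cross_entropy(surrogate(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return surrogate
```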

Stealing links from graph neural networks

X He, J Jia, M Backes, NZ Gong, Y Zhang - 30th USENIX Security Symposium, 2021 - usenix.org
Graph data, such as chemical networks and social networks, may be deemed
confidential/private because the data owner often spends substantial resources collecting the …
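
The core intuition behind link stealing can be sketched directly: connected nodes tend to receive similar posteriors from a GNN, so high posterior similarity is taken as evidence of an edge. The threshold here is a hypothetical attacker calibration; the paper also trains learned attack models on distance features.

```python
# Link-stealing sketch: guess that an edge exists between two nodes when
# their GNN output distributions (posteriors) are sufficiently similar.
import torch

def guess_link(post_u: torch.Tensor, post_v: torch.Tensor,
               threshold: float = 0.9) -> bool:
    sim = torch.nn.functional.cosine_similarity(post_u, post_v, dim=0)
    return bool(sim > threshold)
```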

MAZE: Data-free model stealing attack using zeroth-order gradient estimation

S Kariyappa, A Prakash… - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021 - openaccess.thecvf.com
High-quality Machine Learning (ML) models are often considered valuable
intellectual property by companies. Model Stealing (MS) attacks allow an adversary with …
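
The zeroth-order gradient estimation named in the title is the ingredient that lets MAZE optimize through a black-box victim: the gradient of a scalar objective is approximated from finite differences along random directions, with no backpropagation through the victim. A generic sketch of the estimator:

```python
# Zeroth-order gradient estimation sketch: approximate the gradient of a
# black-box scalar function `f` via forward differences in random directions.
import torch

def zo_gradient(f, x: torch.Tensor, num_dirs: int = 20,
                delta: float = 1e-3) -> torch.Tensor:
    grad = torch.zeros_like(x)
    for _ in range(num_dirs):
        u = torch.randn_like(x)
        u = u / u.norm()
        # Directional derivative along u, estimated without any backprop.
        grad += (f(x + delta * u) - f(x)) / delta * u
    return grad / num_dirs
```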

Thieves on Sesame Street! Model extraction of BERT-based APIs

K Krishna, GS Tomar, AP Parikh, N Papernot… - arXiv preprint arXiv …, 2019 - arxiv.org
We study the problem of model extraction in natural language processing, in which an
adversary with only query access to a victim model attempts to reconstruct a local copy of …
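
A minimal sketch of the query-and-finetune extraction loop described here: send inputs to the victim API, record its labels, and fine-tune a local BERT copy on them. The `query_victim_api` function, its dummy label, and the query strings are hypothetical placeholders; the paper notes that even nonsensical query text can suffice.

```python
# BERT API extraction sketch: fine-tune a local classifier on labels
# returned by a black-box victim API.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
local_copy = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(local_copy.parameters(), lr=2e-5)

def query_victim_api(text: str) -> int:
    # Placeholder for the real black-box call; returns a dummy label here.
    return 1

queries = ["the movie was thoughtful and sharp"]  # attacker-chosen inputs
local_copy.train()
for text in queries:
    label = torch.tensor([query_victim_api(text)])
    batch = tokenizer(text, return_tensors="pt")
    loss = local_copy(**batch, labels=label).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```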