Developing future human-centered smart cities: Critical analysis of smart city security, Data management, and Ethical challenges

K Ahmad, M Maabreh, M Ghaly, K Khan, J Qadir… - Computer Science …, 2022 - Elsevier
As the globally increasing population drives rapid urbanization in various parts of the world,
there is a great need to deliberate on the future of cities worth living in. In particular, as …

Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI

AB Arrieta, N Díaz-Rodríguez, J Del Ser, A Bennetot… - Information fusion, 2020 - Elsevier
In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if
harnessed appropriately, may deliver the best of expectations over many application sectors …

The false promise of imitating proprietary LLMs

A Gudibande, E Wallace, C Snell, X Geng, H Liu… - arXiv preprint arXiv …, 2023 - arxiv.org
An emerging method to cheaply improve a weaker language model is to finetune it on
outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self …

Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities

W Saeed, C Omlin - Knowledge-Based Systems, 2023 - Elsevier
The past decade has seen significant progress in artificial intelligence (AI), which has
resulted in algorithms being adopted for resolving a variety of problems. However, this …

Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment

Y Liu, Y Yao, JF Ton, X Zhang, RGH Cheng… - arXiv preprint arXiv …, 2023 - arxiv.org
Ensuring alignment, which refers to making models behave in accordance with human
intentions [1, 2], has become a critical task before deploying large language models (LLMs) …

Protecting language generation models via invisible watermarking

X Zhao, YX Wang, L Li - International Conference on …, 2023 - proceedings.mlr.press
Language generation models have been an increasingly powerful enabler to many
applications. Many such models offer free or affordable API access which makes them …

A survey of privacy attacks in machine learning

M Rigaki, S Garcia - ACM Computing Surveys, 2023 - dl.acm.org
As machine learning becomes more widely used, the need to study its implications in
security and privacy becomes more urgent. Although the body of work in privacy has been …

BppAttack: Stealthy and efficient trojan attacks against deep neural networks via image quantization and contrastive adversarial learning

Z Wang, J Zhai, S Ma - … of the IEEE/CVF Conference on …, 2022 - openaccess.thecvf.com
Deep neural networks are vulnerable to Trojan attacks. Existing attacks use visible patterns
(e.g., a patch or image transformations) as triggers, which are vulnerable to human …

APMSA: Adversarial perturbation against model stealing attacks

J Zhang, S Peng, Y Gao, Z Zhang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Training a Deep Learning (DL) model requires proprietary data and computing-intensive
resources. To recoup their training costs, a model provider can monetize DL models through …

High accuracy and high fidelity extraction of neural networks

M Jagielski, N Carlini, D Berthelot, A Kurakin… - 29th USENIX security …, 2020 - usenix.org
In a model extraction attack, an adversary steals a copy of a remotely deployed machine
learning model, given oracle prediction access. We taxonomize model extraction attacks …