Machine learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence

S Raschka, J Patterson, C Nolet - Information, 2020 - mdpi.com
Smarter applications are making better use of the insights gleaned from data, having an
impact on every industry and research discipline. At the core of this revolution lie the tools …

Deep convolutional neural networks for image classification: A comprehensive review

W Rawat, Z Wang - Neural computation, 2017 - ieeexplore.ieee.org
Convolutional neural networks (CNNs) have been applied to visual tasks since the late
1980s. However, despite a few scattered applications, they were dormant until the mid …

Glaze: Protecting artists from style mimicry by text-to-image models

S Shan, J Cryan, E Wenger, H Zheng… - 32nd USENIX Security …, 2023 - usenix.org
Recent text-to-image diffusion models such as MidJourney and Stable Diffusion threaten to
displace many in the professional artist community. In particular, models can learn to mimic …

On adaptive attacks to adversarial example defenses

F Tramer, N Carlini, W Brendel… - Advances in neural …, 2020 - proceedings.neurips.cc
Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to
adversarial examples. We find, however, that typical adaptive evaluations are incomplete …

Explainable deep learning: A field guide for the uninitiated

G Ras, N Xie, M Van Gerven, D Doran - Journal of Artificial Intelligence …, 2022 - jair.org
Deep neural networks (DNNs) are an indispensable machine learning tool despite the
difficulty of diagnosing what aspects of a model's input drive its decisions. In countless real …

Hidden trigger backdoor attacks

A Saha, A Subramanya, H Pirsiavash - Proceedings of the AAAI …, 2020 - ojs.aaai.org
With the success of deep learning algorithms in various domains, studying adversarial
attacks to secure deep models in real-world applications has become an important research …

An abstract domain for certifying neural networks

G Singh, T Gehr, M Püschel, M Vechev - Proceedings of the ACM on …, 2019 - dl.acm.org
We present a novel method for scalable and precise certification of deep neural networks.
The key technical insight behind our approach is a new abstract domain which combines …

Frequency-driven imperceptible adversarial attack on semantic similarity

C Luo, Q Lin, W Xie, B Wu, J Xie… - Proceedings of the …, 2022 - openaccess.thecvf.com
Current adversarial attack research reveals the vulnerability of learning-based classifiers
against carefully crafted perturbations. However, most existing attack methods have inherent …

Adversarial examples: Attacks and defenses for deep learning

X Yuan, P He, Q Zhu, X Li - IEEE transactions on neural …, 2019 - ieeexplore.ieee.org
With rapid progress and significant successes in a wide spectrum of applications, deep
learning is being applied in many safety-critical environments. However, deep neural …
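For readers who want a concrete picture of the attacks such surveys catalogue, the sketch below implements the fast gradient sign method (FGSM), one of the simplest one-step perturbation attacks. It is a generic PyTorch illustration, not code from the paper; the trained model, the inputs x, the labels y, and the perturbation budget epsilon are all assumed.

# Illustrative sketch only: classic FGSM, x_adv = clip(x + epsilon * sign(grad_x loss)).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Make a leaf copy of the inputs so we can take gradients with respect to them.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels in [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

A typical evaluation then compares model(x_adv).argmax(dim=1) against y to measure how many predictions the perturbation flips.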

AI²: Safety and robustness certification of neural networks with abstract interpretation

T Gehr, M Mirman, D Drachsler-Cohen… - … IEEE symposium on …, 2018 - ieeexplore.ieee.org
We present AI², the first sound and scalable analyzer for deep neural networks. Based on
overapproximation, AI² can automatically prove safety properties (e.g., robustness) of …
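As a rough illustration of certification by overapproximation (using a plain interval abstraction, not the richer abstract domains of AI² or the Singh et al. work above), the sketch below propagates interval bounds through a toy two-layer ReLU network with made-up weights and checks whether class 0 provably beats class 1 for every input in a small L-infinity ball.

# Illustrative sketch only: interval bound propagation through a tiny ReLU network.
import numpy as np

def interval_affine(lo, hi, W, b):
    # Interval arithmetic for x -> W @ x + b: positive weights map lower bounds to
    # lower bounds, negative weights swap them.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so bounds pass through elementwise.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Made-up weights and input; certify an L-infinity ball of radius eps around x0.
W1, b1 = np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([0.1, -0.2])
W2, b2 = np.array([[0.7, 0.2], [-0.4, 0.9]]), np.array([0.0, 0.0])
x0, eps = np.array([0.5, 0.5]), 0.05

lo, hi = x0 - eps, x0 + eps
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
# Sound but conservative: True proves class 0 wins on the whole ball; False is inconclusive.
print("robust:", lo[0] > hi[1])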