Accelerated sparse neural training: A provable and efficient method to find N:M transposable masks

I Hubara, B Chmiel, M Island… - Advances in neural …, 2021 - proceedings.neurips.cc
Unstructured pruning reduces the memory footprint in deep neural networks (DNNs).
Recently, researchers proposed different types of structural pruning intending to also reduce …

Soft masking for cost-constrained channel pruning

R Humble, M Shen, JA Latorre, E Darve… - European Conference on …, 2022 - Springer
Structured channel pruning has been shown to significantly accelerate inference time for
convolutional neural networks (CNNs) on modern hardware, with a relatively minor loss of …

Membership inference via backdooring

H Hu, Z Salcic, G Dobbie, J Chen, L Sun… - arXiv preprint arXiv …, 2022 - arxiv.org
Recently issued data privacy regulations like GDPR (General Data Protection Regulation)
grant individuals the right to be forgotten. In the context of machine learning, this requires a …

Optimal fine-grained N:M sparsity for activations and neural gradients

B Chmiel, I Hubara, R Banner, D Soudry - arXiv preprint arXiv:2203.10991, 2022 - arxiv.org
In deep learning, fine-grained N:M sparsity reduces the data footprint and bandwidth of a
General Matrix multiply (GEMM) by 2×, and doubles throughput by skipping computation of …

Data isotopes for data provenance in DNNs

E Wenger, X Li, BY Zhao, V Shmatikov - arXiv preprint arXiv:2208.13893, 2022 - arxiv.org
Today, creators of data-hungry deep neural networks (DNNs) scour the Internet for training
fodder, leaving users with little control over or knowledge of when their data is appropriated …

Resource Efficient Deep Learning Hardware Watermarks with Signature Alignment

J Clements, Y Lao - Proceedings of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
Deep learning intellectual properties (IPs) are high-value assets that are frequently
susceptible to theft. This vulnerability has led to significant interest in defending the field's …

Minimum variance unbiased N:M sparsity for the neural gradients

B Chmiel, I Hubara, R Banner… - The Eleventh International …, 2023 - openreview.net
In deep learning, fine-grained N:M sparsity reduces the data footprint and bandwidth of a
General Matrix multiply (GEMM) by up to 2×, and doubles throughput by skipping computation …

Reclaiming Data Agency in the Age of Ubiquitous Machine Learning

EJ Wenger - 2023 - search.proquest.com
As machine learning (ML) models have grown in size and scope in recent years, so has the
amount of data needed to train them. Unfortunately, individuals whose data is used in large …

Anomaly Detection in X-Ray Physics

RA Humble - 2023 - search.proquest.com
Anomaly detection is an important task for complex systems (e.g., industrial facilities,
manufacturing, large-scale science experiments), where failures in a sub-system can lead to …

On Identifying and Mitigating Against Vulnerabilities of Machine Learning Models

H Hu - 2022 - researchspace.auckland.ac.nz
During the last decade, machine learning (ML) has achieved tremendous results in many
fields, from traditional learning tasks like image recognition to advanced applications such …