Information leakage in embedding models

C Song, A Raghunathan - Proceedings of the 2020 ACM SIGSAC …, 2020 - dl.acm.org
Embeddings are functions that map raw input data to low-dimensional vector
representations, while preserving important semantic information about the inputs. Pre …

Minerva: Enabling low-power, highly-accurate deep neural network accelerators

B Reagen, P Whatmough, R Adolf, S Rama… - ACM SIGARCH …, 2016 - dl.acm.org
The continued success of Deep Neural Networks (DNNs) in classification tasks has sparked
a trend of accelerating their execution with specialized hardware. While published designs …

Exponential expressivity in deep neural networks through transient chaos

B Poole, S Lahiri, M Raghu… - Advances in neural …, 2016 - proceedings.neurips.cc
We combine Riemannian geometry with the mean field theory of high dimensional chaos to
study the nature of signal propagation in deep neural networks with random weights. Our …

Scale-sim: Systolic CNN accelerator simulator

A Samajdar, Y Zhu, P Whatmough, M Mattina… - arXiv preprint arXiv …, 2018 - arxiv.org
Systolic Arrays are one of the most popular compute substrates within Deep Learning
accelerators today, as they provide extremely high efficiency for running dense matrix …

Tactics of adversarial attack on deep reinforcement learning agents

YC Lin, ZW Hong, YH Liao, ML Shih, MY Liu… - arXiv preprint arXiv …, 2017 - arxiv.org
We introduce two tactics to attack agents trained by deep reinforcement learning algorithms
using adversarial examples, namely the strategically-timed attack and the enchanting attack …

Phased LSTM: Accelerating recurrent network training for long or event-based sequences

D Neil, M Pfeiffer, SC Liu - Advances in neural information …, 2016 - proceedings.neurips.cc
Abstract Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for
extracting patterns from temporal sequences. Current RNN models are ill-suited to process …

Energy-fluctuated multiscale feature learning with deep convnet for intelligent spindle bearing fault diagnosis

X Ding, Q He - IEEE Transactions on Instrumentation and …, 2017 - ieeexplore.ieee.org
Considering various health conditions under varying operational conditions, mining
sensitive features from the measured signals is still a great challenge for intelligent fault …

Understanding the disharmony between dropout and batch normalization by variance shift

X Li, S Chen, X Hu, J Yang - … of the IEEE/CVF conference on …, 2019 - openaccess.thecvf.com
This paper first answers the question "why do the two most powerful techniques, Dropout and
Batch Normalization (BN), often lead to worse performance when they are combined …

EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding

Y Miao, M Gowayyed, F Metze - 2015 IEEE workshop on …, 2015 - ieeexplore.ieee.org
The performance of automatic speech recognition (ASR) has improved tremendously due to
the application of deep neural networks (DNNs). Despite this progress, building a new ASR …

Stripes: Bit-serial deep neural network computing

P Judd, J Albericio, T Hetherington… - 2016 49th Annual …, 2016 - ieeexplore.ieee.org
Motivated by the variance in the numerical precision requirements of Deep Neural Networks
(DNNs) [1], [2], Stripes (STR), a hardware accelerator, is presented whose execution time …