A survey on modern trainable activation functions

A Apicella, F Donnarumma, F Isgrò, R Prevete - Neural Networks, 2021 - Elsevier
In neural networks literature, there is a strong interest in identifying and defining activation
functions which can improve neural network performance. In recent years there has been a …

A critique of pure learning and what artificial neural networks can learn from animal brains

AM Zador - Nature communications, 2019 - nature.com
Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better
supervised learning algorithms. However, in stark contrast to young animals (including …

On feature decorrelation in self-supervised learning

T Hua, W Wang, Z Xue, S Ren… - Proceedings of the …, 2021 - openaccess.thecvf.com
In self-supervised representation learning, a common idea behind most of the state-of-the-
art approaches is to enforce the robustness of the representations to predefined …

Score-based diffusion models as principled priors for inverse imaging

BT Feng, J Smith, M Rubinstein… - Proceedings of the …, 2023 - openaccess.thecvf.com
Priors are essential for reconstructing images from noisy and/or incomplete measurements.
The choice of the prior determines both the quality and uncertainty of recovered images. We …

Epigenomic dissection of Alzheimer's disease pinpoints causal variants and reveals epigenome erosion

X Xiong, BT James, CA Boix, YP Park, K Galani… - Cell, 2023 - cell.com
Recent work has identified dozens of non-coding loci for Alzheimer's disease (AD) risk, but
their mechanisms and AD transcriptional regulatory circuitry are poorly understood. Here …

Deep learning in spiking neural networks

A Tavanaei, M Ghodrati, SR Kheradpisheh… - Neural networks, 2019 - Elsevier
In recent years, deep learning has revolutionized the field of machine learning, for computer
vision in particular. In this approach, a deep (multilayer) artificial neural network (ANN) is …

Finite versus infinite neural networks: an empirical study

J Lee, S Schoenholz, J Pennington… - Advances in …, 2020 - proceedings.neurips.cc
We perform a careful, thorough, and large-scale empirical study of the correspondence
between wide neural networks and kernel methods. By doing so, we resolve a variety of …

[BOOK][B] Dynamic mode decomposition: data-driven modeling of complex systems

The integration of data and scientific computation is driving a paradigm shift across the
engineering, natural, and physical sciences. Indeed, there exists an unprecedented …

Hiding images in plain sight: Deep steganography

S Baluja - Advances in neural information processing …, 2017 - proceedings.neurips.cc
Steganography is the practice of concealing a secret message within another, ordinary,
message. Commonly, steganography is used to unobtrusively hide a small message within …

Neural tuning and representational geometry

N Kriegeskorte, XX Wei - Nature Reviews Neuroscience, 2021 - nature.com
A central goal of neuroscience is to understand the representations formed by brain activity
patterns and their connection to behaviour. The classic approach is to investigate how …