[HTML] The role of capacity constraints in Convolutional Neural Networks for learning random versus natural data

C Tsvetkov, G Malhotra, BD Evans, JS Bowers - Neural Networks, 2023 - Elsevier
Convolutional neural networks (CNNs) are often described as promising models of human
vision, yet they show many differences from human abilities. We focus on a superhuman …

[PDF] Adding biological constraints to deep neural networks reduces their capacity to learn unstructured data.

C Tsvetkov, G Malhotra, B Evans, JS Bowers - CogSci, 2020 - cognitivesciencesociety.org
Deep neural networks (DNNs) are becoming increasingly popular as a model of the human
visual system. However, they show behaviours that are uncharacteristic of humans …

[PDF] Convolutional Neural Networks on Randomized Data.

C Ivan - CVPR Workshops, 2019 - openaccess.thecvf.com
Convolutional Neural Networks (CNNs) are built specifically for computer vision tasks, for which it is known that the input data has a hierarchical structure based on locally …
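Since the snippet is truncated, the paper's exact randomization procedure is not shown; below is a minimal sketch (an assumption, not the paper's protocol) of one common way to randomize image data: apply a single fixed pixel permutation to every image, which destroys the local spatial structure CNNs are built around while preserving the pixel values themselves.

# Minimal sketch: shared random pixel permutation applied to an image dataset.
import numpy as np

def permute_pixels(images: np.ndarray, seed: int = 0) -> np.ndarray:
    """images: (N, H, W, C) array; returns images with one shared random
    spatial permutation applied to every example."""
    n, h, w, c = images.shape
    rng = np.random.default_rng(seed)
    perm = rng.permutation(h * w)            # one permutation, reused for all images
    flat = images.reshape(n, h * w, c)
    return flat[:, perm, :].reshape(n, h, w, c)

# Usage: train the same CNN once on `images` and once on
# `permute_pixels(images)` and compare the learning curves.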

A capacity scaling law for artificial neural networks

G Friedland, M Krell - arXiv preprint arXiv:1708.06019, 2017 - arxiv.org
We derive two critical numbers that predict the behavior of perceptron networks. First, we derive what we call the lossless memory (LM) …
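The snippet cuts off before the derivation itself; as classical background only (and explicitly not the lossless-memory or MacKay-style numbers this paper derives), Cover's function-counting theorem bounds how many dichotomies of n points in general position a single d-input perceptron can realize:

\[
  C(n, d) \;=\; 2 \sum_{k=0}^{d-1} \binom{n-1}{k},
\]

which yields the familiar capacity of roughly 2d random patterns per perceptron.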

[HTML] Doing the impossible: Why neural networks can be trained at all

NO Hodas, P Stinis - Frontiers in psychology, 2018 - frontiersin.org
As deep neural networks grow in size, from thousands to millions to billions of weights, the
performance of those networks becomes limited by our ability to accurately train them. A …

Training BatchNorm and only BatchNorm: On the expressive power of random features in CNNs

J Frankle, DJ Schwab, AS Morcos - arXiv preprint arXiv:2003.00152, 2020 - arxiv.org
A wide variety of deep learning techniques, from style transfer to multitask learning, rely on training affine transformations of features. Most prominent among these is the popular …
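As a rough illustration of the setup the title describes, the PyTorch sketch below freezes every parameter except the affine scale and shift of the BatchNorm layers, leaving the convolutional features at their random initialization; the choice of ResNet-18 and the optimizer settings are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(num_classes=10)   # random init, no pretrained weights

for p in model.parameters():
    p.requires_grad = False                    # freeze everything...
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.weight.requires_grad = True          # ...except gamma (per-channel scale)
        m.bias.requires_grad = True            # ...and beta (per-channel shift)

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.1, momentum=0.9)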

Can deep learning networks acquire the robustness of human recognition when faced with objects in visual noise?

H Jang, F Tong - Journal of Vision, 2018 - jov.arvojournals.org
Convolutional neural networks (CNNs) have attracted considerable attention for their
remarkable performance at a variety of cognitive tasks, including visual object recognition …

How to characterize the landscape of overparameterized convolutional neural networks

Y Gu, W Zhang, C Fang, JD Lee… - Advances in Neural …, 2020 - proceedings.neurips.cc
For many initialization schemes, parameters of two randomly initialized deep neural
networks (DNNs) can be quite different, but feature distributions of the hidden nodes are …

On the reproducibility of neural network predictions

S Bhojanapalli, K Wilber, A Veit, AS Rawat… - arXiv preprint arXiv …, 2021 - arxiv.org
Standard training techniques for neural networks involve multiple sources of randomness, e.g., initialization, mini-batch ordering, and in some cases data augmentation. Given that …
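A minimal sketch, assuming a PyTorch training setup, of pinning down the randomness sources the snippet names (initialization, mini-batch ordering, data augmentation); as the paper investigates, seeding alone does not remove every source of run-to-run variation (e.g., some nondeterministic GPU kernels).

import random
import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    random.seed(seed)                    # Python-level RNG (e.g. augmentation pipelines)
    np.random.seed(seed)                 # NumPy-based data processing
    torch.manual_seed(seed)              # weight initialization, dropout
    torch.cuda.manual_seed_all(seed)     # GPU RNG streams
    torch.use_deterministic_algorithms(True, warn_only=True)

set_seed(0)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(64, 3, 32, 32)),
    batch_size=16,
    shuffle=True,
    generator=torch.Generator().manual_seed(0),  # fixes mini-batch ordering
)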

[HTML] High frequency accuracy and loss data of random neural networks trained on image datasets

AK Rorabaugh, S Caíno-Lores, T Johnston, M Taufer - Data in Brief, 2022 - Elsevier
Neural Networks (NNs) are increasingly used across scientific domains to extract knowledge from experimental or computational data. An NN is composed of natural or …