The brain has a well-known large-scale organization at the level of brain regions, where different regions have been argued to primarily serve different behavioral functions. This …
NK Jha, B Reagen - arXiv preprint arXiv:2410.13060, 2024 - arxiv.org
The pervasiveness of proprietary language models has raised privacy concerns for users' sensitive data, emphasizing the need for private inference (PI), where inference is performed …
NK Jha, B Reagen - arXiv preprint arXiv:2410.09637, 2024 - arxiv.org
LayerNorm is a critical component in modern large language models (LLMs) for stabilizing training and ensuring smooth optimization. However, it introduces significant challenges in …
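For reference, LayerNorm normalizes each token's features by their mean and variance and then applies a learned affine transform. The minimal NumPy sketch below is generic (variable names are illustrative, not taken from the paper); its division and square root are the nonlinear steps that are generally costly in settings such as private inference.

    import numpy as np

    def layer_norm(x, gamma, beta, eps=1e-5):
        # Normalize over the last (feature) dimension of x.
        # x: activations, shape (..., d_model); gamma, beta: learned scale/shift, shape (d_model,)
        mean = x.mean(axis=-1, keepdims=True)
        var = x.var(axis=-1, keepdims=True)
        x_hat = (x - mean) / np.sqrt(var + eps)  # per-token normalization (division, square root)
        return gamma * x_hat + beta              # learned affine transform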
Variational autoencoders (VAE) employ Bayesian inference to interpret sensory inputs, mirroring processes that occur in primate vision across both ventral (Higgins et al., 2021) …
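As a reminder of the mechanics, a VAE pairs a probabilistic encoder q(z|x) with a decoder p(x|z) and is trained by maximizing the evidence lower bound (ELBO). The sketch below is a generic single-sample ELBO for a Gaussian VAE, not code from the paper; encode and decode stand in for whatever networks a given model uses.

    import numpy as np

    rng = np.random.default_rng(0)

    def gaussian_vae_elbo(x, encode, decode):
        # encode(x) -> (mu, log_var): parameters of the approximate posterior q(z|x)
        # decode(z) -> x_hat:         mean of p(x|z), assumed unit-variance Gaussian
        mu, log_var = encode(x)
        eps = rng.standard_normal(mu.shape)
        z = mu + np.exp(0.5 * log_var) * eps     # reparameterization trick
        x_hat = decode(z)
        recon = -0.5 * np.sum((x - x_hat) ** 2)  # log p(x|z) up to an additive constant
        kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)  # KL(q(z|x) || N(0, I))
        return recon - kl                        # maximize this lower bound during training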
R Srinath, MM Czarnik, MR Cohen - bioRxiv, 2024 - pmc.ncbi.nlm.nih.gov
We use sensory information in remarkably flexible ways. We can generalize by ignoring task-irrelevant features, report different features of a stimulus, and use different actions to report a …
Trained recurrent neural networks (RNNs) have become the leading framework for modeling neural dynamics in the brain, owing to their capacity to mimic how population-level …
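Concretely, such models are usually continuous-time "vanilla" RNNs integrated with Euler steps, with a linear readout that is compared against recorded population activity. The sketch below uses illustrative dimensions and random weights rather than anything from the paper.

    import numpy as np

    def simulate_rnn(u, dt=0.01, tau=0.1, n_hidden=128, n_out=10, seed=0):
        # Integrate  tau * dh/dt = -h + tanh(W_rec h + W_in u(t))  with Euler steps.
        # u: inputs of shape (n_steps, n_in); returns hidden states and linear readout.
        rng = np.random.default_rng(seed)
        n_steps, n_in = u.shape
        W_in = rng.standard_normal((n_hidden, n_in)) / np.sqrt(n_in)
        W_rec = rng.standard_normal((n_hidden, n_hidden)) / np.sqrt(n_hidden)
        W_out = rng.standard_normal((n_out, n_hidden)) / np.sqrt(n_hidden)
        h = np.zeros(n_hidden)
        hs, ys = [], []
        for t in range(n_steps):
            h = h + dt * (-h + np.tanh(W_rec @ h + W_in @ u[t])) / tau  # Euler update
            hs.append(h.copy())
            ys.append(W_out @ h)  # readout that would be fit to population recordings
        return np.array(hs), np.array(ys)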
The progression of neuroscience relies on the discovery of structure in the brain. From the discovery of neurons to the structure of the potassium channel, and, in recent years, the …
Models trained with self-supervised learning objectives have recently matched or surpassed models trained with traditional supervised object recognition in their ability to predict neural …
Delayed generalization, also known as "grokking", has emerged as a well-replicated phenomenon in overparameterized neural networks. Recent theoretical works associated …