Machine learning and deep learning—A review for ecologists

M Pichler, F Hartig - Methods in Ecology and Evolution, 2023 - Wiley Online Library
The popularity of machine learning (ML), deep learning (DL) and artificial intelligence (AI)
has risen sharply in recent years. Despite this spike in popularity, the inner workings of ML …

Neural fields in visual computing and beyond

Y Xie, T Takikawa, S Saito, O Litany… - Computer Graphics …, 2022 - Wiley Online Library
Recent advances in machine learning have led to increased interest in solving visual
computing problems using methods that employ coordinate-based neural networks. These …

Unsupervised 3D perception with 2D vision-language distillation for autonomous driving

M Najibi, J Ji, Y Zhou, CR Qi, X Yan… - Proceedings of the …, 2023 - openaccess.thecvf.com
Closed-set 3D perception models trained on only a pre-defined set of object categories can
be inadequate for safety-critical applications such as autonomous driving, where new object …

On the spectral bias of two-layer linear networks

AV Varre, ML Vladarean… - Advances in …, 2024 - proceedings.neurips.cc
This paper studies the behaviour of two-layer fully connected networks with linear
activations trained with gradient flow on the square loss. We show how the optimization …

Sharpness-aware minimization leads to low-rank features

M Andriushchenko, D Bahri… - Advances in Neural …, 2023 - proceedings.neurips.cc
Sharpness-aware minimization (SAM) is a recently proposed method that minimizes the
sharpness of the training loss of a neural network. While its generalization improvement is …

Stochastic collapse: How gradient noise attracts SGD dynamics towards simpler subnetworks

F Chen, D Kunin, A Yamamura… - Advances in Neural …, 2024 - proceedings.neurips.cc
In this work, we reveal a strong implicit bias of stochastic gradient descent (SGD) that drives
overly expressive networks to much simpler subnetworks, thereby dramatically reducing the …

Towards the difficulty for a deep neural network to learn concepts of different complexities

D Liu, H Deng, X Cheng, Q Ren… - Advances in Neural …, 2024 - proceedings.neurips.cc
This paper theoretically explains the intuition that simple concepts are more likely to be
learned by deep neural networks (DNNs) than complex concepts. In fact, recent studies …

Can contrastive learning avoid shortcut solutions?

J Robinson, L Sun, K Yu… - Advances in neural …, 2021 - proceedings.neurips.cc
The generalization of representations learned via contrastive learning depends crucially on
what features of the data are extracted. However, we observe that the contrastive loss does …

The difficulty of passive learning in deep reinforcement learning

G Ostrovski, PS Castro… - Advances in Neural …, 2021 - proceedings.neurips.cc
Learning to act from observational data without active environmental interaction is a well-
known challenge in Reinforcement Learning (RL). Recent approaches involve constraints …

Attribute-aware deep hashing with self-consistency for large-scale fine-grained image retrieval

XS Wei, Y Shen, X Sun, P Wang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Our work focuses on tackling large-scale fine-grained image retrieval as ranking the images
depicting the concepts of interest (i.e., the same sub-category labels) highest based on the …