Neural fields (NeFs) have recently emerged as a versatile method for modeling signals of various modalities, including images, shapes, and scenes. Subsequently, a number of works …
In recent years, there have been attempts to increase the kernel size of Convolutional Neural Nets (CNNs) to mimic the global receptive field of Vision Transformers' (ViTs) self …
J Cho, S Nam, D Rho, JH Ko, E Park - European Conference on Computer …, 2022 - Springer
Neural fields have emerged as a new data representation paradigm and have shown remarkable success in various signal representations. Since they preserve signals in their …
Implicit neural fields, typically encoded by a multilayer perceptron (MLP) that maps from coordinates (e.g., xyz) to signals (e.g., signed distances), have shown remarkable promise as …
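The snippet above describes the standard coordinate-MLP formulation of an implicit neural field. A minimal NumPy sketch of that shape — an MLP mapping xyz coordinates to a scalar signal — might look like the following; the random, untrained weights and the layer sizes are illustrative assumptions, and real fields typically add trained parameters and positional encodings:

```python
import numpy as np

def coordinate_mlp(coords, weights, biases):
    """Map (N, 3) xyz coordinates to (N, 1) scalar signals with a small MLP.

    ReLU hidden layers with a linear output layer -- the generic shape of
    an implicit neural field (e.g., a signed-distance field).
    """
    h = coords
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ W + b, 0.0)           # ReLU hidden layer
    return h @ weights[-1] + biases[-1]          # linear output (e.g., signed distance)

# Toy instantiation: 3 -> 64 -> 64 -> 1 with random (untrained) weights.
rng = np.random.default_rng(0)
dims = [3, 64, 64, 1]
weights = [rng.standard_normal((i, o)) * 0.1 for i, o in zip(dims[:-1], dims[1:])]
biases = [np.zeros(o) for o in dims[1:]]

xyz = rng.uniform(-1.0, 1.0, size=(5, 3))        # 5 query points in [-1, 1]^3
sdf = coordinate_mlp(xyz, weights, biases)       # one scalar per query point
```

Querying the field is just evaluating the MLP at arbitrary continuous coordinates, which is what makes the representation resolution-free.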
Neural fields have emerged as a powerful and broadly applicable method for representing signals. However, in contrast to classical discrete digital signal processing, the portfolio of …
The original ImageNet dataset is a popular large-scale benchmark for training Deep Neural Networks. Since the cost of performing experiments (e.g., algorithm design, architecture …
The dot-product attention mechanism plays a crucial role in modern deep architectures (e.g., the Transformer) for sequence modeling; however, naïve exact computation of this model incurs …
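The cost the snippet alludes to comes from materializing the full pairwise score matrix. A minimal NumPy sketch of naïve exact scaled dot-product attention, with illustrative shapes (8 tokens, dimension 4) chosen here as assumptions:

```python
import numpy as np

def dot_product_attention(Q, K, V):
    """Naive exact scaled dot-product attention.

    Materializes the full (n, n) score matrix, hence quadratic time and
    memory in the sequence length n.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (n, n) pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of values

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = dot_product_attention(Q, K, V)                # shape (n, d)
```

The `(n, n)` intermediate is exactly what efficient-attention variants try to avoid computing explicitly.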
Neural Representations have recently been shown to effectively reconstruct a wide range of signals from 3D meshes and shapes to images and videos. We show that, when adapted …
S Takashima, R Hayamizu, N Inoue… - Proceedings of the …, 2023 - openaccess.thecvf.com
Formula-driven supervised learning (FDSL) has been shown to be an effective method for pre-training vision transformers, where ExFractalDB-21k was shown to exceed the pre …