Inner product-based convolution has been a central component of convolutional neural networks (CNNs) and the key to learning visual representations. Inspired by the observation …
Convolutional neural networks (CNNs) are inherently limited in their ability to model geometric transformations, owing to the fixed geometric structure of their building modules. In this work, we …
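The remedy this line of work proposes is to let the network learn where to sample: a regular convolution predicts per-location 2D offsets, and the kernel then samples the input at those displaced positions via bilinear interpolation. A minimal sketch using torchvision's deformable-convolution op (the tiny block itself is illustrative, not the paper's architecture):

```python
# Minimal deformable-convolution sketch (illustrative; uses torchvision.ops).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # A regular conv predicts a 2D offset (dy, dx) for each of the k*k
        # kernel sampling locations, at every output position.
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        # The deformable conv samples the input at the offset positions
        # (bilinear interpolation) instead of a fixed square grid.
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        offsets = self.offset_pred(x)          # (N, 2*k*k, H, W)
        return self.deform_conv(x, offsets)    # (N, out_ch, H, W)

x = torch.randn(1, 8, 32, 32)
y = DeformBlock(8, 16)(x)
print(y.shape)  # torch.Size([1, 16, 32, 32])
```

In the paper the offset layer is initialized to zero so training starts from an ordinary grid convolution; the random initialization above is only for brevity.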
E Kauderer-Abrams - arXiv preprint arXiv:1801.01450, 2017 - arxiv.org
A fundamental problem in object recognition is the development of image representations that are invariant to common transformations such as translation, rotation, and small …
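The kind of measurement this snippet describes can be run directly: translate an input, re-extract features, and compare them with the untranslated features. A small sketch of such a probe (the model and the similarity metric are placeholders, not the paper's exact protocol):

```python
# Empirically probe translation invariance of a feature extractor (sketch).
import torch
import torch.nn as nn

model = nn.Sequential(  # stand-in feature extractor, not the paper's models
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2), nn.Flatten(),
)

def invariance_score(x, dx, dy):
    """Cosine similarity between features of x and a translated copy of x."""
    shifted = torch.roll(x, shifts=(dy, dx), dims=(2, 3))  # circular shift
    f0, f1 = model(x), model(shifted)
    return nn.functional.cosine_similarity(f0, f1).mean().item()

x = torch.randn(4, 1, 28, 28)
for d in (1, 2, 4, 8):
    print(d, round(invariance_score(x, d, d), 3))
```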
Translating or rotating an input image should not affect the results of many computer vision tasks. Convolutional neural networks (CNNs) are already translation equivariant: input …
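The translation-equivariance property itself can be verified numerically: shifting the input and then convolving equals convolving and then shifting (exactly so for circular shifts with circular padding). A minimal check:

```python
# Check conv translation equivariance: conv(shift(x)) == shift(conv(x)).
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 4, kernel_size=3, padding=1, padding_mode="circular")
x = torch.randn(1, 1, 16, 16)

shift = lambda t: torch.roll(t, shifts=(3, 5), dims=(2, 3))
lhs = conv(shift(x))
rhs = shift(conv(x))
print(torch.allclose(lhs, rhs, atol=1e-6))  # True
```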
We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between …
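The alignment score at the core of Network Dissection is an intersection-over-union: a unit's activation map is upsampled and thresholded at a top quantile, and the resulting binary mask is compared against a human-labeled concept mask. A schematic version (in the paper the quantile is taken over the unit's activations across the whole dataset; a single map stands in here):

```python
# Schematic Network Dissection alignment score (sketch).
import numpy as np

def dissection_iou(act_map, concept_mask, quantile=0.995):
    """IoU between a unit's thresholded activations and a concept mask.
    act_map: float array (H, W), already upsampled to mask resolution.
    concept_mask: bool array (H, W) from a labeled segmentation."""
    thresh = np.quantile(act_map, quantile)   # top-quantile threshold
    unit_mask = act_map > thresh
    inter = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return inter / union if union else 0.0

act = np.random.rand(112, 112)
mask = np.zeros((112, 112), dtype=bool)
mask[40:70, 40:70] = True
print(dissection_iou(act, mask))
```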
How can prior knowledge of a domain's transformation invariances be incorporated into the architecture of a neural network? We propose Equivariant Transformers (ETs), a …
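The snippet cuts off before the mechanism, so the sketch below shows only the general recipe such architectures share: predict the nuisance transformation from the input, undo it, and feed the canonicalized input to a standard network. This is a generic pose-canonicalization pattern, not necessarily the ETs construction:

```python
# Generic pose-canonicalization sketch (not necessarily the ET construction).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Canonicalizer(nn.Module):
    def __init__(self):
        super().__init__()
        # Tiny pose regressor: predicts a rotation angle from the image.
        self.pose = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 1))

    def forward(self, x):
        theta = self.pose(x)                       # predicted rotation (N, 1)
        cos, sin = torch.cos(-theta), torch.sin(-theta)
        # Build inverse-rotation affine grids and resample the input.
        mats = torch.stack(
            [torch.stack([cos, -sin, torch.zeros_like(cos)], -1),
             torch.stack([sin,  cos, torch.zeros_like(cos)], -1)], 1).squeeze(2)
        grid = F.affine_grid(mats, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

x = torch.randn(2, 1, 28, 28)
print(Canonicalizer()(x).shape)  # canonicalized input for a downstream CNN
```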
A Dosovitskiy, T Brox - Proceedings of the IEEE conference on …, 2016 - cv-foundation.org
Feature representations, both hand-designed and learned, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to …
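The approach described here trains a decoder (an up-convolutional network in the paper) to map a feature vector back to pixels, so the reconstruction itself reveals what the representation preserves. A toy version of such a decoder and its reconstruction objective (the architecture is a placeholder, not the paper's):

```python
# Toy feature-inversion decoder (placeholder architecture, sketch only).
import torch
import torch.nn as nn

decoder = nn.Sequential(
    nn.Linear(128, 256 * 4 * 4), nn.ReLU(),
    nn.Unflatten(1, (256, 4, 4)),
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8x8
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),                # 32x32
)

feats = torch.randn(8, 128)          # features from some frozen encoder
images = torch.randn(8, 3, 32, 32)   # the images those features came from
recon = decoder(feats)
loss = nn.functional.mse_loss(recon, images)  # train decoder to invert
loss.backward()
```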
B Zhou, D Bau, A Oliva… - IEEE transactions on …, 2018 - ieeexplore.ieee.org
The success of recent deep convolutional neural networks (CNNs) depends on learning hidden representations that can summarize the important factors of variation behind the …
N Benbarka, T Höfer, A Zell - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Implicit Neural Representations (INR) use multilayer perceptrons to represent high-frequency functions in low-dimensional problem domains. Recently these representations …
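The basic object under study can be written in a few lines: an MLP that maps a 2D coordinate through a fixed bank of sinusoids (a Fourier-feature encoding) to a pixel value, which is what lets an MLP with low-dimensional inputs fit high-frequency signals. A minimal sketch (the frequencies and widths are arbitrary choices here):

```python
# Minimal Fourier-feature INR: coordinates -> sinusoids -> MLP -> RGB.
import torch
import torch.nn as nn

class FourierINR(nn.Module):
    def __init__(self, n_freqs=16, hidden=128):
        super().__init__()
        # Fixed random frequency matrix B lifts (x, y) to 2*n_freqs features.
        self.register_buffer("B", torch.randn(2, n_freqs) * 10.0)
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB
        )

    def forward(self, coords):                  # coords: (N, 2) in [0, 1]^2
        proj = 2 * torch.pi * coords @ self.B   # (N, n_freqs)
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        return self.mlp(feats)

coords = torch.rand(1024, 2)
print(FourierINR()(coords).shape)  # torch.Size([1024, 3])
```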