Understanding image representations by measuring their equivariance and equivalence

K Lenc, A Vedaldi - Proceedings of the IEEE conference on …, 2015 - cv-foundation.org
Despite the importance of image representations such as histograms of oriented gradients
and deep Convolutional Neural Networks (CNN), our theoretical understanding of them …

Decoupled networks

W Liu, Z Liu, Z Yu, B Dai, R Lin… - Proceedings of the …, 2018 - openaccess.thecvf.com
Inner product-based convolution has been a central component of convolutional neural
networks (CNNs) and the key to learning visual representations. Inspired by the observation …

Deformable convolutional networks

J Dai, H Qi, Y Xiong, Y Li, G Zhang… - Proceedings of the …, 2017 - openaccess.thecvf.com
Convolutional neural networks (CNNs) are inherently limited in modeling geometric
transformations due to the fixed geometric structures in their building modules. In this work, we …

Quantifying translation-invariance in convolutional neural networks

E Kauderer-Abrams - arXiv preprint arXiv:1801.01450, 2017 - arxiv.org
A fundamental problem in object recognition is the development of image representations
that are invariant to common transformations such as translation, rotation, and small …

Harmonic networks: Deep translation and rotation equivariance

DE Worrall, SJ Garbin… - Proceedings of the …, 2017 - openaccess.thecvf.com
Translating or rotating an input image should not affect the results of many computer vision
tasks. Convolutional neural networks (CNNs) are already translation equivariant: input …
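The snippet's claim that CNNs are already translation equivariant can be checked directly: shifting the input and then convolving gives the same result as convolving and then shifting. A minimal NumPy/SciPy sketch, assuming circular ('wrap') boundaries so the equivariance holds exactly (the random image, kernel, and shift amount are illustrative):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.random((16, 16))
kernel = rng.random((3, 3))

def shift_down(x, k=2):
    """Circularly shift an array k rows downward."""
    return np.roll(x, k, axis=0)

# Convolve then shift vs. shift then convolve; 'wrap' boundaries make the
# translation group act exactly, so the two orders agree.
a = shift_down(convolve2d(image, kernel, mode='same', boundary='wrap'))
b = convolve2d(shift_down(image), kernel, mode='same', boundary='wrap')

print(np.allclose(a, b))  # True: the feature map shifts with the input
```

With non-circular boundaries the identity holds only away from the image border, which is one reason exact equivariance to other transformations (e.g. rotation) requires the specially designed filters these papers propose.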

Network dissection: Quantifying interpretability of deep visual representations

D Bau, B Zhou, A Khosla, A Oliva… - Proceedings of the …, 2017 - openaccess.thecvf.com
We propose a general framework called Network Dissection for quantifying the
interpretability of latent representations of CNNs by evaluating the alignment between …
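The alignment measure at the core of Network Dissection scores each unit against each visual concept as the intersection-over-union between the unit's thresholded activation map and the concept's segmentation mask. A toy sketch of that scoring step (function name, threshold, and data are illustrative, not the paper's implementation):

```python
import numpy as np

def unit_concept_iou(activation, concept_mask, threshold):
    """Score one unit against one concept as IoU of binarized masks."""
    unit_mask = activation > threshold
    inter = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return inter / union if union else 0.0

# Toy example: a unit that fires exactly on the concept region scores 1.0.
concept = np.zeros((8, 8), dtype=bool)
concept[2:5, 2:5] = True
activation = concept.astype(float)
print(unit_concept_iou(activation, concept, threshold=0.5))  # 1.0
```

In the paper this score is computed over a densely labeled dataset, and a unit counts as a detector for the concept whose IoU with it is highest above a cutoff.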

Equivariant transformer networks

KS Tai, P Bailis, G Valiant - International Conference on …, 2019 - proceedings.mlr.press
How can prior knowledge of the transformation invariances of a domain be incorporated
into the architecture of a neural network? We propose Equivariant Transformers (ETs), a …

Inverting visual representations with convolutional networks

A Dosovitskiy, T Brox - Proceedings of the IEEE conference on …, 2016 - cv-foundation.org
Feature representations, both hand-designed and learned ones, are often hard to analyze
and interpret, even when they are extracted from visual data. We propose a new approach to …

Interpreting deep visual representations via network dissection

B Zhou, D Bau, A Oliva… - IEEE transactions on …, 2018 - ieeexplore.ieee.org
The success of recent deep convolutional neural networks (CNNs) depends on learning
hidden representations that can summarize the important factors of variation behind the …

Seeing implicit neural representations as Fourier series

N Benbarka, T Höfer, A Zell - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Implicit Neural Representations (INR) use multilayer perceptrons to represent high-
frequency functions in low-dimensional problem domains. Recently these representations …
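Viewing an INR through a Fourier lens typically starts from a sinusoidal feature embedding of the input coordinates, which is what lets a small MLP fit high-frequency signals. A minimal sketch of such an embedding (the frequency matrix, its scale, and the dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(16, 1))  # random frequency matrix (hypothetical scale)

def fourier_features(x):
    """Map coordinates x of shape (N, 1) to [cos(2*pi*Bx), sin(2*pi*Bx)]."""
    proj = 2 * np.pi * x @ B.T                                    # (N, 16)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)  # (N, 32)

x = np.linspace(0, 1, 5).reshape(-1, 1)
phi = fourier_features(x)
print(phi.shape)  # (5, 32)
```

An MLP applied to `phi` instead of raw `x` then amounts to learning coefficients over a bank of sinusoids, which is the Fourier-series reading the paper develops.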