Emergent correspondence from image diffusion

L Tang, M Jia, Q Wang, CP Phoo… - Advances in Neural …, 2023 - proceedings.neurips.cc
Finding correspondences between images is a fundamental problem in computer vision. In
this paper, we show that correspondence emerges in image diffusion models without any …

AiATrack: Attention in attention for transformer visual tracking

S Gao, C Zhou, C Ma, X Wang, J Yuan - European Conference on …, 2022 - Springer
Transformer trackers have achieved impressive advancements recently, where the attention
mechanism plays an important role. However, the independent correlation computation in …

Geometric transformer for fast and robust point cloud registration

Z Qin, H Yu, C Wang, Y Guo… - Proceedings of the …, 2022 - openaccess.thecvf.com
We study the problem of extracting accurate correspondences for point cloud registration.
Recent keypoint-free methods bypass the detection of repeatable keypoints which is difficult …

Tracking everything everywhere all at once

Q Wang, YY Chang, R Cai, Z Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present a new test-time optimization method for estimating dense and long-range motion
from a video sequence. Prior optical flow or particle video tracking algorithms typically …

Graph neural networks: foundation, frontiers and applications

L Wu, P Cui, J Pei, L Zhao, X Guo - … of the 28th ACM SIGKDD Conference …, 2022 - dl.acm.org
The field of graph neural networks (GNNs) has seen rapid and incredible strides over the
recent years. Graph neural networks, also known as deep learning on graphs, graph …

LoFTR: Detector-free local feature matching with transformers

J Sun, Z Shen, Y Wang, H Bao… - Proceedings of the …, 2021 - openaccess.thecvf.com
We present a novel method for local image feature matching. Instead of performing image
feature detection, description, and matching sequentially, we propose to first establish pixel …

ASpanFormer: Detector-free image matching with adaptive span transformer

H Chen, Z Luo, L Zhou, Y Tian, M Zhen, T Fang… - … on Computer Vision, 2022 - Springer
Generating robust and reliable correspondences across images is a fundamental task for a
diversity of applications. To capture context at both global and local granularity, we propose …

Cost aggregation with 4d convolutional swin transformer for few-shot segmentation

S Hong, S Cho, J Nam, S Lin, S Kim - European Conference on Computer …, 2022 - Springer
This paper presents a novel cost aggregation network, called Volumetric Aggregation with
Transformers (VAT), for few-shot segmentation. The use of transformers can benefit …

Hierarchical dense correlation distillation for few-shot segmentation

B Peng, Z Tian, X Wu, C Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Few-shot semantic segmentation (FSS) aims to form class-agnostic models segmenting
unseen classes with only a handful of annotations. Previous methods limited to the semantic …

Relational embedding for few-shot classification

D Kang, H Kwon, J Min, M Cho - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
We propose to address the problem of few-shot classification by meta-learning "what to
observe" and "where to attend" in a relational perspective. Our method leverages relational …