This paper proposes LLaFS, the first attempt to leverage large language models (LLMs) in few-shot segmentation. In contrast to the conventional few-shot segmentation methods that …
Y Wang, N Luo, T Zhang - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Few-shot segmentation (FSS) aims to segment objects of new categories given only a handful of annotated samples. Previous works focus their efforts on exploring the support …
Y Sun, J Chen, S Zhang, X Zhang… - Proceedings of the …, 2024 - openaccess.thecvf.com
In this paper we propose a novel Visual Reference Prompt (VRP) encoder that empowers the Segment Anything Model (SAM) to utilize annotated reference images as prompts for …
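The snippet is cut off before it explains how an annotated reference image actually becomes a prompt. Below is a minimal sketch of one plausible reading, assuming the reference features are masked-pooled into prompt tokens that a SAM-style mask decoder could consume in place of point/box embeddings; the function name and token scheme are hypothetical illustrations, not the paper's API.

```python
import torch
import torch.nn.functional as F

def encode_reference_prompt(ref_feats, ref_mask, num_tokens=4):
    """Hypothetical sketch: turn an annotated reference image into prompt tokens.

    ref_feats: (C, H, W) features of the reference image from a frozen backbone.
    ref_mask:  (H0, W0) binary annotation of the target object in the reference image.
    Returns (num_tokens, C) prompt embeddings, e.g. to stand in for point/box
    prompt embeddings of a SAM-style mask decoder.
    """
    C, H, W = ref_feats.shape
    mask = F.interpolate(ref_mask[None, None].float(), size=(H, W), mode="nearest")[0, 0]

    # Masked average pooling: one coarse foreground prototype.
    fg = (ref_feats * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)   # (C,)

    # Expand the prototype into a small set of prompt tokens
    # (a real encoder would refine these, e.g. with attention over query features).
    tokens = fg[None].repeat(num_tokens, 1)                               # (num_tokens, C)
    return tokens

# Usage with dummy tensors:
ref_feats = torch.randn(256, 64, 64)
ref_mask = (torch.rand(512, 512) > 0.5)
prompt_tokens = encode_reference_prompt(ref_feats, ref_mask)
print(prompt_tokens.shape)  # torch.Size([4, 256])
```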
W Tan, S Chen, B Yan - arXiv preprint arXiv:2307.00773, 2023 - arxiv.org
Diffusion models have demonstrated excellent performance in image generation. Although various few-shot semantic segmentation (FSS) models with different network structures have …
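The snippet stops before describing the method, so the following is only an illustration of the general idea it gestures at: using an off-the-shelf diffusion model as a data generator to enlarge a one-shot support set before running any FSS model. The checkpoint id, prompt, and strength value are assumptions, not the cited paper's pipeline.

```python
# Sketch: augment a one-shot support set with diffusion-generated variants.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed checkpoint, swap as needed
    torch_dtype=torch.float16,
).to("cuda")

support_img = Image.open("support.jpg").convert("RGB").resize((512, 512))

# Low strength keeps the object layout close to the original support image,
# so the original support mask remains approximately valid for the variants.
variants = [
    pipe(prompt="a photo of the same object, different lighting and background",
         image=support_img, strength=0.3, guidance_scale=7.5).images[0]
    for _ in range(4)
]
```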
X Luo, Z Tian, T Zhang, B Yu… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
In this work, we revisit the prior mask guidance proposed in “Prior Guided Feature Enrichment Network for Few-Shot Segmentation”. The prior mask serves as an indicator that …
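For context, the prior mask introduced in PFENet is a training-free cue: for each query pixel it takes the maximum cosine similarity to the masked support foreground features and min-max normalizes the result to [0, 1]. A minimal sketch of that computation (tensor names are illustrative):

```python
import torch
import torch.nn.functional as F

def prior_mask(query_feats, support_feats, support_mask, eps=1e-7):
    """Training-free prior: per query pixel, the max cosine similarity to any
    support foreground pixel, min-max normalized to [0, 1].

    query_feats:   (C, Hq, Wq) high-level query features.
    support_feats: (C, Hs, Ws) high-level support features.
    support_mask:  (Hs, Ws) binary foreground mask, already at feature resolution.
    """
    C = query_feats.shape[0]
    q = F.normalize(query_feats.reshape(C, -1), dim=0)                     # (C, Nq)
    s = F.normalize((support_feats * support_mask).reshape(C, -1), dim=0)  # (C, Ns)

    sim = q.t() @ s                   # (Nq, Ns) cosine similarities
    prior = sim.max(dim=1).values     # (Nq,) best support match per query pixel

    prior = (prior - prior.min()) / (prior.max() - prior.min() + eps)
    return prior.reshape(query_feats.shape[1:])   # (Hq, Wq)
```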
This study addresses the Domain-Class Incremental Learning problem, a realistic but challenging continual learning scenario where both the domain distribution and target …
B Peng, Z Tian, S Liu, M Yang, J Jia - arXiv preprint arXiv:2404.07470, 2024 - arxiv.org
Continual learning has gained increasing importance as it facilitates the acquisition and refinement of scalable knowledge and skills in language models. However, existing …
S Wu, H Tan, Z Tian, Y Chen… - Proceedings of the …, 2024 - openaccess.thecvf.com
Vision-language pre-training (VLP) aims to learn joint representations of vision and language modalities. The contrastive paradigm is currently dominant in this field. However …
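The "contrastive paradigm" referred to here is CLIP-style training: embeddings of matched image-text pairs are pulled together and unmatched ones pushed apart with a symmetric InfoNCE loss. A minimal sketch, with the encoders left out and replaced by precomputed embeddings:

```python
import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched image-text pairs.

    img_emb, txt_emb: (B, D) embeddings from the image and text encoders;
    row i of each tensor comes from the same image-text pair.
    """
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)

    logits = img_emb @ txt_emb.t() / temperature     # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0), device=img_emb.device)

    loss_i2t = F.cross_entropy(logits, targets)      # match each image to its text
    loss_t2i = F.cross_entropy(logits.t(), targets)  # and each text to its image
    return 0.5 * (loss_i2t + loss_t2i)

# Usage with dummy embeddings:
loss = clip_style_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```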
Plant segmentation is a challenging computer vision task due to the complexity of plant images. For many practical problems, we have to solve even more difficult tasks. We need to …