Articles added in the past year, sorted by date

CoDi-2: In-Context Interleaved and Interactive Any-to-Any Generation

Z Tang, Z Yang, M Khademi, Y Liu… - Proceedings of the IEEE/CVF Conference on Computer Vision and …, 2024 - openaccess.thecvf.com
Abstract
We present CoDi-2, a Multimodal Large Language Model (MLLM) for learning in-context interleaved multimodal representations. By aligning modalities with language for both encoding and generation, CoDi-2 empowers Large Language Models (LLMs) to understand modality-interleaved instructions and in-context examples and to autoregressively generate grounded and coherent multimodal outputs in an any-to-any input-output modality paradigm. To train CoDi-2, we build a large-scale generation dataset encompassing in-context multimodal instructions across text, vision, and audio. CoDi-2 demonstrates a wide range of zero-shot and few-shot capabilities for tasks such as editing, exemplar learning, composition, and reasoning. CoDi-2 surpasses previous domain-specific models on tasks such as subject-driven image generation, vision transformation, and audio editing, and showcases a significant advancement in integrating diverse multimodal tasks with sequential generation.
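The encode-align-generate loop the abstract describes can be pictured with a short conceptual sketch. The names below (`Segment`, `encode_to_language_space`, `autoregressive_generate`) are hypothetical placeholders, not CoDi-2's actual interfaces; the code only mimics the any-to-any interleaved flow at a toy level under those assumptions.

```python
# Conceptual sketch only: toy illustration of the any-to-any, in-context
# interleaved paradigm described in the abstract. All names are hypothetical
# placeholders, not the authors' code or API.

from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    modality: str   # "text", "image", or "audio"
    payload: str    # stand-in for raw content or a file reference

def encode_to_language_space(segment: Segment) -> str:
    """Map any input modality into a shared, language-aligned token string
    (in the real model this would be a learned projection into LLM space)."""
    return f"<{segment.modality}>{segment.payload}</{segment.modality}>"

def autoregressive_generate(context_tokens: List[str], target_modality: str) -> Segment:
    """Stand-in for the LLM backbone plus a modality-specific decoder:
    consumes the interleaved context and emits one grounded output segment."""
    prompt = " ".join(context_tokens)
    return Segment(target_modality, f"generated[{target_modality}] from ({prompt})")

# Interleaved instruction plus an in-context example, mixing modalities freely.
context = [
    Segment("text", "Make the dog in the photo wear a red hat."),
    Segment("image", "dog_photo_ref"),
]
tokens = [encode_to_language_space(s) for s in context]
output = autoregressive_generate(tokens, target_modality="image")
print(output)
```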