DragDiffusion: Harnessing diffusion models for interactive point-based image editing

Y Shi, C Xue, JH Liew, J Pan, H Yan… - Proceedings of the …, 2024 - openaccess.thecvf.com
Accurate and controllable image editing is a challenging task that has attracted significant
attention recently. Notably, DragGAN, developed by Pan et al. (2023), is an interactive point …

Image sculpting: Precise object editing with 3d geometry control

J Yenphraphai, X Pan, S Liu… - Proceedings of the …, 2024 - openaccess.thecvf.com
We present Image Sculpting, a new framework for editing 2D images by
incorporating tools from 3D geometry and graphics. This approach differs markedly from …

Drag your noise: Interactive point-based editing via diffusion semantic propagation

H Liu, C Xu, Y Yang, L Zeng… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Point-based interactive editing serves as an essential tool to complement the controllability
of existing generative models. A concurrent work, DragDiffusion, updates the diffusion latent …

EditableNeRF: Editing topologically varying neural radiance fields by key points

C Zheng, W Lin, F Xu - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Neural radiance fields (NeRF) achieve highly photo-realistic novel-view synthesis, but editing
the scenes modeled by NeRF-based methods remains a challenging problem, especially for …

GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from a Single Image

C Bao, Y Zhang, Y Li, X Zhang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Recently, we have witnessed the explosive growth of various volumetric representations in
modeling animatable head avatars. However, due to the diversity of frameworks, there is no …

Mini-DALLE3: Interactive text to image by prompting large language models

L Zeqiang, Z Xizhou, D Jifeng, Q Yu… - arXiv preprint arXiv …, 2023 - arxiv.org
The revolution in artificial intelligence content generation has been rapidly accelerated by
booming text-to-image (T2I) diffusion models. Within just two years of development, it …

Wear-any-way: Manipulable virtual try-on via sparse correspondence alignment

M Chen, X Chen, Z Zhai, C Ju, X Hong, J Lan… - arXiv preprint arXiv …, 2024 - arxiv.org
This paper introduces a novel framework for virtual try-on, termed Wear-Any-Way. Unlike
previous methods, Wear-Any-Way is a customizable solution. Besides generating high …

A Survey of Multimodal-Guided Image Editing with Text-to-Image Diffusion Models

X Shuai, H Ding, X Ma, R Tu, YG Jiang… - arXiv preprint arXiv …, 2024 - arxiv.org
Image editing aims to edit a given synthetic or real image to meet users' specific requirements.
It has been widely studied in recent years as a promising and challenging field of …

StableDrag: Stable Dragging for Point-based Image Editing

Y Cui, X Zhao, G Zhang, S Cao, K Ma… - arXiv preprint arXiv …, 2024 - arxiv.org
Point-based image editing has attracted remarkable attention since the emergence of
DragGAN. Recently, DragDiffusion has further pushed forward the generative quality via …

Diffusion Model-Based Video Editing: A Survey

W Sun, RC Tu, J Liao, D Tao - arXiv preprint arXiv:2407.07111, 2024 - arxiv.org
The rapid development of diffusion models (DMs) has significantly advanced image and
video applications, making "what you want is what you see" a reality. Among these, video …