Style injection in diffusion: A training-free approach for adapting large-scale diffusion models for style transfer

J Chung, S Hyun, JP Heo - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Despite the impressive generative capabilities of diffusion models, existing diffusion model-
based style transfer methods require inference-stage optimization (e.g., fine-tuning or textual …

EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models

J Yang, J Feng, H Huang - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Recent years have witnessed remarkable progress in image generation tasks, where users
can create visually astonishing images with high quality. However, existing text-to-image …

Concept-centric personalization with large-scale diffusion priors

P Cao, L Yang, F Zhou, T Huang, Q Song - arXiv preprint arXiv …, 2023 - arxiv.org
Despite large-scale diffusion models being highly capable of generating diverse open-world
content, they still struggle to match the photorealism and fidelity of concept-specific …

Pick-and-draw: Training-free semantic guidance for text-to-image personalization

H Lv, J Xiao, L Li, Q Huang - arXiv preprint arXiv:2401.16762, 2024 - arxiv.org
Diffusion-based text-to-image personalization has achieved great success in generating
subjects specified by users among various contexts. However, existing finetuning-based …

Portrait diffusion: Training-free face stylization with chain-of-painting

J Liu, H Huang, C Jin, R He - arXiv preprint arXiv:2312.02212, 2023 - arxiv.org
Face stylization refers to the transformation of a face into a specific portrait style. However,
current methods require the use of example-based adaptation approaches to fine-tune pre …

Visual Layout Composer: Image-Vector Dual Diffusion Model for Design Layout Generation

MA Shabani, Z Wang, D Liu, N Zhao… - Proceedings of the …, 2024 - openaccess.thecvf.com
This paper proposes an image-vector dual diffusion model for generative layout design.
Distinct from prior efforts that mostly ignore element-level visual information, our approach …

U-VAP: User-specified Visual Appearance Personalization via Decoupled Self Augmentation

Y Wu, K Liu, X Mi, F Tang, J Cao… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Concept personalization methods enable large text-to-image models to learn
specific subjects (e.g., objects/poses/3D models) and synthesize renditions in new contexts …

Complex Style Image Transformations for Domain Generalization in Medical Images

N Spanos, A Arsenos, PA Theofilou… - Proceedings of the …, 2024 - openaccess.thecvf.com
The absence of well-structured large datasets in medical computer vision results in
decreased performance of automated systems and especially of deep learning models …

Freestyle: Free lunch for text-guided style transfer using diffusion models

F He, G Li, M Zhang, L Yan, L Si, F Li - arXiv preprint arXiv:2401.15636, 2024 - arxiv.org
The rapid development of generative diffusion models has significantly advanced the field of
style transfer. However, most current style transfer methods based on diffusion models …

Closed-Loop Unsupervised Representation Disentanglement with β-VAE Distillation and Diffusion Probabilistic Feedback

X Jin, B Li, B Xie, W Zhang, J Liu, Z Li, T Yang… - arXiv preprint arXiv …, 2024 - arxiv.org
Representation disentanglement may help AI fundamentally understand the real world and
thus benefit both discrimination and generation tasks. It currently has at least three …