StyleGAN-NADA: CLIP-guided domain adaptation of image generators

R Gal, O Patashnik, H Maron, AH Bermano… - ACM Transactions on …, 2022 - dl.acm.org
Can a generative model be trained to produce images from a specific domain, guided only
by a text prompt, without seeing any image? In other words: can an image generator be …

Text2Human: Text-driven controllable human image generation

Y Jiang, S Yang, H Qiu, W Wu, CC Loy… - ACM Transactions on …, 2022 - dl.acm.org
Generating high-quality and diverse human images is an important yet challenging task in
vision and graphics. However, existing generative models often fall short under the high …

StyleHEAT: One-shot high-resolution editable talking face generation via pre-trained StyleGAN

F Yin, Y Zhang, X Cun, M Cao, Y Fan, X Wang… - European conference on …, 2022 - Springer
One-shot talking face generation aims at synthesizing a high-quality talking face video from
an arbitrary portrait image, driven by a video or an audio segment. In this work, we provide a …

Pastiche Master: Exemplar-based high-resolution portrait style transfer

S Yang, L Jiang, Z Liu, CC Loy - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Recent studies on StyleGAN show high performance on artistic portrait generation by
transfer learning with limited data. In this paper, we explore more challenging exemplar …

VToonify: Controllable high-resolution portrait video style transfer

S Yang, L Jiang, Z Liu, CC Loy - ACM Transactions on Graphics (TOG), 2022 - dl.acm.org
Generating high-quality artistic portrait videos is an important and desirable task in computer
graphics and vision. Although a series of successful portrait image toonification models built …

3DAvatarGAN: Bridging domains for personalized editable avatars

R Abdal, HY Lee, P Zhu, M Chai… - Proceedings of the …, 2023 - openaccess.thecvf.com
Modern 3D-GANs synthesize geometry and texture by training on large-scale datasets with
a consistent structure. Training such models on stylized, artistic data, with often unknown …

Cartoon image processing: a survey

Y Zhao, D Ren, Y Chen, W Jia, R Wang… - International Journal of …, 2022 - Springer
With the rapid development of the cartoon industry, various studies on two-dimensional (2D)
cartoons have been proposed for different application scenarios, such as quality assessment …

AvatarGen: A 3D generative model for animatable human avatars

J Zhang, Z Jiang, D Yang, H Xu, Y Shi, G Song… - … on Computer Vision, 2022 - Springer
Unsupervised generation of clothed virtual humans with various appearances and
animatable poses is important for creating 3D human avatars and other AR/VR applications …

State-of-the-Art in the Architecture, Methods and Applications of StyleGAN

AH Bermano, R Gal, Y Alaluf, R Mokady… - Computer Graphics …, 2022 - Wiley Online Library
Generative Adversarial Networks (GANs) have established themselves as a
prevalent approach to image synthesis. Of these, StyleGAN offers a fascinating case study …

Scenimefy: learning to craft anime scene via semi-supervised image-to-image translation

Y Jiang, L Jiang, S Yang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Automatic high-quality rendering of anime scenes from complex real-world images is of
significant practical value. The challenges of this task lie in the complexity of the scenes, the …