HyperReenact: one-shot reenactment via jointly learning to refine and retarget faces

S Bounareli, C Tzelepis, V Argyriou… - Proceedings of the …, 2023 - openaccess.thecvf.com
In this paper, we present our method for neural face reenactment, called HyperReenact, that
aims to generate realistic talking head images of a source identity, driven by a target facial …

Attribute-preserving face dataset anonymization via latent code optimization

S Barattin, C Tzelepis, I Patras… - Proceedings of the …, 2023 - openaccess.thecvf.com
This work addresses the problem of anonymizing the identity of faces in a dataset of images,
such that the privacy of those depicted is not violated, while at the same time the dataset is …

Finding directions in GAN's latent space for neural face reenactment

S Bounareli, V Argyriou, G Tzimiropoulos - arXiv preprint arXiv …, 2022 - arxiv.org
This paper is on face/head reenactment where the goal is to transfer the facial pose (3D
head orientation and expression) of a target face to a source face. Previous methods focus …

StyleMask: Disentangling the style space of StyleGAN2 for neural face reenactment

S Bounareli, C Tzelepis, V Argyriou… - 2023 IEEE 17th …, 2023 - ieeexplore.ieee.org
In this paper we address the problem of neural face reenactment, where, given a pair of a
source and a target facial image, we need to transfer the target's pose (defined as the head …

One-Shot Neural Face Reenactment via Finding Directions in GAN's Latent Space

S Bounareli, C Tzelepis, V Argyriou, I Patras… - International Journal of …, 2024 - Springer
In this paper, we present our framework for neural face/head reenactment whose goal is to
transfer the 3D head orientation and expression of a target face to a source face. Previous …

Bridging CLIP and StyleGAN through latent alignment for image editing

W Zheng, Q Li, X Guo, P Wan, Z Wang - arXiv preprint arXiv:2210.04506, 2022 - arxiv.org
Text-driven image manipulation has developed rapidly since the vision-language model (CLIP) was proposed. Previous work has adopted CLIP to design a text-image consistency-based …

Deep Curvilinear Editing: Commutative and Nonlinear Image Manipulation for Pretrained Deep Generative Model

T Aoshima, T Matsubara - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Semantic editing of images is the fundamental goal of computer vision. Although deep
learning methods, such as generative adversarial networks (GANs), are capable of …

clip2latent: Text driven sampling of a pre-trained StyleGAN using denoising diffusion and CLIP

JNM Pinkney, C Li - arXiv preprint arXiv:2210.02347, 2022 - arxiv.org
We introduce a new method to efficiently create text-to-image models from a pre-trained
CLIP and StyleGAN. It enables text driven sampling with an existing generative model …

DiffusionAct: Controllable Diffusion Autoencoder for One-shot Face Reenactment

S Bounareli, C Tzelepis, V Argyriou, I Patras… - arXiv preprint arXiv …, 2024 - arxiv.org
Video-driven neural face reenactment aims to synthesize realistic facial images that
successfully preserve the identity and appearance of a source face, while transferring the …

"Just To See You Smile": SMILEY, a Voice-Guided GUY GAN

Q Yang, C Tzelepis, S Nikolenko, I Patras… - Proceedings of the …, 2023 - dl.acm.org
In this technical demonstration, we present SMILEY, a voice-guided virtual assistant. The system utilizes ContraCLIP, a deep neural architecture, to manipulate facial attributes using …