StyleGAN-T: Unlocking the power of GANs for fast large-scale text-to-image synthesis

A Sauer, T Karras, S Laine… - International Conference on Machine Learning, 2023 - proceedings.mlr.press
Text-to-image synthesis has recently seen significant progress thanks to large pretrained
language models, large-scale training data, and the introduction of scalable model families …

Geometry processing with neural fields

G Yang, S Belongie, B Hariharan… - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Most existing geometry processing algorithms use meshes as the default shape
representation. Manipulating meshes, however, requires one to maintain high quality in the …

Polynomial neural fields for subband decomposition and manipulation

G Yang, S Benaim, V Jampani… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Neural fields have emerged as a new paradigm for representing signals, thanks to their
ability to do so compactly while being easy to optimize. In most applications, however, neural …

The neural process family: Survey, applications and perspectives

S Jha, D Gong, X Wang, RE Turner, L Yao - arXiv preprint arXiv …, 2022 - arxiv.org
The standard approaches to neural network implementation yield powerful function
approximation capabilities but are limited in their abilities to learn meta representations and …

Multilinear operator networks

Y Cheng, GG Chrysos, M Georgopoulos… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite the remarkable capabilities of deep neural networks in image recognition, the
dependence on activation functions remains a largely unexplored area and has yet to be …

Extrapolation and spectral bias of neural nets with Hadamard product: a polynomial net study

Y Wu, Z Zhu, F Liu, G Chrysos… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Neural tangent kernel (NTK) is a powerful tool to analyze training dynamics of neural
networks and their generalization bounds. The study on NTK has been devoted to typical …

Augmenting deep classifiers with polynomial neural networks

GG Chrysos, M Georgopoulos, J Deng… - European Conference on Computer Vision, 2022 - Springer
Deep neural networks have been the driving force behind the success in classification tasks,
e.g., object and audio recognition. Impressive results and generalization have been achieved …

On contrastive representations of stochastic processes

E Mathieu, A Foster, Y Teh - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Learning representations of stochastic processes is an emerging problem in machine
learning with applications from meta-learning to physical object models to time series …

MIGS: Multi-Identity Gaussian Splatting via Tensor Decomposition

A Chatziagapi, GG Chrysos, D Samaras - European Conference on Computer Vision, 2024 - Springer
We introduce MIGS (Multi-Identity Gaussian Splatting), a novel method that learns a
single neural representation for multiple identities, using only monocular videos. Recent 3D …

MI-NeRF: Learning a Single Face NeRF from Multiple Identities

A Chatziagapi, GG Chrysos, D Samaras - arXiv preprint arXiv:2403.19920, 2024 - arxiv.org
In this work, we introduce a method that learns a single dynamic neural radiance field
(NeRF) from monocular talking face videos of multiple identities. NeRFs have shown …