D Bishara, Y Xie, WK Liu, S Li - Archives of computational methods in …, 2023 - Springer
Multiscale simulation and homogenization of materials have become major computational technologies and engineering tools in material modeling and material …
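For reference, the first-order homogenization relations underlying much of this literature can be stated compactly; this is standard notation for a representative volume element $V$, not formulas taken from the review itself:

$$\bar{\boldsymbol{\sigma}} = \frac{1}{|V|}\int_V \boldsymbol{\sigma}(\mathbf{x})\,\mathrm{d}V,\qquad \bar{\boldsymbol{\varepsilon}} = \frac{1}{|V|}\int_V \boldsymbol{\varepsilon}(\mathbf{x})\,\mathrm{d}V,\qquad \bar{\boldsymbol{\sigma}} = \mathbb{C}^{\mathrm{eff}} : \bar{\boldsymbol{\varepsilon}},$$

where the effective stiffness $\mathbb{C}^{\mathrm{eff}}$ links the volume-averaged microscale stress and strain; machine-learning surrogates in this line of work typically learn this micro-to-macro map.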
L Wu, J Li, Y Wang, Q Meng, T Qin… - Advances in …, 2021 - proceedings.neurips.cc
Dropout is a powerful and widely used technique for regularizing the training of deep neural networks. Though it is effective and performs well, the randomness introduced by dropout …
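The truncated sentence points at the train-inference inconsistency caused by dropout's randomness; this NeurIPS 2021 paper (R-Drop) addresses it by running each input through the network twice and penalizing the divergence between the two dropout-perturbed predictions. A minimal PyTorch-style sketch of that idea; the function name, alpha weighting, and classification setup are illustrative assumptions, not the authors' code:

import torch
import torch.nn.functional as F

def r_drop_loss(model, x, y, alpha=1.0):
    # Two forward passes in train mode: dropout samples a different mask each time.
    logits1, logits2 = model(x), model(x)
    # Standard task loss, averaged over both passes.
    ce = 0.5 * (F.cross_entropy(logits1, y) + F.cross_entropy(logits2, y))
    # Symmetric KL divergence between the two predictive distributions.
    logp1, logp2 = F.log_softmax(logits1, dim=-1), F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (F.kl_div(logp1, logp2.exp(), reduction="batchmean")
                + F.kl_div(logp2, logp1.exp(), reduction="batchmean"))
    return ce + alpha * kl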
Multi-horizon forecasting often contains a complex mix of inputs, including static (i.e., time-invariant) covariates, known future inputs, and other exogenous time series that are only …
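The three input categories the snippet names can be made concrete as tensors; the shapes and feature choices below are hypothetical, just to fix the interface a multi-horizon forecaster consumes:

import torch

B, T_past, T_future = 32, 168, 24
static = torch.randn(B, 4)                            # time-invariant covariates, e.g. location or product features
known_future = torch.randn(B, T_past + T_future, 3)   # inputs known in advance over the whole window, e.g. calendar features
observed = torch.randn(B, T_past, 8)                  # exogenous series observed only up to the forecast start
# A multi-horizon model maps (static, known_future, observed) to a (B, T_future)
# forecast, optionally with quantile outputs for uncertainty.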
DP Kingma, M Welling - Foundations and Trends® in …, 2019 - nowpublishers.com
An Introduction to Variational Autoencoders.
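The monograph's central quantity is the evidence lower bound (ELBO): for an encoder $q_\phi(z \mid x)$ and decoder $p_\theta(x \mid z)$,

$$\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z\mid x)}\big[\log p_\theta(x\mid z)\big] \;-\; D_{\mathrm{KL}}\big(q_\phi(z\mid x)\,\big\|\,p(z)\big),$$

which is maximized jointly over $\theta$ and $\phi$, with the expectation made differentiable through the reparameterization $z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon$, $\epsilon \sim \mathcal{N}(0, I)$.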
S Bai, JZ Kolter, V Koltun - Advances in neural information …, 2019 - proceedings.neurips.cc
We present a new approach to modeling sequential data: the deep equilibrium model (DEQ). Motivated by the observation that the hidden layers of many existing deep sequence …
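A DEQ layer returns the fixed point z* = f(z*; x) of a single transformation f instead of stacking many layers, and the backward pass differentiates through the fixed point implicitly rather than through stored iterations. Below is a naive fixed-point-iteration sketch of the forward solve; the paper itself uses a quasi-Newton (Broyden) root finder, and f, the tolerance, and the iteration budget here are illustrative:

import torch

def deq_forward(f, x, z0, max_iter=50, tol=1e-4):
    """Solve z = f(z, x) by naive fixed-point iteration."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if torch.norm(z_next - z) <= tol * (torch.norm(z) + 1e-9):
            return z_next
        z = z_next
    return z  # may not have converged; a proper root-finding solver is used in practice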
When translating natural language questions into SQL queries over a database, contemporary semantic parsing models struggle to generalize to unseen …
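To make the task concrete, here is a hypothetical question/SQL pair of the kind such parsers must produce; the schema and question are invented for illustration, and the generalization challenge is that test questions compose clauses (joins, filters, aggregates) in ways unseen during training:

question = "How many concerts were held in stadiums with capacity over 50000?"
sql = (
    "SELECT count(*) FROM concert "
    "JOIN stadium ON concert.stadium_id = stadium.stadium_id "
    "WHERE stadium.capacity > 50000"
)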
G Ghiasi, TY Lin, QV Le - Advances in neural information …, 2018 - proceedings.neurips.cc
Deep neural networks often work well when they are over-parameterized and trained with a massive amount of noise and regularization, such as weight decay and dropout. Although …
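Unlike standard dropout, which zeroes independent units, the method proposed here (DropBlock) zeroes contiguous square regions of a feature map, since nearby activations are spatially correlated. A minimal NumPy sketch; in the paper the seed probability gamma is derived from a target keep probability and the block size, which is simplified away here:

import numpy as np
from scipy.ndimage import maximum_filter

def dropblock(feat, block_size=3, gamma=0.05, rng=None):
    """feat: (H, W) feature map; zero out block_size x block_size regions."""
    rng = rng or np.random.default_rng()
    seeds = rng.random(feat.shape) < gamma  # sample block centers
    # Expand each seed into a square block, then invert to get the keep mask.
    keep = maximum_filter(seeds.astype(float), size=block_size) == 0
    # Rescale surviving activations so the expected magnitude is preserved.
    return feat * keep * (keep.size / max(keep.sum(), 1))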
Visual understanding goes well beyond object recognition. With one glance at an image, we can effortlessly imagine the world beyond the pixels: for instance, we can infer people's …
Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language …
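For language models, the min-max objective in this snippet is usually instantiated by perturbing continuous word embeddings rather than discrete tokens. A one-step (FGSM-style) sketch of the inner maximization; the inputs_embeds keyword follows the Hugging Face convention and, like epsilon and the loss wiring, is an assumption for illustration (methods such as FreeLB take multiple ascent steps):

import torch

def adversarial_loss(model, embeds, y, loss_fn, epsilon=1e-2):
    # Inner maximization: find a small embedding perturbation that increases the loss.
    embeds = embeds.detach().requires_grad_(True)
    loss = loss_fn(model(inputs_embeds=embeds), y)
    grad, = torch.autograd.grad(loss, embeds)
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    # Outer minimization trains on the perturbed (label-preserving, for small epsilon) input.
    return loss_fn(model(inputs_embeds=embeds + delta), y)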