Although neural networks have seen tremendous success as predictive models in a variety of domains, they can be overly confident in their predictions on out-of-distribution (OOD) …
In large deep neural networks that seem to perform surprisingly well on many tasks, we also observe a few failures related to accuracy, social biases, and alignment with human values …
Discrete variational auto-encoders (VAEs) are able to represent semantic latent spaces in generative learning. In many real-life settings, the discrete latent space consists of high …
Softmax is a ubiquitous ingredient of modern machine learning algorithms. It maps an input vector onto a probability simplex and reweights the input by concentrating the probability …
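The mapping this snippet describes can be sketched in a few lines of NumPy. This is a minimal illustration of the standard softmax, not code from the paper itself:

```python
import numpy as np

def softmax(x):
    # Subtract the max for numerical stability; the result is unchanged
    # because softmax is invariant to adding a constant to all inputs.
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
p = softmax(logits)
# p lies on the probability simplex: entries are nonnegative and sum to 1,
# with the largest input receiving the largest share of the mass.
```

Note how the exponential "concentrates" probability: the gap between the largest and smallest outputs is wider than the gap between the corresponding inputs would suggest under a linear normalization.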
Neural network models have become ubiquitous in the machine learning literature. These models are compositions of differentiable building blocks that compute dense …
This paper focuses on continual meta-learning, where few-shot tasks are sequentially available and sampled from a non-stationary distribution. Motivated by this challenging …
Q Lv, L Geng, Z Cao, M Cao, S Li, W Li, G Fu - openreview.net
Softmax with the cross entropy loss is the standard configuration for current neural text classification models. The gold score for a target class is supposed to be 1, but it is never …
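The observation in this snippet is easy to verify numerically: because softmax of finite logits is always strictly less than 1, the cross-entropy loss for the gold class is always strictly positive. The sketch below is my own illustration of this standard configuration, not the paper's code:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax.
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(logits, target):
    # Loss is -log p_target; it would reach 0 only if p_target == 1,
    # which softmax can never produce for finite logits.
    p = softmax(logits)
    return -np.log(p[target])

logits = np.array([10.0, 0.0, 0.0])  # strongly favors class 0
loss = cross_entropy(logits, target=0)
# loss is tiny but strictly positive: softmax([10, 0, 0])[0] < 1
```

Driving the loss toward zero therefore requires pushing the gold logit toward infinity relative to the others, which is one source of the overconfidence issues these abstracts discuss.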