Learning representations that support extrapolation

T Webb, Z Dulberg, S Frankland, et al. International Conference on Machine Learning, 2020. proceedings.mlr.press
Abstract
Extrapolation, the ability to make inferences that go beyond the scope of one's experiences, is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, temporal context normalization, that encourages representations that emphasize the relations between objects. We find that this technique enables a significant improvement in the ability to extrapolate, considerably outperforming a number of competitive techniques.
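The core idea of temporal context normalization, as the abstract describes it, is to normalize each object's embedding relative to the other objects in its temporal context, so that downstream layers see relations between objects rather than their absolute feature values. A minimal sketch of this kind of normalization is below; the shapes, the function name, and the omission of any learned scale and shift parameters are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def temporal_context_normalization(z, eps=1e-8):
    """Z-score each feature across the temporal (context) dimension.

    z   : array of shape (T, D), a sequence of T per-object embeddings.
    eps : small constant for numerical stability.

    Each of the D features is normalized using the mean and variance
    computed over the T items of the sequence, so every object is
    represented relative to the other objects in its context. This is
    an illustrative sketch; the published method may additionally learn
    per-feature gain and shift parameters.
    """
    mu = z.mean(axis=0, keepdims=True)            # per-feature mean over time
    sigma = np.sqrt(z.var(axis=0, keepdims=True) + eps)
    return (z - mu) / sigma

# Example: a sequence of 4 object embeddings with 3 features each.
seq = np.arange(12, dtype=float).reshape(4, 3)
normed = temporal_context_normalization(seq)
```

After normalization, each feature column has approximately zero mean and unit variance within the sequence, which removes absolute offsets shared by all objects (a plausible reason such a representation would transfer better outside the training domain).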