Task structure and nonlinearity jointly determine learned representational geometry

M Alleman, JW Lindsey, S Fusi - arXiv preprint arXiv:2401.13558, 2024 - arxiv.org
The utility of a learned neural representation depends on how well its geometry supports
performance in downstream tasks. This geometry depends on the structure of the inputs, the
structure of the target outputs, and the architecture of the network. By studying the learning
dynamics of networks with one hidden layer, we discovered that the network's activation
function has an unexpectedly strong impact on the representational geometry: Tanh
networks tend to learn representations that reflect the structure of the target outputs, while …
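The experiment the abstract describes can be sketched as a toy simulation (this is an illustrative reconstruction, not the authors' code): train one-hidden-layer networks with tanh vs. ReLU activations on a small classification task, then measure how similar the hidden-layer representational geometry is to the structure of the target outputs via a representational-similarity correlation.

```python
# Illustrative sketch (not the paper's implementation): compare the
# hidden-layer geometry of tanh vs. ReLU one-hidden-layer networks
# against the target-output structure.
import numpy as np

rng = np.random.default_rng(0)

def train(activation, X, Y, hidden=32, lr=0.05, steps=1000):
    """Train a one-hidden-layer network with MSE loss by gradient descent;
    return the hidden-layer activations after training."""
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0, 1 / np.sqrt(n_in), (n_in, hidden))
    W2 = rng.normal(0, 1 / np.sqrt(hidden), (hidden, n_out))
    for _ in range(steps):
        Z = X @ W1
        H = np.tanh(Z) if activation == "tanh" else np.maximum(Z, 0)
        err = H @ W2 - Y                     # prediction error
        dH = err @ W2.T                      # backprop through output weights
        dZ = dH * (1 - H**2) if activation == "tanh" else dH * (Z > 0)
        W2 -= lr * H.T @ err / len(X)
        W1 -= lr * X.T @ dZ / len(X)
    Z = X @ W1
    return np.tanh(Z) if activation == "tanh" else np.maximum(Z, 0)

def rsa(A, B):
    """Correlation between the flattened similarity matrices of A and B,
    a simple proxy for shared representational geometry."""
    return np.corrcoef((A @ A.T).ravel(), (B @ B.T).ravel())[0, 1]

# Toy task: random inputs, one-hot targets from an arbitrary linear rule.
X = rng.standard_normal((64, 8))
Y = np.eye(2)[(X[:, 0] > 0).astype(int)]

for act in ("tanh", "relu"):
    H = train(act, X, Y)
    print(act, "hidden/target RSA:", round(rsa(H, Y), 3))
```

Whether tanh shows a higher hidden/target similarity here depends on the task and hyperparameters; the sketch only demonstrates the kind of measurement the paper's claim is about, not its result.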
