A linear frequency principle model to understand the absence of overfitting in neural networks

Y Zhang, T Luo, Z Ma, ZQJ Xu - Chinese Physics Letters, 2021 - iopscience.iop.org
Abstract
Why heavily parameterized neural networks (NNs) do not overfit the data is an important, long-standing open question. We propose a phenomenological model of NN training to explain this non-overfitting puzzle. Our linear frequency principle (LFP) model accounts for a key dynamical feature of NNs: they learn low frequencies first, irrespective of microscopic details. Theory based on our LFP model shows that low-frequency dominance of target functions is the key condition for the non-overfitting of NNs; this condition is verified by experiments. Furthermore, through an ideal two-layer NN, we unravel how detailed microscopic NN training dynamics statistically give rise to an LFP model with quantitative prediction power.
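To make the phrase "linear frequency principle model" concrete, the following is a schematic sketch of the kind of dynamics the abstract describes, not the paper's exact equations: its decay kernel is derived from the initialization statistics of a two-layer NN, which we do not reproduce here. The idea is that, in Fourier space, each frequency component of the residual decays linearly at a rate γ(ξ) that falls off with |ξ|, so low frequencies are fitted first.

```latex
% Schematic LFP dynamics (illustrative simplification; the paper's kernel
% gamma depends on the activation function and initialization statistics).
\[
  \partial_t \hat{h}(\xi, t) \;=\; -\,\gamma(\xi)\,\hat{h}(\xi, t),
  \qquad \hat{h} \,=\, \widehat{u - f},
\]
\[
  \hat{h}(\xi, t) \;=\; e^{-\gamma(\xi)\, t}\,\hat{h}(\xi, 0),
  \qquad \gamma(\xi)\ \text{decreasing in}\ |\xi|.
\]
```

Under such dynamics, a target dominated by low frequencies is recovered long before high-frequency noise can be fitted, which is the mechanism the paper proposes for non-overfitting.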
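The low-frequencies-first behavior that the LFP model abstracts can be observed directly in a small experiment. The sketch below (not the authors' code; the target function, network width, learning rate, and step counts are all illustrative choices) trains a two-layer tanh network by plain gradient descent on a 1D target with one low-frequency and one high-frequency component, and tracks the relative error at each frequency via the discrete Fourier transform. The expected qualitative outcome is that the k=1 error decays well before the k=5 error.

```python
# Minimal sketch of the frequency principle: a two-layer tanh network
# u(x) = sum_k a_k * tanh(w_k * x + b_k), trained by gradient descent,
# fits low frequencies of the target before high ones.
import numpy as np

rng = np.random.default_rng(0)

# Target on [0, 2*pi): low-frequency (k=1) plus high-frequency (k=5) mode.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(x) + 0.5 * np.sin(5.0 * x)

# Randomly initialized two-layer network with m hidden units.
m = 200
a = rng.normal(0.0, 0.1, m)
w = rng.normal(0.0, 1.0, m)
b = rng.normal(0.0, 1.0, m)

lr = 0.05
for step in range(20001):
    z = np.tanh(np.outer(x, w) + b)      # (n, m) hidden activations
    u = z @ a                            # network output
    r = u - f                            # residual
    s = 1.0 - z**2                       # tanh'(w*x + b)
    # Gradients of the loss 0.5 * mean(r^2) for each parameter group.
    ga = z.T @ r / n
    gw = a * ((s * x[:, None]).T @ r) / n
    gb = a * (s.T @ r) / n
    a -= lr * ga
    w -= lr * gw
    b -= lr * gb
    if step % 4000 == 0:
        # Relative residual magnitude at each target frequency.
        R = np.fft.rfft(r) / n
        F = np.fft.rfft(f) / n
        print(f"step {step:6d}  rel err k=1: {abs(R[1]) / abs(F[1]):.3f}"
              f"  rel err k=5: {abs(R[5]) / abs(F[5]):.3f}")
```

The per-frequency error readout is the empirical counterpart of the LFP model's per-frequency decay rate γ(ξ); in the paper this correspondence is made quantitative for an ideal two-layer NN.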