Kernel shape renormalization explains output-output correlations in finite Bayesian one-hidden-layer networks

Finite-width one-hidden-layer networks with multiple neurons in the readout layer display nontrivial output-output correlations that vanish in the lazy-training infinite-width limit. In this manuscript we leverage recent progress in the proportional limit of B...
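To make the claim concrete, here is a minimal numerical sketch (not the paper's Bayesian-posterior calculation) of how outputs of a one-hidden-layer network become statistically dependent at finite width simply because they share the same hidden features, and how this dependence fades as the width grows toward the Gaussian-process limit. The ReLU nonlinearity, the single fixed input, and the choice of measuring the correlation of squared outputs are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def output_samples(width, n_samples, n_outputs=2):
    """Prior samples of a one-hidden-layer ReLU network's outputs at one fixed input.

    For a single input x with ||x||^2 = d and Gaussian weights scaled by 1/sqrt(d),
    the pre-activations w_i . x are i.i.d. standard Gaussians, so we sample them
    directly instead of sampling the input-layer weights.
    """
    z = rng.standard_normal((n_samples, width))             # pre-activations
    h = np.maximum(z, 0.0)                                  # hidden activations (ReLU)
    a = rng.standard_normal((n_samples, n_outputs, width))  # readout weights
    return np.einsum("skn,sn->sk", a, h) / np.sqrt(width)   # network outputs

for width in (2, 8, 32, 128):
    f = output_samples(width, n_samples=100_000)
    # The linear correlation between the two outputs vanishes by symmetry of the
    # readout prior; the finite-width coupling through the shared hidden layer
    # shows up in higher-order statistics, e.g. the correlation of squared outputs,
    # and decays roughly like 1/width.
    c = np.corrcoef(f[:, 0] ** 2, f[:, 1] ** 2)[0, 1]
    print(f"width={width:4d}  corr(f1^2, f2^2) ~ {c:+.3f}")
```

Running the sketch, the squared-output correlation is clearly positive at small widths and shrinks toward zero as the width increases, which is the finite-width output-output coupling (here at the level of the prior) that the manuscript analyzes in the Bayesian, proportional-limit setting.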