Kernel shape renormalization explains output-output correlations in finite Bayesian one-hidden-layer networks

Baglioni P.; Burioni R.; Rotondo P.
2025-01-01

Abstract

Finite-width one-hidden-layer networks with multiple neurons in the readout layer display nontrivial output-output correlations that vanish in the lazy-training infinite-width limit. In this manuscript we leverage recent progress in the proportional limit of Bayesian deep learning (that is, the limit where the size of the training set P and the width of the hidden layers N are taken to infinity while keeping their ratio alpha = P/N finite) to rationalize this empirical evidence. In particular, we show that output-output correlations in finite fully connected networks are accounted for by a kernel shape renormalization of the infinite-width NNGP kernel, which naturally arises in the proportional limit. We perform accurate numerical experiments both to assess the predictive power of the Bayesian framework in terms of generalization and to quantify output-output correlations in finite-width networks. By quantitatively matching our predictions with the observed correlations, we provide additional evidence that kernel shape renormalization is instrumental in explaining the phenomenology observed in finite Bayesian one-hidden-layer networks.
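As a rough illustration of the mechanism described in the abstract, the following is a minimal numerical sketch, assuming a one-hidden-layer ReLU network; the matrix Q below is an arbitrary positive-definite placeholder, not the renormalization matrix derived in the paper. In the infinite-width NNGP prior the readout neurons are independent Gaussian processes (block-diagonal joint covariance), whereas a matrix-valued "shape" renormalization of the kernel couples distinct readout neurons and hence produces output-output correlations.

```python
import numpy as np

def nngp_kernel_relu(X, sigma_w=1.0, sigma_b=0.0):
    """Infinite-width NNGP kernel of a one-hidden-layer ReLU network
    (arc-cosine kernel of order 1)."""
    # Pre-activation covariance of the hidden layer
    K0 = sigma_b**2 + sigma_w**2 * (X @ X.T) / X.shape[1]
    d = np.sqrt(np.diag(K0))
    cos_t = np.clip(K0 / np.outer(d, d), -1.0, 1.0)
    theta = np.arccos(cos_t)
    # Post-activation covariance for ReLU
    K1 = np.outer(d, d) * (np.sin(theta) + (np.pi - theta) * cos_t) / (2 * np.pi)
    return sigma_b**2 + sigma_w**2 * K1

rng = np.random.default_rng(0)
P, D, C = 6, 20, 3            # training points, input dimension, readout neurons
X = rng.standard_normal((P, D))

K = nngp_kernel_relu(X)       # P x P NNGP kernel

# Infinite-width limit: the C readout neurons are independent GPs, so the
# joint prior covariance over all (output, input) pairs is block diagonal.
cov_inf = np.kron(np.eye(C), K)          # (C*P) x (C*P)

# Hypothetical illustration of a "shape" renormalization: a C x C symmetric
# positive-definite matrix Q (arbitrary here, NOT the paper's derived quantity)
# multiplying the kernel couples distinct readout neurons.
A = rng.standard_normal((C, C)) / np.sqrt(C)
Q = np.eye(C) + 0.3 * (A @ A.T)
cov_ren = np.kron(Q, K)

print("max off-diagonal output block (infinite width):",
      np.abs(cov_inf[:P, P:2 * P]).max())   # exactly zero
print("max off-diagonal output block (renormalized):",
      np.abs(cov_ren[:P, P:2 * P]).max())   # nonzero -> output-output correlations
```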
2025
Kernel shape renormalization explains output-output correlations in finite Bayesian one-hidden-layer networks / Baglioni, P.; Giambagli, L.; Vezzani, A.; Burioni, R.; Rotondo, P.; Pacelli, R. - In: PHYSICAL REVIEW E. - ISSN 2470-0045. - 111:6-2 (2025). [10.1103/9pkk-d4bm]
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11381/3031160
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0