
Replica Symmetry Breaking in Dense Hebbian Neural Networks

Albanese L.; Alessandrelli A.; Barra A.
2022-01-01

Abstract

Understanding the glassy nature of neural networks is pivotal for both theoretical and computational advances in Machine Learning and Theoretical Artificial Intelligence. Focusing on dense associative Hebbian neural networks (i.e. Hopfield networks with polynomial interactions of even degree P > 2), the purpose of this paper is twofold. First, we develop rigorous mathematical approaches to properly address a statistical-mechanical picture of the phenomenon of replica symmetry breaking (RSB) in these networks; then, building on the results obtained via these routes, we inspect the glassiness that they hide. Regarding methodology, we provide two techniques: the former (closer in spirit to mathematical physics) is an adaptation of the transport PDE to this case, while the latter (more probabilistic in nature) is an extension of Guerra's interpolation breakthrough. Beyond the mutual coherence of the results, both at the replica symmetric (RS) and at the one-step replica symmetry breaking levels of description, we prove Gardner's picture (heuristically achieved through the replica trick) and we identify the maximal storage capacity by a ground-state analysis in the Baldi–Venkatesh high-storage regime. In the second part of the paper we investigate the glassy structure of these networks: in contrast to the replica symmetric scenario, RSB actually stabilizes the spin-glass phase. We report striking differences with respect to the standard pairwise Hopfield limit: in particular, it is known that the free energy of the Hopfield neural network (and, in cascade, all its properties) can be expressed as a linear combination of the free energies of a hard spin glass (i.e. the Sherrington–Kirkpatrick model) and a soft spin glass (the Gaussian or "spherical" model).
While this continues to hold at the first step of RSB for the Hopfield model, it is no longer true when interactions are more than pairwise (whatever the level of description, RS or RSB): for dense networks, only the free energy of the hard spin glass survives. Since the Sherrington–Kirkpatrick spin glass is full-RSB (i.e. Parisi theory holds for that model), while the Gaussian spin glass is replica symmetric, these different representation theorems prove a profound diversity in the underlying glassiness of associative neural networks.


Use this identifier to cite or link to this document: https://hdl.handle.net/11587/488908
Citations
  • Scopus: 9
  • Web of Science: 7