
GloNets: Globally Connected Neural Networks / Di Cecco, A.; Metta, C.; Fantozzi, M.; Morandin, F.; Parton, M. - ELECTRONIC. - 14641:(2024), pp. 53-64. (Paper presented at the 22nd International Symposium on Intelligent Data Analysis, IDA 2024, held in Stockholm, Sweden, in 2024) [10.1007/978-3-031-58547-0_5].

GloNets: Globally Connected Neural Networks

Di Cecco A.; Metta C.; Fantozzi M.; Morandin F.; Parton M.
2024-01-01

Abstract

Deep learning architectures suffer from depth-related performance degradation, limiting the effective depth of neural networks. Approaches like ResNet mitigate this degradation but do not completely eliminate it. We introduce Global Neural Networks (GloNet), a novel architecture that overcomes depth-related issues and is designed to be superimposed on any model, enhancing its depth without increasing complexity or reducing performance. With GloNet, the network's head uniformly receives information from all parts of the network, regardless of their level of abstraction. This enables GloNet to self-regulate the information flow during training, reducing the influence of less effective deeper layers and allowing stable training irrespective of network depth. This paper details GloNet's design, its theoretical basis, and a comparison with existing similar architectures. Experiments show GloNet's capability to self-regulate and its resilience to depth-related learning challenges, such as performance degradation. Our findings position GloNet as a viable alternative to traditional architectures like ResNets.
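As a rough illustration of the mechanism the abstract describes, and not the authors' implementation, a GloNet-style forward pass can be sketched as follows: every hidden block produces an output of the same width, and the head receives the aggregate of all block outputs rather than only the last one. The `relu`, `affine`, and `glonet_forward` helpers below are hypothetical names introduced for this sketch, which assumes element-wise summation as the aggregation.

```python
# Dependency-free sketch of a GloNet-style forward pass.
# Assumption: all blocks share the same output width, and the head
# receives the element-wise SUM of every block's output, so each
# depth level feeds the head directly.

def relu(v):
    return [x if x > 0.0 else 0.0 for x in v]

def affine(v, w, b):
    # w: weight matrix as a list of rows (one per output unit); b: bias vector.
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi
            for row, bi in zip(w, b)]

def glonet_forward(x, blocks, head_w, head_b):
    h = x
    accum = None
    for w, b in blocks:
        h = relu(affine(h, w, b))
        # Accumulate this block's output into the head's input.
        accum = h[:] if accum is None else [a + hi for a, hi in zip(accum, h)]
    # The head sees the aggregated signal from ALL blocks, not just the last.
    return affine(accum, head_w, head_b)
```

Because every block contributes an additive term to the head's input, training can shrink the contribution of an unhelpful deep block toward zero without blocking the signal from shallower blocks, which is one way to read the abstract's self-regulation claim.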
2024
978-3-031-58547-0
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11381/3002855
Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science (ISI): 0