
S. Cagnoni and G. Valli, "OSLVQ: a training strategy for optimum-size learning vector quantization classifiers," presented at the IEEE International Conference on Neural Networks, Orlando, FL, 27-29 June 1994, pp. 762-765. [DOI: 10.1109/ICNN.1994.374273]

OSLVQ: a training strategy for optimum-size learning vector quantization classifiers

CAGNONI, Stefano;
1994

Abstract

In this paper we describe OSLVQ (Optimum-Size Learning Vector Quantization), an algorithm for training learning vector quantization (LVQ) classifiers that achieves effective network sizing through a multi-step procedure. In each step of the algorithm the network is first trained with a few iterations of one of the LVQ algorithms. After this partial training, the structure of the network is updated according to the performance achieved in classifying the training set: we add neurons whose weight vectors lie in regions of the pattern space where several misclassified patterns are found, while we remove neurons that are activated by too few training patterns. Neurons are also removed if their presence is redundant, i.e., when, in their absence, other neurons representing the same class would respond to the same patterns. Results obtained on a set of patterns representing phonemes are reported and compared with those achieved by a standard LVQ classifier of similar size.
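The structure-update step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the distance-based winner rule, and the `min_hits`/`err_thresh` thresholds are assumptions, and the redundancy-based pruning mentioned in the abstract is omitted for brevity.

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.05):
    """One LVQ1 iteration: move the winning prototype toward pattern x
    if their labels match, away from it otherwise."""
    i = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    sign = 1.0 if proto_labels[i] == y else -1.0
    prototypes[i] += sign * lr * (x - prototypes[i])
    return i

def oslvq_structure_update(prototypes, proto_labels, X, Y,
                           min_hits=2, err_thresh=2):
    """One structure-update step in the spirit of OSLVQ (a sketch).

    After partial LVQ training, classify the training set with the
    current prototypes, then:
    - remove prototypes activated by fewer than `min_hits` patterns;
    - add a prototype at the centroid of the misclassified patterns
      (of the majority wrong class) falling in a prototype's region.
    The thresholds are illustrative; the paper's redundancy-based
    pruning is not implemented here.
    """
    n = len(prototypes)
    hits = np.zeros(n, dtype=int)
    errors = [[] for _ in range(n)]
    for x, y in zip(X, Y):
        i = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
        hits[i] += 1
        if proto_labels[i] != y:
            errors[i].append((x, y))
    new_protos, new_labels = [], []
    for i in range(n):
        if hits[i] >= min_hits:           # keep sufficiently active neurons
            new_protos.append(prototypes[i])
            new_labels.append(proto_labels[i])
        if len(errors[i]) >= err_thresh:  # grow where errors cluster
            wrong = [yy for _, yy in errors[i]]
            maj = max(set(wrong), key=wrong.count)
            pts = np.array([xx for xx, yy in errors[i] if yy == maj])
            new_protos.append(pts.mean(axis=0))
            new_labels.append(maj)
    return np.array(new_protos), new_labels
```

For example, starting from two class-0 prototypes at (0, 0) and (5, 5) and presenting class-0 patterns near the first and class-1 patterns near the second, the update keeps both prototypes (each wins enough patterns) and grows a third, class-1 prototype near (5, 5), where the misclassified patterns cluster.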
ISBN: 078031901X
Files in this item:

File: 00374273.pdf (not available)
Type: Abstract
License: Creative Commons
Size: 376.04 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/11381/2652072
Citations
  • PMC: n/a
  • Scopus: 8
  • Web of Science (ISI): 2