Discovering Identity Specific Activation Patterns in Deep Descriptors for Template Based Face Recognition / Ferrari, Claudio; Berretti, Stefano; Del Bimbo, Alberto. - ELECTRONIC. - (2019), pp. 1-5. (Paper presented at the IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), held in Lille, France, 14-18 May 2019) [10.1109/FG.2019.8756604].

Discovering Identity Specific Activation Patterns in Deep Descriptors for Template Based Face Recognition

Claudio Ferrari; Stefano Berretti; Alberto Del Bimbo
2019

Abstract

The majority of recent face recognition systems are based on Deep Convolutional Neural Networks (DCNNs). These networks are trained on massive amounts of face images so as to learn a compact representation (deep descriptor) aimed at capturing identity information. Recognition is then performed by computing some similarity (or distance) measure between descriptors. In practice, however, descriptors also encode other intra-class variabilities such as pose and expression. This well-known problem is usually addressed by designing specific loss functions or metric learning modules such that the learned descriptors maximize the inter-class (identity) distances and minimize the intra-class differences in the feature space. We tackle this problem from a different perspective by observing that descriptors associated with images of the same subject, on average, share similar patterns in the highest activation units. We validate this assumption by showing that improved accuracy can be obtained in a template-based recognition scenario by retaining the descriptor bins with the highest average activation, and setting all the others to zero. These activation patterns are also employed to build identity-representative binary masks that are effectively used in place of the descriptors to match templates. We investigate this strategy by performing experiments on the IJB-A dataset, and show that it can significantly boost the recognition accuracy.
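The core idea in the abstract can be sketched in a few lines: average a template's descriptors, keep only the bins with the highest mean activation, and derive a binary mask from them. The sketch below is illustrative only; the function names, the fixed top-k selection, and the use of cosine similarity between masks are assumptions for demonstration, not the authors' exact procedure.

```python
import math

def topk_mask(template, k):
    """Given a template (list of descriptors, each a list of floats),
    average the descriptors and keep the k bins with the highest mean
    activation; all other bins are set to zero.  Returns the binary
    mask and the sparsified mean descriptor."""
    d = len(template[0])
    mean_desc = [sum(img[i] for img in template) / len(template) for i in range(d)]
    # Indices of the k bins with the highest average activation.
    top = set(sorted(range(d), key=lambda i: mean_desc[i], reverse=True)[:k])
    mask = [1.0 if i in top else 0.0 for i in range(d)]
    sparse = [m * v for m, v in zip(mask, mean_desc)]
    return mask, sparse

def mask_similarity(m1, m2):
    """Cosine similarity between two binary activation masks (one
    possible reading of matching masks 'in place of the descriptors')."""
    dot = sum(a * b for a, b in zip(m1, m2))
    n1 = math.sqrt(sum(a * a for a in m1))
    n2 = math.sqrt(sum(b * b for b in m2))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

Two templates of the same subject would, under the paper's observation, share most of their top-activation bins and thus yield a mask similarity close to 1.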
ISBN: 978-1-7281-0089-0
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11381/2900780
Citations
  • PMC: N/A
  • Scopus: 4
  • Web of Science: 2