
Learning 3DMM Deformation Coefficients for Action Unit Detection / Ariano, L.; Ferrari, C.; Berretti, S. - 1366:(2021), pp. 1-14. (Paper presented at the 2nd Symposium on Machine Learning and Metaheuristics Algorithms, and Applications, SoMMA 2020, held in 2020) [10.1007/978-981-16-0419-5_1].

Learning 3DMM Deformation Coefficients for Action Unit Detection

Ferrari C.;
2021-01-01

Abstract

Facial Action Units (AUs) correspond to the contraction or deformation of individual facial muscles or combinations thereof. As such, each AU affects only a small portion of the face, and in many cases the resulting deformations are asymmetric. Analysing AUs in 3D is particularly relevant for the applications it can enable. In this paper, we propose a solution for AU detection that builds on a newly defined 3D Morphable Model (3DMM) of the face. Unlike most 3DMMs in the literature, which mainly model global variations of the face and have difficulty adapting to local and asymmetric deformations, the proposed model is specifically devised to cope with such challenging morphings. During a learning phase, we learn the deformation coefficients that enable the 3DMM to deform to 3D target scans of the same individual in both neutral and expressive states, thus decoupling expression deformations from identity deformations. These deformation coefficients are then used to train an AU classifier. We evaluated the proposed approach in a challenging cross-dataset experiment, where the 3DMM is constructed on one dataset (BU-3DFE) and tested on a different one (Bosphorus). Results show that effective AU detection can be obtained by SVM learning of the deformation coefficients from a small training set.
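The abstract's final step, training an SVM on the per-scan 3DMM deformation coefficients to detect each AU, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the number of deformation components, the training-set size, and the synthetic coefficient values are all assumptions made for demonstration, and scikit-learn's SVC stands in for whatever SVM the paper used.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical setup: each 3D face scan is represented by the vector of
# deformation coefficients recovered when fitting the 3DMM to it.
n_components = 30   # assumed number of 3DMM deformation components
n_train = 40        # a small training set, as in the paper's setting

rng = np.random.default_rng(0)
# Synthetic stand-ins for real deformation coefficients and AU labels.
X_train = rng.normal(size=(n_train, n_components))
y_train = np.array([0, 1] * (n_train // 2))   # AU absent (0) / present (1)

# One binary SVM per Action Unit; here a single AU detector is shown.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)

# Predict AU activation for unseen scans (again, synthetic coefficients).
X_test = rng.normal(size=(5, n_components))
pred = clf.predict(X_test)
```

In a cross-dataset setting such as the one described, the 3DMM (and hence the coefficient space) would be built on BU-3DFE, while the test coefficients would come from fitting that same model to Bosphorus scans.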
2021
978-981-16-0418-8
978-981-16-0419-5
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11381/2905641
Citations
  • Scopus: 0