Rendering Realistic Subject-Dependent Expression Images by Learning 3DMM Deformation Coefficients

Claudio Ferrari; Stefano Berretti; Pietro Pala; Alberto Del Bimbo
2018-01-01

Abstract

Automatic analysis of facial expressions is attracting increasing interest, thanks to the many potential applications it can enable. However, collecting expression-labeled images for large sets of images or videos is a complicated operation that, in most cases, requires substantial human intervention. In this paper, we propose a solution that, starting from a neutral image of a subject, produces a realistic expressive face image of the same subject. This is made possible by a particular 3D morphable model (3DMM) that can effectively and efficiently fit 2D images, and then deform itself under the action of deformation parameters learned expression-by-expression in a subject-independent manner. Applying these deformation parameters to the neutral model of a subject then allows realistic expressive images of that subject to be rendered. Experiments demonstrate that such deformation parameters can be learned from a small set of training data using simple statistical tools; despite this simplicity, very realistic subject-dependent expression renderings can be obtained. Furthermore, robustness in cross-dataset tests is also demonstrated.
2018
Rendering Realistic Subject-Dependent Expression Images by Learning 3DMM Deformation Coefficients / Ferrari, Claudio; Berretti, Stefano; Pala, Pietro; Del Bimbo, Alberto. - ELECTRONIC. - (2018), pp. 1-16. (Presented at the 9th International Workshop on Human Behavior Understanding - Generating Visual Data of Human Behavior (HBUGEN'18), held in Munich, Germany, on 9 September 2018).
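To make the scheme described in the abstract concrete, the following is a minimal sketch of learning and applying per-expression 3DMM deformation coefficients. It assumes the 3DMM represents a face as a mean shape plus a linear combination of deformation components, so that every fitted face reduces to a coefficient vector; the mean-difference estimator and all names below (learn_expression_deformation, apply_expression, d_happy) are illustrative assumptions, not the paper's actual procedure or API.

    import numpy as np

    def learn_expression_deformation(neutral_coeffs, expressive_coeffs):
        # Subject-independent deformation for one expression, estimated as
        # the mean difference of 3DMM coefficients across training subjects
        # (a simple statistical tool, in the spirit of the abstract).
        return np.mean(expressive_coeffs - neutral_coeffs, axis=0)

    def apply_expression(neutral_subject_coeffs, deformation):
        # Deform a subject's neutral fit toward the target expression.
        return neutral_subject_coeffs + deformation

    k, n_subjects = 50, 20                      # k: number of 3DMM components
    rng = np.random.default_rng(0)              # stand-in data for the demo
    neutral = rng.normal(size=(n_subjects, k))  # fits to neutral images
    happy = neutral + rng.normal(0.5, 0.1, (n_subjects, k))  # fits to "happy"
    d_happy = learn_expression_deformation(neutral, happy)
    new_neutral = rng.normal(size=k)            # neutral fit of a new subject
    new_happy = apply_expression(new_neutral, d_happy)       # ready to render

Under these assumptions, the learned vector d_happy is subject independent: it is estimated once from the training pairs and can then be added to the neutral coefficients of any new subject before the deformed 3D model is rendered back to an image.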
Files associated with this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11381/2900772
Citations
  • PMC: n/a
  • Scopus: 4
  • Web of Science: n/a