Learning 3DMM Deformation Coefficients for Rendering Realistic Expression Images

Claudio Ferrari; Stefano Berretti; Pietro Pala; Alberto Del Bimbo
2018-01-01

Abstract

Analysis of facial expressions is a task of increasing interest in Computer Vision, with many potential applications. However, collecting expression-labeled images for many subjects is a complicated operation. In this paper, we propose a solution that uses a particular 3D morphable model (3DMM) which, starting from a neutral image of a target subject, is capable of producing a realistic expressive face image of the same subject. This is possible because the 3DMM we use can be effectively and efficiently fit to 2D images, and can then deform itself under the action of deformation parameters that are learned expression-by-expression in a subject-independent manner. Ultimately, applying such deformation parameters to the neutral model of a subject allows the rendering of realistic expressive images of that subject. In the experiments, we demonstrate that these deformation parameters can be learned even from a small set of training data using simple statistical tools; despite this simplicity, we show that very realistic subject-dependent expression renderings can be obtained with our method. Furthermore, robustness in cross-dataset tests is also evidenced.
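
The abstract describes a two-step pipeline: subject-independent deformation coefficients are learned per expression from training fits, then applied to a subject's neutral 3DMM fit to produce an expressive shape for rendering. The sketch below illustrates that idea under stated assumptions: a linear 3DMM whose expressive shape is the neutral shape plus a weighted sum of deformation components, and a simple mean standing in for the "simple statistical tools" mentioned above. All names, shapes, and helper functions are illustrative assumptions, not the authors' actual implementation.

    # Illustrative sketch only; shapes and names are assumptions.
    import numpy as np

    def learn_expression_coeffs(training_coeffs):
        """Subject-independent coefficients for one expression.

        training_coeffs: (S, K) deformation coefficients fitted on S
        training subjects showing the same expression. The mean stands
        in for the paper's "simple statistical tools" (assumption).
        """
        return training_coeffs.mean(axis=0)  # (K,)

    def render_expression(neutral_vertices, deform_basis, expr_coeffs):
        """Deform a subject's neutral 3DMM fit with learned coefficients.

        neutral_vertices: (N, 3) vertices of the neutral fit to the image
        deform_basis:     (K, N, 3) subject-independent deformation basis
        expr_coeffs:      (K,) coefficients learned for one expression
        """
        # Weighted sum of deformation components, added to the neutral shape
        offset = np.tensordot(expr_coeffs, deform_basis, axes=1)  # (N, 3)
        return neutral_vertices + offset  # expressive shape, ready to render

    # Hypothetical usage: given a neutral fit `fit`, basis `D`, and
    # training coefficients `W_smile` for the "smile" expression:
    # smile_shape = render_expression(fit, D, learn_expression_coeffs(W_smile))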
Learning 3DMM Deformation Coefficients for Rendering Realistic Expression Images / Ferrari, Claudio; Berretti, Stefano; Pala, Pietro; Del Bimbo, Alberto. - ELECTRONIC. - (2018), pp. 1-14. (Paper presented at the International Conference on Smart Multimedia, held in Toulon, France, 24-25 August 2018).
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11381/2900777
Citations
  • Scopus 4