Learning Landmarks Motion from Speech for Speaker-Agnostic 3D Talking Heads Generation

Nocentini, F.; Ferrari, C.; Berretti, S.
2023

Abstract

This paper presents a novel approach for generating 3D talking heads from raw audio inputs. Our method is grounded in the idea that speech-related movements can be comprehensively and efficiently described by the motion of a few control points located on the movable parts of the face, i.e., landmarks. The underlying musculoskeletal structure then allows us to learn how their motion influences the geometric deformations of the whole face. To this end, the proposed method employs two distinct models: the first learns to generate the motion of a sparse set of landmarks from the given audio; the second expands this landmark motion into a dense motion field, which is used to animate a given 3D mesh in a neutral state. Additionally, we introduce a novel loss function, named Cosine Loss, which minimizes the angle between the generated motion vectors and the ground-truth ones. Using landmarks for 3D talking head generation offers several advantages, such as consistency, reliability, and obviating the need for manual annotation. Our approach is designed to be identity-agnostic, enabling high-quality facial animation for any user without additional data or training. Code and models are available at: S2L+S2D.
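
To make the two-stage pipeline and the Cosine Loss concrete, the sketch below gives a minimal PyTorch rendition. It is illustrative only: the module names Speech2Landmarks (S2L) and Landmarks2Dense (S2D), the network architectures, the 68-landmark and 5023-vertex shapes, and the audio feature dimension are all assumptions, not the authors' released code; only the overall flow (audio features → sparse landmark motion → dense per-vertex motion field → animated neutral mesh) and the 1 − cos θ objective follow the abstract.

```python
# Hypothetical sketch of the S2L + S2D pipeline and the Cosine Loss from the
# abstract. Module names, shapes, and architectures are illustrative
# assumptions, not the authors' released implementation.
import torch
import torch.nn as nn


def cosine_loss(pred_motion: torch.Tensor, gt_motion: torch.Tensor) -> torch.Tensor:
    """Penalize the angle between predicted and ground-truth motion vectors.

    Both tensors hold displacement vectors with shape (..., 3). The loss is
    1 - cos(theta), averaged over all vectors, so it is 0 when every predicted
    vector points in the same direction as its target.
    """
    cos = nn.functional.cosine_similarity(pred_motion, gt_motion, dim=-1, eps=1e-8)
    return (1.0 - cos).mean()


class Speech2Landmarks(nn.Module):
    """S2L (assumed interface): maps an audio feature sequence to per-frame
    displacements of a sparse set of facial landmarks."""

    def __init__(self, audio_dim: int = 768, n_landmarks: int = 68, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(audio_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_landmarks * 3)
        self.n_landmarks = n_landmarks

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, frames, audio_dim) -> (batch, frames, n_landmarks, 3)
        h, _ = self.rnn(audio_feats)
        out = self.head(h)
        return out.view(audio_feats.size(0), audio_feats.size(1), self.n_landmarks, 3)


class Landmarks2Dense(nn.Module):
    """S2D (assumed interface): expands sparse landmark motion into a dense
    per-vertex motion field applied to a neutral mesh."""

    def __init__(self, n_landmarks: int = 68, n_vertices: int = 5023, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_landmarks * 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_vertices * 3),
        )
        self.n_vertices = n_vertices

    def forward(self, landmark_motion: torch.Tensor) -> torch.Tensor:
        # landmark_motion: (batch, frames, n_landmarks, 3) -> dense field
        b, f = landmark_motion.shape[:2]
        dense = self.net(landmark_motion.flatten(2))
        return dense.view(b, f, self.n_vertices, 3)


# Inference: animate a neutral mesh (identity-agnostic, no extra training).
s2l, s2d = Speech2Landmarks(), Landmarks2Dense()
audio_feats = torch.randn(1, 30, 768)           # e.g. 30 frames of audio features
neutral_mesh = torch.randn(1, 5023, 3)          # neutral 3D face, (vertices, xyz)
dense_motion = s2d(s2l(audio_feats))            # per-vertex displacement field
animated = neutral_mesh.unsqueeze(1) + dense_motion  # (1, 30, 5023, 3)

# Training (sketch): angular agreement between predicted and target motion.
loss = cosine_loss(dense_motion, torch.randn_like(dense_motion))  # dummy target
```

Because 1 − cos θ ignores vector magnitude, such an angular term would presumably be paired with a standard positional (e.g., L2) loss rather than used on its own.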
ISBN: 9783031431470; 9783031431487
Learning Landmarks Motion from Speech for Speaker-Agnostic 3D Talking Heads Generation / Nocentini, F.; Ferrari, C.; Berretti, S. - 14233:(2023), pp. 340-351. (Paper presented at the 22nd International Conference on Image Analysis and Processing, ICIAP 2023, held in Italy in 2023) [10.1007/978-3-031-43148-7_29].
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11381/2988115
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 1