PLM-IPE: A Pixel-Landmark Mutual Enhanced Framework for Implicit Preference Estimation

Ferrari C.;
2021-01-01

Abstract

In this paper, we are interested in understanding how customers perceive fashion recommendations, in particular when observing a proposed combination of garments that composes an outfit. Automatically understanding how a suggested item is perceived, without any kind of active engagement, is an essential building block for interactive applications. We propose a pixel-landmark mutual enhanced framework for implicit preference estimation, named PLM-IPE, which infers the user's implicit preferences from visual cues, without any active or conscious engagement. PLM-IPE consists of three key modules: a pixel-based estimator, a landmark-based estimator, and a mutual-learning-based optimization. The first two modules capture the user's implicit reaction at the pixel level and at the landmark level, respectively, while the third transfers knowledge between the two parallel estimators. For evaluation, we collected a real-world dataset, named SentiGarment, which contains 3,345 facial reaction videos paired with suggested outfits and human-labeled reaction scores. Extensive experiments show the superiority of our model over state-of-the-art approaches.
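The knowledge transfer between the two parallel estimators can be sketched as a mutual-learning objective, in which each estimator minimizes its own task loss plus a mimicry term pulling its prediction toward the peer's. The abstract does not specify the loss functions; the MSE task loss, MSE mimicry term, and weight `lam` below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def mutual_loss(pred_pixel, pred_landmark, target, lam=0.5):
    """Illustrative mutual-learning objective for two parallel estimators.

    Each estimator is trained on its own task loss (here, MSE against the
    human-labeled reaction score) plus a shared mimicry term that keeps the
    two estimators' predictions consistent, so knowledge flows between them.
    All loss choices here are assumptions made for illustration.
    """
    task_pixel = np.mean((pred_pixel - target) ** 2)
    task_landmark = np.mean((pred_landmark - target) ** 2)
    # Knowledge-transfer term: penalize disagreement between the estimators.
    mimicry = np.mean((pred_pixel - pred_landmark) ** 2)
    loss_pixel = task_pixel + lam * mimicry
    loss_landmark = task_landmark + lam * mimicry
    return loss_pixel, loss_landmark
```

In this sketch, the mimicry weight `lam` controls how strongly each estimator is regularized toward its peer; with `lam=0` the two branches train independently.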
2021
9781450386074
PLM-IPE: A Pixel-Landmark Mutual Enhanced Framework for Implicit Preference Estimation / Becattini, F.; Song, X.; Baecchi, C.; Fang, S.-T.; Ferrari, C.; Nie, L.; Del Bimbo, A. (2021), pp. 42.1-42.5. (Paper presented at the 3rd ACM International Conference on Multimedia in Asia, MMAsia 2021, held in aus in 2021) [10.1145/3469877.3490621].
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11381/2924108
Citations
  • PMC: n.d.
  • Scopus: 8
  • Web of Science: n.d.