Supervised contrastive learning with identity-label embeddings for facial action unit recognition

Lian, T., Adama, D. (ORCID: https://orcid.org/0000-0002-2650-857X), Machado, P. (ORCID: https://orcid.org/0000-0003-1760-3871) and Vinkemeier, D. (ORCID: https://orcid.org/0000-0001-8767-4355), 2023. Supervised contrastive learning with identity-label embeddings for facial action unit recognition. In: 34th British Machine Vision Conference proceedings. British Machine Vision Association.

Full text: 1839504_Vinkemeier.pdf - Published version (855kB)

Abstract

Facial expression analysis is a crucial area of research for understanding human emotions. One important approach is the automatic detection of facial action units (AUs), which are small, visible changes in facial appearance. Despite extensive research, automatic AU detection remains a challenging computer vision problem. This paper addresses two central difficulties: the first is the inherent variation in facial behaviour and appearance across individuals, which leads current AU recognition models to overfit subjects in the training set and generalize poorly to unseen subjects; the second is representing the complex interactions among different AUs. We propose a novel two-stage training framework, called CL-ILE, to address these long-standing challenges. In the first stage of CL-ILE, we introduce identity-label embeddings (ILEs) to train an ID feature encoder capable of generating person-specific feature embeddings for input face images. In the second stage, we present a data-driven method that implicitly models the relationships among AUs using a contrastive loss in a supervised setting, while suppressing the person-specific features generated in the first stage to enhance generalizability. Because the ID feature encoder and ILEs are discarded after training, CL-ILE is lighter and more readily applicable to real-world applications than models built on graph-based structures. We evaluate our approach on two widely used AU recognition datasets, BP4D and DISFA, demonstrating that CL-ILE achieves state-of-the-art performance in terms of F1 score.
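The abstract's second stage relies on a supervised contrastive loss, i.e. a loss in which embeddings sharing a label are pulled together and embeddings with different labels are pushed apart. The paper's exact formulation is not given here, so the following is only a minimal, plain-Python sketch of the standard supervised contrastive loss (in the style of Khosla et al.), with a generic `supcon_loss` function and a scalar temperature `tau` chosen purely for illustration:

```python
import math

def _dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def _normalize(v):
    """Scale a vector to unit length (contrastive losses compare directions)."""
    n = math.sqrt(_dot(v, v))
    return [x / n for x in v]

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss over a batch.

    For each anchor i, every other sample with the same label is a
    positive; all remaining samples act as negatives in the softmax
    denominator. Returns the mean of -log p(positive) over all
    (anchor, positive) pairs. `tau` is a temperature hyperparameter.
    """
    z = [_normalize(v) for v in embeddings]
    n = len(z)
    total, count = 0.0, 0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors with no positive pair contribute nothing
        # Denominator: similarities to every other sample in the batch.
        denom = sum(math.exp(_dot(z[i], z[a]) / tau) for a in range(n) if a != i)
        for p in positives:
            total += -math.log(math.exp(_dot(z[i], z[p]) / tau) / denom)
            count += 1
    return total / count
```

As a sanity check, a batch whose same-label embeddings point in similar directions yields a lower loss than one whose same-label embeddings are orthogonal, which is exactly the behaviour the second training stage exploits.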

Item Type: Chapter in book
Creators: Lian, T., Adama, D., Machado, P. and Vinkemeier, D.
Publisher: British Machine Vision Association
Date: 20 November 2023
Identifiers: 1839504 (Other)
Divisions: Schools > School of Science and Technology
Record created by: Jonathan Gallacher
Date Added: 21 Dec 2023 10:11
Last Modified: 21 Dec 2023 10:11
URI: https://irep.ntu.ac.uk/id/eprint/50587
