Transfer learning and fine-tuning for facial expression recognition with class balancing

 

Saved in:
Bibliographic Details
Authors: Ruzicka, Josef; Lara Petitdemange, Adrián
Format: conference paper
Publication Date: 2024
Description: Facial expression recognition benefits from deep learning models because of their ability to automatically extract features. However, these models face three important challenges. First, training tends to take longer than with traditional machine learning models. Second, obtaining and labeling enough data samples can become a heavy burden due to the feature complexity usually involved in these problems. Third, it is also common to face class imbalance. In this paper, we address these challenges by applying transfer learning, oversampling, and fine-tuning to a facial expression recognition use case. Combining transfer learning with the use of a GPU allowed us to complete training for our models in about one hour. Furthermore, we achieved 65.75% accuracy with one of the models. To verify that the models are not biased, we report metrics that are helpful when dealing with imbalanced data, such as precision, recall, F1 score, and loss. (See the illustrative sketch after this record.)
Country: Kérwá
Institution: Universidad de Costa Rica
Repository: Kérwá
Language: English
OAI Identifier: oai:kerwa.ucr.ac.cr:10669/101862
Online Access: https://hdl.handle.net/10669/101862
https://doi.org/10.1109/CLEI64178.2024.10700478
Keywords: transfer learning
facial expression recognition
fine-tuning
oversampling
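
The abstract outlines the pipeline only at a high level. The sketch below is a minimal, illustrative reconstruction of that kind of workflow, not the authors' implementation: the TensorFlow/Keras setup, the MobileNetV2 backbone pretrained on ImageNet, the seven expression classes, the placeholder data, and all hyperparameters are assumptions, since the record does not specify them. It shows random oversampling to balance classes, training a classification head on a frozen backbone (transfer learning), unfreezing the top of the backbone for fine-tuning, and reporting the imbalance-aware metrics named in the abstract (per-class precision, recall, F1).

```python
# Illustrative sketch only (not the paper's code): transfer learning +
# oversampling + fine-tuning for facial expression recognition in Keras.
# Backbone, class count, data shapes and hyperparameters are assumptions.
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report

NUM_CLASSES = 7          # e.g. seven basic expressions (assumption)
IMG_SHAPE = (96, 96, 3)  # backbone input size (assumption)

# --- Placeholder data; replace with a real facial-expression dataset.
# Real pixel inputs should be preprocessed with
# tf.keras.applications.mobilenet_v2.preprocess_input before training.
rng = np.random.default_rng(0)
X_train = rng.random((500, *IMG_SHAPE), dtype=np.float32)
y_train = rng.integers(0, NUM_CLASSES, 500)
X_val = rng.random((100, *IMG_SHAPE), dtype=np.float32)
y_val = rng.integers(0, NUM_CLASSES, 100)

# --- Random oversampling: resample every class (with replacement) up to the
# size of the largest class so the training set is balanced.
counts = np.bincount(y_train, minlength=NUM_CLASSES)
target = counts.max()
idx = np.concatenate([
    rng.choice(np.where(y_train == c)[0], size=target, replace=True)
    for c in range(NUM_CLASSES)
])
X_bal, y_bal = X_train[idx], y_train[idx]

# --- Transfer learning: frozen ImageNet backbone + small classification head.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=IMG_SHAPE, pooling="avg")
base.trainable = False
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_bal, y_bal, validation_data=(X_val, y_val), epochs=3, batch_size=32)

# --- Fine-tuning: unfreeze the top of the backbone and retrain at a low
# learning rate; how many layers to unfreeze is an arbitrary choice here.
base.trainable = True
for layer in base.layers[:-20]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_bal, y_bal, validation_data=(X_val, y_val), epochs=3, batch_size=32)

# --- Imbalance-aware evaluation: per-class precision, recall and F1.
y_pred = model.predict(X_val).argmax(axis=1)
print(classification_report(y_val, y_pred, digits=4))
```

On this setup, the frozen-backbone pass trains only the small head (fast, even without a GPU), while the fine-tuning pass adapts the top backbone layers to the expression data; the per-class report from classification_report is what makes class-imbalance bias visible, which simple accuracy would hide.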