Transfer learning and fine-tuning for facial expression recognition with class balancing
| Authors: | , |
|---|---|
| Format: | Conference paper |
| Publication Date: | 2024 |
| Description: | Facial expression recognition benefits from deep learning models because of their ability to automatically extract features. However, these models face three important challenges: first, training tends to take longer than with traditional machine learning models; second, obtaining and labeling enough data samples can become a heavy burden due to the feature complexity usually involved in these problems; third, class imbalance is also common. In this paper, we address these challenges by applying transfer learning, oversampling, and fine-tuning to a facial expression recognition use case. Combining transfer learning with the use of a GPU allowed us to complete training in about one hour, and one of the models achieved 65.75% accuracy. To verify that the models are not biased, we report metrics that are informative for imbalanced data, such as precision, recall, F1 score, and loss. (An illustrative code sketch of this pipeline follows the record below.) |
| Institution: | Universidad de Costa Rica |
| Repository: | Kérwá |
| Language: | English |
| OAI Identifier: | oai:kerwa.ucr.ac.cr:10669/101862 |
| Online Access: | https://hdl.handle.net/10669/101862 https://doi.org/10.1109/CLEI64178.2024.10700478 |
| Keywords: | transfer learning, facial expression recognition, fine-tuning, oversampling |
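The abstract describes a pipeline of transfer learning on a pretrained backbone, oversampling to balance the expression classes, and a final fine-tuning pass. The sketch below is a minimal, hypothetical illustration of that general workflow in Keras; the MobileNetV2 backbone, the 7-class label set, the 96×96 RGB input size, and all hyperparameters are assumptions for the example and are not taken from the paper.

```python
# Illustrative sketch: transfer learning + random oversampling + fine-tuning
# for facial expression recognition. Model choice and hyperparameters are
# assumptions, not the paper's exact setup.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

NUM_CLASSES = 7          # e.g., seven basic expressions (assumption)
IMG_SHAPE = (96, 96, 3)  # input resolution is an assumption

def oversample(x, y, seed=0):
    """Naive random oversampling: repeat minority-class samples until every
    class matches the majority-class count."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    rng = np.random.default_rng(seed)
    idx = []
    for c in classes:
        c_idx = np.where(y == c)[0]
        extra = rng.choice(c_idx, size=target - len(c_idx), replace=True)
        idx.extend(c_idx)
        idx.extend(extra)
    idx = rng.permutation(idx)
    return x[idx], y[idx]

# Transfer learning: pretrained backbone with frozen weights.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SHAPE, include_top=False, weights="imagenet")
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train, y_train are assumed to be preprocessed face images and integer labels.
# x_bal, y_bal = oversample(x_train, y_train)
# model.fit(x_bal, y_bal, validation_split=0.1, epochs=10)

# Fine-tuning: unfreeze the top of the backbone and retrain at a lower learning rate.
base.trainable = True
for layer in base.layers[:-30]:   # keep most of the backbone frozen (assumption)
    layer.trainable = False
model.compile(optimizer=optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_bal, y_bal, validation_split=0.1, epochs=5)
```

After training, per-class precision, recall, and F1 score (e.g., via `sklearn.metrics.classification_report` on a held-out set) are the kind of imbalance-aware metrics the abstract refers to for checking that the model is not biased toward majority classes.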