Transfer learning and fine-tuning for facial expression recognition with class balancing

 

Bibliographic details
Authors: Ruzicka, Josef; Lara Petitdemange, Adrián
Format: conference paper
Publication date: 2024
Abstract: Facial expression recognition benefits from deep learning models because of their ability to automatically extract features. However, these models face three important challenges. First, training tends to take longer than with traditional machine learning models. Second, obtaining and labeling enough data samples can become a heavy burden due to the feature complexity usually involved in these problems. Third, class imbalance is also common. In this paper, we address these challenges by applying transfer learning, oversampling, and fine-tuning to a facial expression recognition use case. Combining transfer learning with a GPU allowed us to complete training for our models in about one hour, and we achieved a 65.75% accuracy with one of the models. We also report precision, recall, F1 score, and loss, metrics that are helpful for verifying that models trained on imbalanced data are not biased.
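The class-balancing step mentioned in the abstract can be illustrated with a minimal random-oversampling sketch, assuming NumPy arrays of samples and integer class labels; the function name and procedure below are illustrative, not the authors' exact implementation:

```python
import numpy as np


def random_oversample(X, y, seed=0):
    """Balance classes by duplicating randomly chosen samples of each
    minority class until every class matches the majority-class count.
    Illustrative sketch only, not the paper's exact procedure."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()  # every class is grown to this size
    X_parts, y_parts = [], []
    for c in classes:
        idx = np.flatnonzero(y == c)
        # draw extra indices with replacement to reach the target count
        extra = rng.choice(idx, size=target - idx.size, replace=True)
        keep = np.concatenate([idx, extra])
        X_parts.append(X[keep])
        y_parts.append(y[keep])
    return np.concatenate(X_parts), np.concatenate(y_parts)
```

Oversampling the training split this way (rather than undersampling) keeps all original samples, which matters when labeled facial-expression data is scarce, as the abstract notes.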
Institution: Universidad de Costa Rica
Repository: Kérwá
Language: English
OAI identifier: oai:kerwa.ucr.ac.cr:10669/101862
Online access: https://hdl.handle.net/10669/101862
DOI: https://doi.org/10.1109/CLEI64178.2024.10700478
Keywords: transfer learning
facial expression recognition
fine-tuning
oversampling