DeepEmoNet: Building Machine Learning Models for Automatic Emotion Recognition in Human Speeches

Tai Vu

Published: 2025/8/20

Abstract

Speech emotion recognition (SER) has been a challenging problem in spoken language processing research, because it is unclear how human emotions are connected to various components of sound such as pitch, loudness, and energy. This paper aims to tackle this problem using machine learning. In particular, we built several machine learning models using SVMs, LSTMs, and CNNs to classify emotions in human speech. In addition, by leveraging transfer learning and data augmentation, we efficiently trained our models to attain decent performance on a relatively small dataset. Our best model was a ResNet34 network, which achieved an accuracy of $66.7\%$ and an F1 score of $0.631$.
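To make the transfer-learning setup concrete, the sketch below shows one common way to fine-tune an ImageNet-pretrained ResNet34 on log-mel spectrograms of speech, as the abstract describes. This is an illustrative assumption, not the authors' code: the number of emotion classes, sample rate, spectrogram settings, and optimizer are hypothetical choices.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# fine-tune an ImageNet-pretrained ResNet34 on log-mel spectrograms
# for speech emotion classification.
import torch
import torch.nn as nn
import torchaudio
from torchvision.models import resnet34

NUM_EMOTIONS = 8  # assumed number of emotion classes

# Convert a waveform into a 3-channel log-mel spectrogram "image"
# so it can be fed to an ImageNet-pretrained CNN (transfer learning).
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=128)
to_db = torchaudio.transforms.AmplitudeToDB()

def waveform_to_image(waveform: torch.Tensor) -> torch.Tensor:
    spec = to_db(mel(waveform))   # shape: (1, n_mels, time)
    return spec.repeat(3, 1, 1)   # replicate to 3 channels for the CNN

# Load a pretrained ResNet34 and replace its classification head.
model = resnet34(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_EMOTIONS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of spectrogram images and labels."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Data augmentation (e.g., time/frequency masking of the spectrograms) would typically be applied inside the data pipeline before `train_step`; the exact augmentations used in the paper are not specified in this abstract.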
