A multimodal stress detection dataset with facial expressions and physiological signals

Majid Hosseini, Fahad Sohrab, Raju Gottumukkala, Ravi Teja Bhupatiraju, Satya Katragadda, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj

Published: 2022-08-29

Abstract

Affective computing has garnered the attention of researchers in recent years, as there is a growing need for AI systems to better understand and react to human emotions. However, analyzing human affective states, such as mood or stress, is quite complex. While various stress studies use facial expressions and wearables, most existing datasets rely on processing data from a single modality. This paper presents EmpathicSchool, a novel dataset that captures facial expressions and the associated physiological signals, such as heart rate, electrodermal activity, and skin temperature, under different stress levels. The data was collected from 30 participants across multiple sessions of about ninety minutes each, for a total of 40 hours. The dataset comprises seven signal types, spanning both computer-vision and physiological features that can be used to detect stress. In addition, various experiments were conducted to validate the signal quality.
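Since the abstract does not specify a loading API or file layout for EmpathicSchool, the following is a minimal sketch of how the physiological modalities it lists (heart rate, electrodermal activity, skin temperature) might be windowed into features for a stress classifier. The file name, column names, sampling rate, and window length are all assumptions for illustration, not part of the released dataset.

```python
# A minimal sketch of windowed feature extraction for stress detection.
# File layout, column names, and sampling rate are assumptions for
# illustration; they are not specified by the EmpathicSchool paper.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

WINDOW_S = 60        # window length in seconds (assumed)
FS = 4               # sampling rate in Hz (assumed)
SIGNALS = ["heart_rate", "eda", "skin_temp"]  # hypothetical column names


def window_features(df: pd.DataFrame) -> pd.DataFrame:
    """Slice a recording into fixed windows and compute simple statistics."""
    size = WINDOW_S * FS
    rows = []
    for start in range(0, len(df) - size + 1, size):
        win = df.iloc[start:start + size]
        feats = {}
        for sig in SIGNALS:
            feats[f"{sig}_mean"] = win[sig].mean()
            feats[f"{sig}_std"] = win[sig].std()
        # Majority stress label in the window (assumed column "stress_level").
        feats["label"] = win["stress_level"].mode().iloc[0]
        rows.append(feats)
    return pd.DataFrame(rows)


# Hypothetical per-participant CSV; replace with the actual dataset layout.
recording = pd.read_csv("participant_01.csv")
features = window_features(recording)

X = features.drop(columns="label")
y = features["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"window-level accuracy: {clf.score(X_test, y_test):.3f}")
```

A random split of windows is used here only to keep the sketch short; a real evaluation on a multi-participant dataset like this would typically hold out entire participants to avoid subject leakage.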
