Benchmarking Offline Reinforcement Learning for Emotion-Adaptive Social Robotics

Soon Jynn Chu, Raju Gottumukkala, Alan Barhorst

Published: 2025/9/21

Abstract

The ability of social robots to respond to human emotions is crucial for building trust and acceptance in human-robot collaborative environments. However, developing such capabilities through online reinforcement learning is often impractical due to the prohibitive cost of data collection and the risk of generating unsafe behaviors. In this paper, we study offline reinforcement learning, which learns from pre-collected data, as a practical and efficient alternative for enabling emotion-adaptive social robots. We present a system architecture that integrates multimodal sensing and recognition, decision-making, and adaptive responses. Using a limited dataset from a human-robot game-playing scenario, we establish a benchmark for comparing offline reinforcement learning algorithms without requiring an online environment. Our results show that BCQ and CQL are more robust to data sparsity, achieving higher state-action values than NFQ, DQN, and DDQN. Our findings provide empirical insight into the performance of offline reinforcement learning algorithms in data-constrained HRI. This work establishes a foundation for benchmarking offline RL in emotion-adaptive robotics and informs its future deployment in real-world HRI applications, such as conversational agents, educational partners, and personal assistants, which require reliable emotional responsiveness.
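
As a minimal sketch of how such a dataset-only benchmark can be run, the following uses the open-source d3rlpy library (v1.x API) with synthetic placeholder data; the library choice, data shapes, and hyperparameters are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from d3rlpy.dataset import MDPDataset
from d3rlpy.algos import NFQ, DQN, DoubleDQN, DiscreteBCQ, DiscreteCQL

# Synthetic stand-in for a pre-collected HRI dataset (hypothetical shapes):
# states = emotion-recognition features, actions = discrete robot responses.
rng = np.random.default_rng(0)
n_steps, state_dim, n_actions = 1000, 8, 4
observations = rng.standard_normal((n_steps, state_dim)).astype(np.float32)
actions = rng.integers(n_actions, size=n_steps)
rewards = rng.standard_normal(n_steps).astype(np.float32)
terminals = np.zeros(n_steps, dtype=np.float32)
terminals[99::100] = 1.0  # episode boundaries every 100 transitions

dataset = MDPDataset(observations, actions, rewards, terminals)

# Train every candidate on the same fixed dataset -- no online environment.
algos = {
    "NFQ": NFQ(),
    "DQN": DQN(),
    "DDQN": DoubleDQN(),
    "BCQ": DiscreteBCQ(),
    "CQL": DiscreteCQL(),
}
for name, algo in algos.items():
    algo.fit(dataset, n_epochs=1)
    # Compare learned Q-values at the greedy action over dataset states.
    greedy_actions = algo.predict(observations[:200])
    q_values = algo.predict_value(observations[:200], greedy_actions)
    print(f"{name}: mean state-action value = {q_values.mean():.3f}")
```

Because no simulator is available, the comparison here rests on learned state-action values over dataset states; BCQ and CQL additionally constrain the policy toward actions supported by the data, which is why they are expected to degrade more gracefully as the dataset becomes sparse.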