GestOS: Advanced Hand Gesture Interpretation via Large Language Models to Control Any Type of Robot

Artem Lykov, Oleg Kobzarev, Dzmitry Tsetserukou

Published: September 17, 2025

Abstract

We present GestOS, a gesture-based operating system for high-level control of heterogeneous robot teams. Unlike prior systems that map gestures to fixed commands or single-agent actions, GestOS interprets hand gestures semantically and dynamically distributes tasks across multiple robots based on their capabilities, current state, and supported instruction sets. The system combines lightweight visual perception with large language model (LLM) reasoning: hand poses are converted into structured textual descriptions, which the LLM uses to infer intent and generate robot-specific commands. A robot selection module ensures that each gesture-triggered task is matched to the most suitable agent in real time. This architecture enables context-aware, adaptive control without requiring explicit user specification of targets or commands. By advancing gesture interaction from recognition to intelligent orchestration, GestOS supports scalable, flexible, and user-friendly collaboration with robotic systems in dynamic environments.
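To make the described pipeline concrete, below is a minimal sketch of how the gesture-to-command flow might look: a hand pose is serialized into a structured textual description, paired with the fleet's capabilities and states in a prompt, and passed to an LLM that returns robot-specific commands. All names, data structures, and the stand-in LLM call are hypothetical illustrations, not the paper's actual interfaces.

```python
# Hypothetical sketch of a GestOS-style pipeline: hand pose -> structured
# text -> LLM reasoning -> robot-specific command. Names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Robot:
    name: str
    capabilities: list[str]     # e.g. ["navigate", "grasp"]
    state: str                  # e.g. "idle", "busy"
    instruction_set: list[str]  # commands this robot accepts

def describe_hand_pose(landmarks: list[tuple[float, float, float]]) -> str:
    """Convert raw hand landmarks into a structured textual description.

    A real system would encode finger flexion/extension and relative
    orientation; this placeholder only summarizes the input.
    """
    return f"hand pose with {len(landmarks)} landmarks; index finger extended"

def build_prompt(pose_text: str, robots: list[Robot]) -> str:
    """Pair the gesture description with the fleet's capabilities and
    states so the LLM can infer intent and pick a suitable agent."""
    fleet = "\n".join(
        f"- {r.name}: capabilities={r.capabilities}, state={r.state}, "
        f"commands={r.instruction_set}"
        for r in robots
    )
    return (
        "Interpret the operator's gesture and emit one command per task.\n"
        f"Gesture: {pose_text}\n"
        f"Available robots:\n{fleet}\n"
        "Answer as: <robot name>: <command>"
    )

def dispatch(pose_text: str, robots: list[Robot],
             llm: Callable[[str], str]) -> str:
    """Run one gesture-to-command cycle: description -> LLM -> command."""
    return llm(build_prompt(pose_text, robots))

if __name__ == "__main__":
    fleet = [
        Robot("quadruped-1", ["navigate"], "idle", ["go_to(x, y)"]),
        Robot("arm-1", ["grasp"], "idle", ["pick(object)", "place(x, y)"]),
    ]
    # Stand-in for a real LLM client; replace with an actual API call.
    fake_llm = lambda prompt: "quadruped-1: go_to(2.0, 1.5)"
    pose = describe_hand_pose([(0.1, 0.2, 0.0)] * 21)
    print(dispatch(pose, fleet, fake_llm))
```

The key design choice suggested by the abstract is that perception stays lightweight (pose-to-text conversion) while all intent inference and robot selection are delegated to the LLM, so new robots can be supported by extending the textual fleet description rather than retraining a gesture classifier.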
