XBOUND: Exploring Capability Boundaries of Device-Control Agents at the State Level
Shaoqing Zhang, Kehai Chen, Zhuosheng Zhang, Rumei Li, Rongxiang Weng, Yang Xiang, Min Zhang
Published: 2025/5/27
Abstract
Recent advances in vision-language models have increased interest in Device-Control Agents (DC agents) for managing graphical user interfaces (GUIs). As such agents grow more complex and are integrated into a wider range of applications, effective evaluation methods have become crucial. Current evaluation of DC agents focuses primarily on the instruction level: given the current state (e.g., a screenshot) and the past execution history, the agent must determine the action for a target instruction, which helps identify potential execution failures. However, in GUI environments a single state may contain multiple interactive widgets, each linked to a different instruction, so the same state admits diverse actions depending on the instruction target. Evaluating performance solely at the instruction level may therefore overlook the broader context of these interactions. To capture a more comprehensive view of agent performance, we propose XBOUND, a new evaluation method that measures the accuracy of instruction completion on a per-state basis. XBOUND provides a state-level evaluation framework for assessing agents' capabilities within environmental states. Our evaluation yields several key insights: UI-TARS stands out as the strongest 7B model, current agents display a bimodal performance pattern in instruction unification, and sub-7B models remain limited in state mastery. We further identify GPT-based planning as a critical bottleneck and show that grounding data mainly benefits action matching, while trajectory data is more effective for instruction unification.
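For intuition only, the sketch below illustrates what a per-state view of evaluation could look like, as opposed to scoring each instruction in isolation: all instructions that pass through the same GUI state are grouped together and scored jointly. The data layout (state, instruction, gold action), the predict callable standing in for the agent, and the exact-match scoring rule are assumptions made for this example; the concrete XBOUND procedure is defined in the paper itself and may differ.

# Illustrative sketch (not the paper's implementation): computing a
# per-state accuracy over all instructions linked to each GUI state.
from collections import defaultdict
from typing import Callable, Iterable, Tuple

Record = Tuple[str, str, str]  # (state_id, instruction, gold_action) -- assumed layout

def per_state_accuracy(records: Iterable[Record],
                       predict: Callable[[str, str], str]) -> dict:
    """For each GUI state, score every instruction that passes through it
    and return the fraction whose predicted action matches the gold action."""
    hits, totals = defaultdict(int), defaultdict(int)
    for state_id, instruction, gold_action in records:
        totals[state_id] += 1
        # Exact string match is a stand-in scoring rule, not the paper's metric.
        if predict(state_id, instruction) == gold_action:
            hits[state_id] += 1
    return {state_id: hits[state_id] / totals[state_id] for state_id in totals}

Aggregating over states rather than over instructions is what makes the evaluation state-level: a state where the agent handles only one of several linked instructions is exposed, whereas instruction-level averages can hide it.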