AutoEval: A Practical Framework for Autonomous Evaluation of Mobile Agents

Jiahui Sun, Zhichao Hua, Yubin Xia

Published: March 4, 2025

Abstract

Comprehensive evaluation of mobile agents can significantly advance their development and real-world applicability. However, existing benchmarks lack practicality and scalability because of the extensive manual effort required to define task reward signals and implement evaluation code. We propose AutoEval, an evaluation framework that tests mobile agents without any manual effort. Our approach introduces a UI state-change representation from which task reward signals can be generated automatically, and employs a Judge System for autonomous evaluation. Experiments show that AutoEval automatically generates reward signals that correlate highly with human-annotated signals, and achieves up to 94% accuracy in autonomous evaluation, comparable to human evaluation. Finally, we evaluate state-of-the-art mobile agents using our framework, providing insights into their performance and limitations.
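To make the core idea concrete, here is a minimal, hypothetical sketch of how a reward signal might be derived from UI state changes: the task specifies which UI attributes should end up with which values, and the reward is the fraction of those expected changes actually observed. All names (`ui_state_diff`, `reward`, the state dictionaries) are illustrative assumptions, not AutoEval's actual representation or API.

```python
# Hypothetical illustration of a UI-state-change-based reward signal.
# States are modeled as flat attribute dictionaries; AutoEval's real
# representation is richer than this sketch.

def ui_state_diff(before: dict, after: dict) -> dict:
    """Return attributes whose values changed, as {key: (old, new)}."""
    keys = set(before) | set(after)
    return {k: (before.get(k), after.get(k))
            for k in keys if before.get(k) != after.get(k)}

def reward(before: dict, after: dict, expected_changes: dict) -> float:
    """Fraction of expected attribute changes the agent achieved."""
    if not expected_changes:
        return 0.0
    diff = ui_state_diff(before, after)
    hits = sum(1 for k, v in expected_changes.items()
               if k in diff and diff[k][1] == v)
    return hits / len(expected_changes)

# Example task: "enable Wi-Fi and airplane mode".
before = {"wifi": "off", "airplane_mode": "off", "brightness": 50}
after = {"wifi": "on", "airplane_mode": "off", "brightness": 50}
print(reward(before, after, {"wifi": "on", "airplane_mode": "on"}))  # 0.5
```

In this sketch the agent flipped Wi-Fi but not airplane mode, so only half of the expected changes are satisfied; a Judge System could threshold or inspect such partial scores when deciding task success.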
