Large Language Models for Mobile GUI Text Input Generation: An Empirical Study
Chenhui Cui, Tao Li, Junjie Wang, Chunyang Chen, Dave Towey, Rubing Huang
Published: 2024/4/13
Abstract
Mobile applications have become an essential part of our daily lives, making quality assurance for them an important activity. Graphical User Interface (GUI) testing is a quality assurance method that has frequently been used for mobile apps. Many GUI pages contain text-input widgets, and valid text inputs are often required to move from one page to the next. Recently, Large Language Models (LLMs) have demonstrated excellent text-generation capabilities. To the best of our knowledge, there has not yet been any empirical study evaluating the effectiveness of different pre-trained LLMs at generating text inputs for mobile GUI testing. This paper reports on a large-scale empirical study that extensively investigates the effectiveness of eight state-of-the-art LLMs in Android text-input generation for UI pages. We collected 115 Android apps from Google Play and extracted contextual information from their UI pages to construct prompts for the LLMs. The experimental results show that some LLMs can generate more effective and higher-quality text inputs than others. We conducted an experiment to assess the bug-detection capabilities of LLMs by having them directly generate invalid text inputs. We also invited professional testers to manually evaluate, modify, and re-create the LLM-generated text inputs. Furthermore, we integrated the text-input generation process into DroidBot to augment its UI-exploration capabilities. Finally, we present several valuable insights regarding the application of LLMs to Android testing, particularly for the generation of text inputs. These insights will benefit the Android testing community.
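The abstract mentions extracting contextual information from UI pages to construct prompts for the LLMs. As a rough illustration of what such a step could look like (this is an assumption-laden sketch, not the paper's actual prompt template; all field names below are hypothetical), one might assemble the page's activity name, nearby widget labels, and the input field's hint text into a natural-language prompt:

```python
# Hypothetical sketch: building an LLM prompt for text-input generation
# from contextual information extracted from an Android UI page.
# The prompt wording and field names are illustrative assumptions,
# not the template used in the study.

def build_prompt(activity_name, nearby_labels, input_hint):
    """Assemble a natural-language prompt describing one text-input widget."""
    context = ", ".join(nearby_labels) if nearby_labels else "none"
    return (
        f"You are testing an Android app. The current page is '{activity_name}'. "
        f"Nearby widget labels: {context}. "
        f"The text field's hint is '{input_hint}'. "
        "Generate a valid text input for this field."
    )

prompt = build_prompt(
    activity_name="SignUpActivity",
    nearby_labels=["Email", "Password"],
    input_hint="Enter your email",
)
print(prompt)
```

The returned string would then be sent to an LLM, and its completion used as the text to type into the widget during GUI exploration.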