Our paper “GUISpector: An MLLM Agent Framework for Automated Verification of Natural Language Requirements in GUI Prototypes” has been accepted at ICSE 2026

The paper “GUISpector: An MLLM Agent Framework for Automated Verification of Natural Language Requirements in GUI Prototypes” by Kristian Kolthoff, Felix Kretzer, Simone Paolo Ponzetto, Alexander Maedche, and Christian Bartelt has been accepted at ICSE 2026, the 48th IEEE/ACM International Conference on Software Engineering (CORE Rank A*). ICSE is the premier international forum for software engineering research, bringing together researchers and practitioners from academia and industry to discuss fundamental advances in the design, development, analysis, and evolution of software systems. ICSE 2026 will take place in Rio de Janeiro, Brazil, from April 12–18, 2026.

Graphical user interfaces (GUIs) are foundational to interactive systems and play a pivotal role in early requirements elicitation through prototyping. Ensuring that GUI implementations fulfil natural language (NL) requirements is essential for robust software engineering, especially as LLM-driven programming agents become increasingly integrated into development workflows. Existing GUI testing approaches, whether traditional or LLM-driven, often fall short in handling the complexity of modern interfaces, and typically lack actionable feedback and effective integration with automated development agents.

In this paper, we introduce GUISpector, a novel framework that leverages a multimodal LLM (MLLM)-based agent for the automated verification of NL requirements in GUI prototypes. First, GUISpector adapts an MLLM agent to interpret and operationalize NL requirements, enabling it to autonomously plan and execute verification trajectories across GUI applications. Second, GUISpector systematically extracts detailed NL feedback from the agent’s verification process, providing developers with actionable insights that can be used to iteratively refine the GUI artifact or directly inform LLM-based code generation in a closed feedback loop. Third, we present an integrated tool that unifies these capabilities, offering practitioners an accessible interface for supervising verification runs, inspecting agent rationales, and managing the end-to-end requirements verification process. We evaluated GUISpector on a comprehensive set of 150 requirements based on 900 acceptance criteria annotations across diverse GUI applications, demonstrating effective detection of requirement satisfaction and violations and highlighting its potential for seamless integration of actionable feedback into automated LLM-driven development workflows. A video presentation showcasing GUISpector’s main capabilities is available at youtu.be/JByYF6BNQeE.
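The closed feedback loop described above (verify requirements against the GUI, extract NL feedback, feed it back into refinement) can be sketched in a few lines of plain Python. This is a minimal illustrative sketch, not GUISpector’s actual implementation: all names (`Verdict`, `verify`, `refine`, `verification_loop`) are hypothetical, and the MLLM agent and the LLM code generator are replaced by trivial stand-ins that check element presence.

```python
"""Hypothetical sketch of a verify-feedback-refine loop.

The real system uses an MLLM agent to execute verification
trajectories on a live GUI; here both agent and generator are
simple stubs so the control flow is visible end to end.
"""
from dataclasses import dataclass


@dataclass
class Verdict:
    requirement: str
    satisfied: bool
    feedback: str  # natural-language rationale, as the agent would emit


def _named_elements(requirement: str) -> set[str]:
    # Toy convention: uppercase tokens name required GUI elements.
    return {tok for tok in requirement.split() if tok.isupper()}


def verify(requirement: str, gui: set[str]) -> Verdict:
    """Stand-in for the MLLM verification agent: a requirement is
    'satisfied' when every element it names exists in the prototype."""
    missing = _named_elements(requirement) - gui
    if missing:
        return Verdict(requirement, False,
                       f"missing elements: {sorted(missing)}")
    return Verdict(requirement, True, "all referenced elements present")


def refine(gui: set[str], verdict: Verdict) -> set[str]:
    """Stand-in for the LLM code generator: act on the agent's
    feedback by adding the elements it reported as missing."""
    if verdict.satisfied:
        return gui
    return gui | (_named_elements(verdict.requirement) - gui)


def verification_loop(requirements: list[str], gui: set[str],
                      max_rounds: int = 3):
    """Iterate verification and refinement until all requirements
    pass or the round budget is exhausted."""
    verdicts: list[Verdict] = []
    for _ in range(max_rounds):
        verdicts = [verify(r, gui) for r in requirements]
        if all(v.satisfied for v in verdicts):
            break
        for v in verdicts:
            gui = refine(gui, v)
    return gui, verdicts
```

For example, a prototype containing only a `LOGIN_BUTTON` would fail the requirement “show LOGIN_BUTTON and SIGNUP_LINK” in the first round; the feedback names the missing `SIGNUP_LINK`, the refinement step adds it, and the second round passes.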

The full paper can be read at https://arxiv.org/pdf/2510.04791