The paper “Reliability of Large Language Models for Design Synthesis: An Empirical Study of Variance, Prompt Sensitivity, and Method Scaffolding” has been accepted at ICSA 2026
The paper “Reliability of Large Language Models for Design Synthesis: An Empirical Study of Variance, Prompt Sensitivity, and Method Scaffolding” by Rabia Iftikhar and Andreas Rausch has been accepted at ICSA 2026, the 23rd IEEE International Conference on Software Architecture.
The International Conference on Software Architecture (ICSA) is the premier venue for practitioners and researchers interested in software architecture, component-based software engineering, and quality aspects of software, and in how these relate to the design of software architectures.
ICSA has a strong tradition as a working conference (previously named the Working IEEE/IFIP Conference on Software Architecture, WICSA), where researchers meet practitioners, and where software architects can explain the problems they face in their day-to-day work and try to influence the future of the field.
ICSA 2026 is scheduled to be held at the Vrije Universiteit Amsterdam, Netherlands, from the 22nd to the 26th of June 2026. More about the history and past editions of ICSA can be found at https://icsa-conferences.org/series/history
Large Language Models (LLMs) are increasingly applied to automate software engineering tasks, including the generation of UML class diagrams from natural language descriptions. While prior work demonstrates that LLMs can produce syntactically valid diagrams, syntactic correctness alone does not guarantee meaningful design. This study investigates whether LLMs can move beyond diagram translation to perform design synthesis, and how reliably they maintain design-oriented reasoning under variation. We introduce a preference-based few-shot prompting approach that biases LLM outputs toward designs satisfying object-oriented principles and pattern-consistent structures. Two design-intent benchmarks, each with three domain-only, paraphrased prompts and 10 repeated runs, are used to evaluate three LLMs (ChatGPT 4o-mini, Claude 3.5 Sonnet, Gemini 2.5 Flash) across three modeling strategies: standard prompting, rule-injection prompting, and preference-based prompting, totaling 540 experiments (i.e., 2 × 3 × 10 × 3 × 3). Results indicate that while preference-based alignment improves adherence to design intent, it does not eliminate non-determinism, and model-level behavior strongly influences design reliability. These findings highlight that achieving dependable LLM-assisted software design requires not only effective prompting but also careful consideration of model behavior and robustness.
The full paper can be read at https://arxiv.org/abs/2604.00851