One paper accepted at XAI 2025

The CORE research group has one paper accepted at XAI 2025.

The paper “Which LIME should I trust? Concepts, Challenges, and Solutions” by Patrick Knab, Sascha Marton, Udo Schlegel, and Christian Bartelt has been accepted for presentation at XAI 2025, the 3rd World Conference on eXplainable Artificial Intelligence. Recognized as a top-tier conference in the field of explainability, XAI 2025 will take place in Istanbul, Turkey, from July 9 to July 11, 2025, gathering leading experts and practitioners dedicated to advancing AI transparency and accountability.

Abstract: As neural networks become dominant in essential systems, Explainable Artificial Intelligence (XAI) plays a crucial role in fostering trust and detecting potential misbehavior of opaque models. LIME (Local Interpretable Model-agnostic Explanations) is among the most prominent model-agnostic approaches, generating explanations by approximating the behavior of black-box models around specific instances. Despite its popularity, LIME faces challenges related to fidelity, stability, and applicability to domain-specific problems. Numerous adaptations and enhancements have been proposed to address these issues, but the growing number of improvements can be overwhelming, complicating efforts to navigate LIME-related research. To the best of our knowledge, this is the first survey to comprehensively explore LIME’s foundational concepts and known limitations. We categorize and compare its various improvements, offering a structured taxonomy based on intermediate steps and key issues. Our analysis provides a holistic overview of advancements in LIME, guiding future research and helping practitioners identify suitable approaches. Additionally, we provide a continuously updated interactive website (https://patrick-knab.github.io/which-lime-to-trust/), offering a concise and accessible overview of the survey.
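For readers new to the method, the abstract's core idea, approximating a black-box model with an interpretable one around a single instance, can be made concrete with a small sketch. The following is an illustrative, simplified local linear surrogate in the spirit of LIME, not the authors' method or the official lime package: the function name, the Gaussian perturbation scheme, and the kernel width are assumptions chosen for demonstration.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

def lime_sketch(predict_fn, x, n_samples=1000, kernel_width=0.75, seed=0):
    """Illustrative LIME-style local surrogate (hypothetical helper, not the official API).

    predict_fn maps an (n, d) array of inputs to (n,) scores, e.g. the
    predicted probability of one class from a black-box model.
    """
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance of interest in feature space.
    Z = x + rng.normal(size=(n_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed samples.
    y = predict_fn(Z)
    # 3. Weight each sample by its proximity to x (exponential kernel).
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable linear model that mimics the black box locally.
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
    # The surrogate's coefficients serve as local feature attributions.
    return surrogate.coef_

# Example: explain one prediction of a random-forest "black box" on Iris.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
attributions = lime_sketch(lambda Z: model.predict_proba(Z)[:, 0], X[0])
print(attributions)  # one weight per feature, valid only near X[0]
```

The fidelity and stability issues the survey catalogs arise precisely in intermediate steps like these: how samples are drawn, how proximity is weighted, and which surrogate model is fit all change the resulting explanation.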

The full paper is available at https://arxiv.org/abs/2503.24365v1.