Irrelevant Explanations: A Logical Formalization and a Case Study
EasyChair Preprint 13141
10 pages • Date: April 30, 2024

Abstract: Explaining the behavior of AI-based tools, whose results may be unexpected even to experts, has become a major request from society and a major concern of AI practitioners and theoreticians. In this position paper we raise two points: (1) irrelevance is more amenable to a logical formalization than relevance; (2) since effective explanations must take into account both the context and the receiver of the explanation (called the explainee), the same should hold for the definition of irrelevance. We propose a general, logical framework characterizing context-aware and receiver-aware irrelevance, and provide a case study on an existing Semantic Web-based tool that prunes irrelevant parts of an explanation.

Keyphrases: Semantic Web, XAI, logic