LLM for Explainable AI

EasyChair Preprint 15189 · 5 pages · Date: October 3, 2024

Abstract

Explainability for large language models (LLMs) is a critical area in natural language processing: it helps users understand model behavior and eases error analysis, especially as these models are deployed in a wide range of applications. The "black-box" nature of AI models raises transparency and ethics challenges, because we cannot see or understand how a model processes information to generate its output. Traditional methods, such as attention mechanisms, have enhanced explainability in AI models by improving model focus and accuracy, but at the cost of increased complexity; in particular, they rely on tools such as gradient-based methods (e.g., Grad-CAM), making them less accessible to non-expert users. We employ in-context learning and prompt refinement techniques, focusing on the pre-trained Transformer-based large language model BART. This approach simplifies model interaction by allowing users to guide the model through natural language prompts, reducing the need for technical expertise. We validate the method on the real-life StudentLife dataset, collected from 48 college students over 10 weeks. Our results suggest that LLMs can serve as a vehicle for XAI, making data mining accessible to everyone.

Keyphrases: Data Mining, Explainability, Explainable AI (XAI), In-Context Learning, Large Language Models (LLMs), Prompt Engineering, feature importance, language model, refined specific prompt
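The in-context learning and prompt-refinement approach described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the prompt template, feature names, and few-shot examples are hypothetical assumptions introduced for clarity.

```python
# Sketch of in-context prompt refinement for feature-importance
# explanation. All field names and examples below are hypothetical,
# not drawn from the StudentLife dataset itself.

def build_prompt(examples, query):
    """Assemble a refined prompt: a task instruction, a few
    demonstration (record, explanation) pairs, then the record
    the model should explain."""
    lines = ["Explain which features most influence the outcome."]
    for record, explanation in examples:
        lines.append(f"Data: {record}")
        lines.append(f"Explanation: {explanation}")
    # The query is appended in the same format so the model can
    # complete the final "Explanation:" slot in context.
    lines.append(f"Data: {query}")
    lines.append("Explanation:")
    return "\n".join(lines)

# Hypothetical demonstration pair and query.
examples = [
    ("sleep=4h, activity=low, stress=high",
     "Low sleep and high stress are the dominant features."),
]
prompt = build_prompt(examples, "sleep=8h, activity=high, stress=low")
print(prompt)
```

In the paper's setting, a prompt assembled this way would be passed to a pre-trained BART model (for example, via a text-to-text generation interface), which completes the final "Explanation:" slot in natural language; the sketch above shows only the prompt-construction step, since that is the part a non-expert user would interact with.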