US-20260127484-A1 - PERSONALIZED EXPLAINABILITY USING SHAP AND LLMS
Abstract
A system and method for generating personalized explanations for AI model predictions. Operations may involve using an AI model to make inferences, determining the importance of different factors used in those inferences, and creating customized explanation prompts. Explainable AI techniques may be used to generate these prompts, taking into account the specific features involved in the inference as well as relevant information about the intended audience. A natural language processing component may use these prompts to produce explanations tailored to the target audience, making the AI's decision-making process more understandable. Feedback from users about these explanations may be obtained and used to refine and improve the system's explanation generation process over time.
Inventors
- Natalie Bar Eliyahu
- Shon MENDELSON
- Hadas Baumer
- Lior TABORI
Assignees
- INTUIT INC.
Dates
- Publication Date: 2026-05-07
- Application Date: 2024-11-01
Claims (20)
- 1. A system for generating personalized explanations for artificial intelligence (AI) model predictions, comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the system to: perform an inference with an inference model; extract Shapley (SHAP) values of corresponding features used by the inference model for the inference; generate, by an explainable AI (XAI) module, a personalized explanation prompt based on the corresponding features and contextual information of a target audience; and produce, by a large language model (LLM), a tailored explanation of the inference to the target audience based on the generated prompt.
- 2. The system of claim 1, wherein the system is further configured to extract the SHAP values by: analyzing the inference model's inference; calculating the SHAP values for the corresponding features used in the inference model; ranking the corresponding features based on their absolute SHAP values; and selecting a predetermined number of top positive and negative SHAP values.
- 3. The system of claim 2, wherein the system is further configured to generate the personalized explanation prompt by: retrieving the selected SHAP values and the corresponding features; gathering additional user context; incorporating domain-specific information; and integrating the corresponding features, user context, and the domain-specific information into the prompt.
- 4. The system of claim 1, wherein the system is further configured to fine-tune the XAI by: analyzing the collected user feedback to identify low-performing explanations; gathering additional topic and audience data related to the low-performing explanations; and updating the XAI using the additional data to improve explanation generation.
- 5. The system of claim 4, wherein the system is further configured to analyze the collected user feedback by evaluating user interaction metrics with the tailored explanations.
- 6. The system of claim 1, wherein the tailored explanation comprises a natural language description of corresponding features and their impact on the inference model's inference.
- 7. The system of claim 1, wherein the system is further configured to adjust a level of detail in the tailored explanation based on a technical expertise level of the target audience.
- 8. The system of claim 1, wherein the system is further configured to generate multiple explanations with varying levels of complexity and present them to the user for selection.
- 9. The system of claim 1, wherein the system is further configured to collect user feedback on the tailored explanation and fine-tune the XAI based on the collected feedback.
- 10. The system of claim 1, wherein the system is further configured to use the feedback for fine-tuning by: tracking user engagement with different aspects of the explanations; identifying patterns in user preferences across different audience segments; and adjusting a prompt generation strategy of the XAI based on the identified patterns.
- 11. A method of generating personalized explanations for artificial intelligence (AI) model predictions, comprising: performing inference with an inference model; extracting Shapley (SHAP) values of corresponding features used by the inference model for the inference; generating, by an explainable AI (XAI) module, a personalized explanation prompt based on the corresponding features and contextual information of a target audience; and producing, by a large language model (LLM), a tailored explanation of the inference to the target audience based on the generated prompt.
- 12. The method of claim 11, wherein extracting the SHAP values comprises: analyzing the inference model's inference; calculating the SHAP values for the corresponding features used in the inference model; ranking the corresponding features based on their absolute SHAP values; and selecting a predetermined number of top positive and negative SHAP values.
- 13. The method of claim 12, wherein generating the personalized explanation prompt comprises: retrieving the selected SHAP values and the corresponding features; gathering additional user context; incorporating domain-specific information; and integrating the corresponding features, user context, and the domain-specific information into the prompt.
- 14. The method of claim 11, wherein fine-tuning the XAI comprises: analyzing the collected user feedback to identify low-performing explanations; gathering additional topic and audience data related to the low-performing explanations; and updating the XAI using the additional data to improve explanation generation.
- 15. The method of claim 14, wherein analyzing the collected user feedback comprises evaluating user interaction metrics with the tailored explanations.
- 16. The method of claim 11, wherein the tailored explanation comprises a natural language description of the corresponding features and their impact on the inference model's inference.
- 17. The method of claim 11, further comprising adjusting a level of detail in the tailored explanation based on a technical expertise level of the target audience.
- 18. The method of claim 11, further comprising generating multiple explanations with varying levels of complexity and presenting them to the user for selection.
- 19. The method of claim 11, further comprising collecting user feedback on the tailored explanation and fine-tuning the XAI based on the collected feedback.
- 20. The method of claim 11, wherein the feedback for fine-tuning comprises: tracking user engagement with different aspects of the explanations; identifying patterns in user preferences across different audience segments; and adjusting a prompt generation strategy of the XAI based on the identified patterns.
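The SHAP-value extraction recited in claims 2 and 12 (calculate, rank by absolute value, select a predetermined number of top positive and negative contributors) can be sketched as follows. The feature names and SHAP values below are hypothetical placeholders; in a real system they would come from an explainer (for example, the `shap` library) run against the inference model.

```python
def select_top_shap(shap_values: dict, k: int = 3):
    """Rank features by absolute SHAP value, then keep the top-k
    positive and top-k negative contributors to the inference."""
    # Rank all features by the magnitude of their contribution.
    ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
    # Split the ranked list into positive and negative contributors
    # and keep a predetermined number (k) of each.
    positives = [(f, v) for f, v in ranked if v > 0][:k]
    negatives = [(f, v) for f, v in ranked if v < 0][:k]
    return positives, negatives

# Hypothetical SHAP values for a single inference.
shap_values = {
    "income": 0.42, "debt_ratio": -0.31, "account_age": 0.12,
    "late_payments": -0.27, "region": 0.03,
}
pos, neg = select_top_shap(shap_values, k=2)
# pos → [("income", 0.42), ("account_age", 0.12)]
# neg → [("debt_ratio", -0.31), ("late_payments", -0.27)]
```

Ranking by absolute value before splitting ensures that the most influential features are kept regardless of direction, while the positive/negative split preserves which factors pushed the inference toward or away from the outcome.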
Description
BACKGROUND

Explainable Artificial Intelligence (XAI) has emerged as a field in the development and deployment of AI systems across various domains. As AI models become increasingly complex and are applied to high-stakes decision-making processes, there is a growing need for transparency and interpretability in their outputs. Current XAI approaches often focus on providing feature importance scores or model-agnostic explanations and attention mechanisms in neural networks. These methods aim to shed light on the inner workings of AI models and help users understand the factors influencing AI-driven decisions.

However, existing XAI techniques face several limitations that hinder their effectiveness in real-world applications. Many of these methods generate technical explanations that are not easily understood by non-expert users, creating a gap between the AI system and its intended audience. Additionally, current approaches often fail to account for the diverse backgrounds, roles, and needs of different stakeholders interacting with AI systems. This one-size-fits-all approach to explanations can lead to confusion, mistrust, and/or misinterpretation of AI outputs, which is undesirable. Furthermore, the lack of personalization in XAI methods may result in explanations that are either too simplistic or overly complex for specific users, reducing their practical value in decision-making processes, which is also undesirable.

SUMMARY

Embodiments disclosed herein solve the aforementioned technical problems and may provide other technical solutions as well. Contrary to conventional techniques, the disclosed solution includes a novel method and system for generating personalized explanations for AI model predictions. For example, the disclosed operations may involve using an AI model to make inferences, determining the importance of different factors used in those inferences, and creating customized explanation prompts. Explainable AI techniques may be used to generate these prompts, taking into account the specific features involved in the inferences as well as relevant information about the intended audience. A natural language processing component may use these prompts to produce explanations tailored to the target audience, making the AI's decision-making process more understandable.

An example embodiment includes a system for generating personalized explanations for AI model predictions, comprising a processor and a memory storing instructions that, when executed by the processor, cause the system to perform an inference with an inference model, extract Shapley (SHAP) values of corresponding features used by the inference model for the inference, generate, by an explainable AI (XAI), a personalized explanation prompt based on the corresponding features and contextual information of a target audience, and produce, by a large language model (LLM), a tailored explanation of the inference to the target audience based on the generated prompt. One or more embodiments may collect user feedback on the tailored explanation and fine-tune the XAI based on the collected feedback.

Another example embodiment includes a method of generating personalized explanations for AI model predictions, comprising performing an inference with an inference model, extracting Shapley (SHAP) values of corresponding features used by the inference model for the inference, generating, by an explainable AI (XAI), a personalized explanation prompt based on the corresponding features and contextual information of a target audience, and producing, by a large language model (LLM), a tailored explanation of the inference to the target audience based on the generated prompt. The method may also collect user feedback on the tailored explanation and fine-tune the XAI based on the collected feedback.
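The prompt-generation step described above (integrating selected features, user context, and domain-specific information into a prompt for the LLM) can be sketched as follows. The audience fields, domain note, and prompt wording are illustrative assumptions, not the claimed implementation; a deployed system would forward the resulting string to its LLM of choice.

```python
def build_explanation_prompt(features, audience, domain_note):
    """Assemble a personalized explanation prompt from selected SHAP
    features, audience context, and domain-specific information."""
    # Describe each feature's direction and magnitude of influence.
    lines = [
        f"- {name}: {'increased' if v > 0 else 'decreased'} the score by {abs(v):.2f}"
        for name, v in features
    ]
    # Tailor the instruction to the target audience's role and expertise.
    return (
        f"Explain the following model decision to a {audience['role']} "
        f"with {audience['expertise']} technical expertise.\n"
        f"Domain note: {domain_note}\n"
        "Most influential factors:\n" + "\n".join(lines)
    )

prompt = build_explanation_prompt(
    [("income", 0.42), ("debt_ratio", -0.31)],
    {"role": "loan applicant", "expertise": "low"},
    "Scores above 0.5 indicate likely approval.",
)
```

Keeping the audience context as structured fields (rather than free text) makes it straightforward to later adjust the prompt-generation strategy per audience segment, as the fine-tuning embodiments contemplate.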
BRIEF DESCRIPTION OF THE DRAWINGS

So that the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be made by reference to example embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only example embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may apply to other equally effective example embodiments.

FIG. 1 illustrates a personalized explainable AI system, according to aspects of the present disclosure.

FIG. 2 depicts a block diagram of a system for generating personalized explanations, in accordance with example embodiments.

FIG. 3 shows a flowchart of a method for generating personalized explanations for AI model predictions, according to an embodiment.

FIG. 4 illustrates a flowchart of a method for extracting SHAP values and identifying influential features, according to aspects of the present disclosure.

FIG. 5 depicts a flowchart of a method for generating a personalized explanation prompt, in accordance with example embodiments.

FIG. 6 s