Papers

Paper | AI Model | Task | Application Domain | Type of Users | Explanation Modality | XAI Techniques | Type of Study | Metrics
“Let me explain!”: exploring the potential of virtual agents in explainable AI interaction design Neural Network Speech Recognition Natural Language Processing Generic Audio, Text, Visual Features Importance, Other Controlled experiment Helpfulness, Satisfaction, Trust, Understandability, Visual appeal
“That's (not) the output I expected!” On the role of end user expectations in creating explanations of AI systems Probabilistic Text Classification Media and Communication Generic Text, Visual Counter-exemplars/factual, Features Importance Controlled experiment Satisfaction, Understandability, Usefulness
A Big Bang-Big Crunch Type-2 Fuzzy Logic System for Explainable Predictive Maintenance Fuzzy Estimation Other Domain-experts Text, Visual Decision Rules
A Clinical Decision Support System for Sleep Staging Tasks with Explanations from Artificial Intelligence: User-Centered Design and Evaluation Study Neural Network Multi-class Classification Health Domain-experts Visual Neurons Activation, Salient Mask Interview, User observation Helpfulness, Intuitiveness
A process framework for inducing and explaining Datalog theories White Box Image Classification Artificial Intelligence and Robotics Systems, Health Natural Language, Text, Visual Decision Rules
A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare Ensemble Binary classification Health Domain-experts Visual Shapley Values Focus Group
A User Interface for Explaining Machine Learning Model Explanations Neural Network Image Classification General AI experts Visual Salient Mask
A Visual Analytics Framework for Contrastive Network Analysis Neural Network Embedding Network AI experts Visual Counter-exemplars/factual, Features Importance Controlled experiment Learnability, Task performance, Usability, Workload
A Workflow for Visual Diagnostics of Binary Classifiers using Instance-Level Explanations Neural Network Binary classification General, Health AI experts, Domain-experts Visual Counter-exemplars/factual, Decision Rules, Features Importance
AI-Driven User Interface Design for Solving a Rubik’s Cube: A Scaffolding Design Perspective Unknown Multi-class Classification Media and Communication Domain-experts, Non-experts Natural Language, Text, Visual None Usability study Usability
AIMEE: An Exploratory Study of How Rules Support AI Developers to Explain and Edit Models Rule-Based Binary classification Artificial Intelligence and Robotics Systems AI experts Text, Visual Decision Rules Controlled experiment, Interview, User observation Ease of use, Satisfaction, Task performance, Usefulness, Workload
ALEEDSA: Augmented Reality for Interactive Machine Learning Regression Finance/Economics Domain-experts Visual Counter-exemplars/factual Interactive feedback session Usefulness
Algorithmic and HCI Aspects for Explaining Recommendations of Artistic Images Math, Neural Network Embedding, Recommendation Finance/Economics, Media and Communication Generic Text, Visual Exemplars, Features Importance Controlled experiment Recommendation Efficacy, Workload
AlphaDAPR: An AI-Based Explainable Expert Support System for Art Therapy Neural Network Image Classification, Object detection Artificial Intelligence and Robotics Systems, Health, Media and Communication Domain-experts Text, Visual Exemplars Interview, Usability study Ease of use, Explainability, Familiarity, Satisfaction, Trust, Understandability, Willing to reuse
An Empirical Study of Reward Explanations With Human-Robot Interaction Applications Artificial Intelligence and Robotics Systems Generic Text, Visual Exemplars, Features Importance Controlled experiment Perceived quality, Predictability, Workload
An explanation space to align user studies with the technical development of Explainable AI None None
An Explanation User Interface for a Knowledge Graph-Based XAI Approach to Process Analysis Unknown Recommendation Finance/Economics Domain-experts Text Exemplars, Features Importance Interview UX, Usability
Analyzing Description, User Understanding and Expectations of AI in Mobile Health Applications Health None
Assessing the communication gap between AI models and healthcare professionals: Explainability, utility and trust in AI-driven clinical decision-making Ensemble Regression Health Domain-experts Text, Visual Shapley Values Controlled experiment Ease of use, Satisfaction, Trust, Understandability
Building Trust in Interactive Machine Learning via User Contributed Interpretable Rules White Box Binary classification Media and Communication Generic Text, Visual Decision Rules Controlled experiment Ease of use, Perceived control, Perceived quality, Satisfaction, Trust, Understandability
C-XAI: A conceptual framework for designing XAI tools that support trust calibration
ChatrEx: Designing Explainable Chatbot Interfaces for Enhancing Usefulness, Transparency, and Trust Unknown Multi-class Classification Natural Language Processing Generic Visual Exemplars Controlled experiment, Usability study Explainability, Transparency, Trust, Usability, Usefulness
CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization Neural Network Image Classification Education, Health Non-experts Text, Visual None Interview, Survey, User observation Usability, Usefulness
Co-Design and Evaluation of an Intelligent Decision Support System for Stroke Rehabilitation Assessment Neural Network, White Box Binary classification Health Domain-experts Text, Video, Visual Features Importance Controlled experiment Trust, Usefulness, Willing to reuse, Workload
Co-Design of Human-Centered, Explainable AI for Clinical Decision Support Unknown Binary classification Health Domain-experts Text, Visual Decision Rules Survey Trust
ConceptExplainer: Interactive Explanation for Deep Neural Networks from a Concept Perspective Neural Network Image Classification General Non-experts Text, Visual Exemplars Controlled experiment, User observation Usability, Usefulness
Contextualization and Exploration of Local Feature Importance Explanations to Improve Understanding and Satisfaction of Non-Expert Users Math Regression Finance/Economics Non-experts Text, Visual Exemplars, Shapley Values Controlled experiment Satisfaction, Understandability
Contextualizing the Why: The Potential of Using Visual Map As a Novel XAI Method for Users with Low AI-literacy Unknown Binary classification None AI experts, Non-experts Visual Counter-exemplars/factual, Decision Tree, Shapley Values Controlled experiment Cognitive Load, Explainability, Satisfaction
ConvXAI : Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing Neural Network Recommendation Education Non-experts Natural Language, Text Counter-exemplars/factual, Exemplars, Salient Mask Controlled experiment Cognitive Load, Perceived control, Perceived efficiency, Understandability
ConvXAI: a System for Multimodal Interaction with Any Black-box Explainer Ensemble, Probabilistic, White Box Binary classification, Multi-class Classification, Regression General AI experts, Domain-experts, Non-experts Natural Language, Text, Visual Counter-exemplars/factual, Features Importance, Shapley Values Controlled experiment Explainability, Perceived quality, Usefulness
DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models Agnostic Agnostic General AI experts, Non-experts Visual Counter-exemplars/factual Interview Usability
DeepSeer: Interactive RNN Explanation and Debugging via State Abstraction Neural Network Text Classification Natural Language Processing Domain-experts Text, Visual Neurons Activation Controlled experiment Task performance, Workload
DeepVix: Explaining Long Short-Term Memory Network with High Dimensional Time Series Data Neural Network Agnostic General Visual Neurons Activation
Design and Evaluation of Trustworthy Knowledge Tracing Model for Intelligent Tutoring System Neural Network Estimation Education Domain-experts, Non-experts Text, Visual Shapley Values Controlled experiment, Interview Trust, Understandability
Designing an XAI interface for BCI experts: A contextual design for pragmatic explanation interface based on domain knowledge in a specific context Neural Network Image Classification Health Domain-experts Text, Visual Shapley Values Controlled experiment, Interview, User observation Explanation quality, Usability
Designing and Evaluating Explanations for a Predictive Health Dashboard: A User-Centred Case Study Unknown Recommendation Health Domain-experts Text, Visual Counter-exemplars/factual, Features Importance Focus Group Usability
Designing for Confidence: The Impact of Visualizing Artificial Intelligence Decisions Neural Network Image Classification General AI experts Visual Salient Mask Controlled experiment Confidence, Trust, Workload
Designing Theory-Driven User-Centric Explainable AI Ensemble Multi-class Classification Health Domain-experts Text, Visual Decision Rules, Sensitivity Analysis, Shapley Values Co-design
DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification Neural Network, Probabilistic Object detection Artificial Intelligence and Robotics Systems Domain-experts Visual User observation Helpfulness, Perceived quality
Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations White Box Regression Health Domain-experts, Non-experts Text, Visual Counter-exemplars/factual, Shapley Values Interview, Usability study Actionability, Trust, Understandability, Usefulness
DMT-EV: An Explainable Deep Network for Dimension Reduction Neural Network Dimensionality Reduction General Domain-experts Visual Salient Mask Interview, Usability study
Does This Explanation Help? Designing Local Model-Agnostic Explanation Representations and an Experimental Evaluation Using Eye-Tracking Technology Neural Network Estimation Finance/Economics AI experts, Non-experts Text, Visual Decision Rules, Features Importance, Shapley Values Co-design, Controlled experiment Forward-prediction score, Satisfaction, Trust, Understandability, Usefulness
Editable machine learning models? A rule-based framework for user studies of explainability White Box Multi-class Classification Artificial Intelligence and Robotics Systems Text Decision Rules
Effects of Explanations in AI-Assisted Decision Making: Principles and Comparisons None None
Effects of Interactivity and Presentation on Review-Based Explanations for Recommendations Neural Network Text Classification Natural Language Processing Non-experts Text, Visual Features Importance Controlled experiment Explanation quality, Persuasiveness, Transparency, Trust
Elements that Influence Transparency in Artificial Intelligent Systems - A Survey
ELVIRA: An explainable agent for value and utility-driven multiuser privacy Other Generic Counter-exemplars/factual Controlled experiment Recommendation Efficacy, Satisfaction, Value Adherence (measured with PVQ)
Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study Neural Network Image Classification General Generic Text, Visual Salient Mask Controlled experiment Forward-prediction score
Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling Neural Network Image Classification Health Visual Decision Rules
EXMOS: Explanatory Model Steering through Multifaceted Explanations and Data Configurations Ensemble Binary classification Health Domain-experts Visual Decision Rules, Features Importance, Shapley Values Controlled experiment, Interview Cognitive Load, Model effectiveness, Trust, Understandability
ExplAIn Yourself! Transparency for Positive UX in Autonomous Driving None Image Classification Mobility Non-experts Audio None Controlled experiment Perceived Safety, Perceived control, Technology acceptance, UX
Explainability via Interactivity? Supporting Nonexperts’ Sensemaking of Pre-Trained CNN by Interacting with Their Daily Surroundings Neural Network Image Classification Education Non-experts Visual Salient Mask Interview
Explainable AI for Non-Experts: Energy Tariff Forecasting Ensemble Regression Finance/Economics Non-experts Visual Counter-exemplars/factual, Shapley Values
Explainable AI-Based Interface System for Weather Forecasting Model Neural Network Multi-class Classification Agriculture and Nature Domain-experts Text, Visual Features Importance, Salient Mask Focus Group, Interview Trust, Understandability, Usefulness
Explainable artificial intelligence for breast cancer: A visual case-based reasoning approach White Box Multi-class Classification Health Domain-experts Visual Counter-exemplars/factual, Exemplars Usability study Confidence
Explainable Artificial Intelligence for Cytological Image Analysis Neural Network Multi-class Classification Health AI experts, Domain-experts Visual Features Importance Controlled experiment Bias Detection, Trust, Understandability
Explainable artificial intelligence for education and training
Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities None None
Explainable Visualization for Interactive Exploration of CNN on Wikipedia Vandal Detection Neural Network Binary classification Other Visual Other No study
ExplainExplore: Visual Exploration of Machine Learning Explanations Agnostic Multi-class Classification General AI experts Visual Features Importance Usability study
Explaining Artificial Intelligence with Tailored Interactive Visualisations Education, Health, Other None
Explaining Recommendations through Conversations: Dialog Model and the Effects of Interface Type and Degree of Interactivity Math, Neural Network Recommendation, Sentiment Analysis Natural Language Processing Generic Natural Language Other Wizard-of-Oz Confidence, Perceived effectiveness, Transparency
Explaining User Models with Different Levels of Detail for Transparent Recommendation: A User Study Math Recommendation Recommendation Generic Text, Visual None Controlled experiment Familiarity, Level of domain knowledge, Need for cognition, Perceived effectiveness, Perceived efficiency, Personal innovativeness, Persuasiveness, Satisfaction, Scrutability, Task performance, Technical expertise, Transparency, Trust, Trust propensity (of user)
Exploring the Impact of Explainability on Trust and Acceptance of Conversational Agents – A Wizard of Oz Study Neural Network, Rule-Based Recommendation Natural Language Processing Generic Natural Language Exemplars Wizard-of-Oz Ease of use, Technology acceptance, Trust, Understandability, Usefulness
Exploring the Usability and Trustworthiness of AI-Driven User Interfaces for Neurological Diagnosis Unknown Image Classification Health Domain-experts Visual Counter-exemplars/factual, Shapley Values Survey Perceived effectiveness, Satisfaction, Trust, Understandability
Finding AI’s Faults with AAR/AI: An Empirical Study Reinforcement Learning Regression Media and Communication Domain-experts Text, Visual Decision Tree Controlled experiment Task performance, Workload
From explainable to interactive AI: A literature review on current trends in human-AI interaction
From Philosophy to Interfaces: An Explanatory Method and a Tool Inspired by Achinstein’s Theory of Explanation Neural Network Binary classification Finance/Economics Generic Text Counter-exemplars/factual Controlled experiment Perceived effectiveness, Perceived efficiency, Satisfaction
Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models White Box Binary classification, Regression Finance/Economics, General, Health AI experts Text, Visual Counter-exemplars/factual, Exemplars, Features Importance Usability study Usability
Generating and Visualizing Trace Link Explanations Artificial Intelligence and Robotics Systems Domain-experts Text, Visual Exemplars Co-design, Controlled experiment Explanation helpfulness, Helpfulness
Giving DIAnA More TIME – Guidance for the Design of XAI-Based Medical Decision Support Systems None None
GUI Design Patterns for Improving the HCI in Explainable Artificial Intelligence None None
“Help Me Help the AI”: Understanding How Explainability Can Support Human-AI Interaction Unknown Image Classification Other AI experts, Domain-experts Visual None Interactive feedback session, Interview, Survey Needs for explainability (XAI Question Bank)
How do visual explanations foster end users' appropriate trust in machine learning? Other Black-Box Image Classification Other AI experts, Non-experts Visual Exemplars Controlled experiment
How the different explanation classes impact trust calibration: The case of clinical decision support systems None Binary classification Health Domain-experts Text, Visual Counter-exemplars/factual, Exemplars, Features Importance Controlled experiment, Interview Trust
How to explain AI systems to end users: a systematic literature review and research agenda None None
Human-XAI Interaction: A Review and Design Principles for Explanation User Interfaces None None
I Think I Get Your Point, AI! The Illusion of Explanatory Depth in Explainable AI Ensemble Regression Finance/Economics Non-experts Text, Visual Shapley Values User observation Perceived understanding
“If it is easy to understand, then it will have value”: Examining Perceptions of Explainable AI with Community Health Workers in Rural India Unknown Binary classification Health Domain-experts Visual Features Importance, Shapley Values Interview, User observation Perceived quality, Understandability
Impact of example-based XAI for neural networks on trust, understanding, and performance Neural Network Image Classification Health AI experts Visual Exemplars Controlled experiment Explainability, Fairness rating, Trust, Understandability
Initial results on personalizing explanations of AI hints in an ITS Unknown Constraint satisfaction problem Education Domain-experts Text, Visual None Controlled experiment Intrusiveness, Trust, Understandability, Usefulness
Interactive Explainable Case-Based Reasoning for Behavior Modelling in Videogames Other Behaviour Learning Media and Communication Domain-experts Visual Exemplars User observation Explanation helpfulness, Intuitiveness, Perceived effectiveness, Perceived understanding, Predictability, Technology acceptance
Interactive Explanation with Varying Level of Details in an Explainable Scientific Literature Recommender System Neural Network Recommendation Education Non-experts Text, Visual Other Interactive feedback session, Interview Cognitive Load, Explanation helpfulness, Perceived control, Perceived quality, Satisfaction, Transparency, Trust, Usability
Interfaces for Explanations in Human-AI Interaction: Proposing a Design Evaluation Approach Unknown Multi-class Classification Education Generic Text, Visual Features Importance Controlled experiment Advice adherence of users
Investigating the importance of first impressions and explainable AI with interactive video analysis Neural Network Image Classification Other Generic Text Neurons Activation Controlled experiment Confidence, Explanation helpfulness, Perceived effectiveness, Task performance
Investigating the Intelligibility of Plural Counterfactual Examples for Non-Expert Users: An Explanation User Interface Proposition and User Study Ensemble Binary classification Finance/Economics Non-experts Text, Visual Counter-exemplars/factual Controlled experiment Satisfaction, Understandability
Investigating the understandability of XAI methods for enhanced user experience: When Bayesian network users became detectives Probabilistic Estimation Other Non-experts Text, Visual None Survey Perceived effectiveness, Plausibility, Understandability
LIMEADE: From AI Explanations to Advice Taking Neural Network Text Classification Recommendation Domain-experts Text Features Importance User observation Intuitiveness, Perceived control, Perceived effectiveness, Perceived quality, Transparency, Trust
M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis Neural Network Multimodal Classification Natural Language Processing Domain-experts Visual Decision Tree, Features Importance, Shapley Values Interview Usability
Making SHAP Rap: Bridging Local and Global Insights Through Interaction and Narratives Ensemble Binary classification Finance/Economics Non-experts Text, Visual Shapley Values Usability study Explainability, Understandability
Meaningful Explanation Effect on User’s Trust in an AI Medical System: Designing Explanations for Non-Expert Users Unknown Estimation Health Domain-experts, Non-experts Text, Visual Salient Mask Controlled experiment, Interview Trust
Mental Models of Mere Mortals with Explanations of Reinforcement Learning Reinforcement Learning Multi-class Classification Media and Communication Domain-experts, Non-experts Visual Salient Mask Usability study Perceived understanding, Workload
On Selective, Mutable and Dialogic XAI: A Review of What Users Say about Different Types of Interactive Explanations None None
On the Importance of User Backgrounds and Impressions: Lessons Learned from Interactive AI Applications Neural Network Object detection Other Generic Visual Other Controlled experiment Frequency of usage, Helpfulness, Perceived effectiveness
Preference Elicitation in Interactive and User-centered Algorithmic Recourse: An Initial Exploration Neural Network Binary classification Finance/Economics Non-experts Text Counter-exemplars/factual Controlled experiment Usability
QuestionComb: A Gamification Approach for the Visual Explanation of Linguistic Phenomena through Interactive Labeling Rule-Based Text Classification Natural Language Processing Domain-experts Visual Decision Rules Usability study Perceived effectiveness, Transparency, Usability, Usefulness
Questioning the ability of feature-based explanations to empower non-experts in robo-advised financial decision-making Rule-Based Recommendation Finance/Economics Non-experts Text, Visual Features Importance Controlled experiment, Interview Cognitive Load, Engagement, Reliance on AI, Trust, Understandability
Questioning the AI: Informing Design Practices for Explainable AI User Experiences None None
RADAR-X: An Interactive Mixed Initiative Planning Interface Pairing Contrastive Explanations and Revised Plan Suggestions Other Domain-experts Natural Language, Text Counter-exemplars/factual
Rapid Assisted Visual Search: Supporting Digital Pathologists with Imperfect AI Neural Network Image Classification Health Domain-experts Text, Visual Other Interview Model effectiveness, Task performance
Reinforcement Learning over Sentiment-Augmented Knowledge Graphs towards Accurate and Explainable Recommendation Reinforcement Learning Recommendation Recommendation Non-experts Text Other Survey Informativeness (ResQue questionnaire), Persuasiveness, Transparency, Trust, Usability
Rethinking Human-AI Collaboration in Complex Medical Decision Making: A Case Study in Sepsis Diagnosis Neural Network Estimation Health Domain-experts Text, Visual Counter-exemplars/factual, Features Importance Interview, UI inspection Usability
RuleMatrix: Visualizing and Understanding Classifiers with Rules Neural Network Multi-class Classification General Non-experts Visual Decision Rules Usability study Usability
Sand-in-the-Loop: Investigating Embodied Co-Creation for Shared Understandings of Generative AI None Domain-experts Tangible Other Controlled experiment Learnability, Predictability, Satisfaction, Usability, Workload
Self-Explaining Abilities of an Intelligent Agent for Transparency in a Collaborative Driving Context Reinforcement Learning Estimation Artificial Intelligence and Robotics Systems, Media and Communication Domain-experts Text, Visual Decision Tree User observation Task performance, Workload
Sibyl: Understanding and Addressing the Usability Challenges of Machine Learning In High-Stakes Decision Making Unknown Regression Other Domain-experts Text, Visual Shapley Values Controlled experiment, Interview, User observation Confidence, Helpfulness, Trust
Slide to Explore 'What If': An Analysis of Explainable Interfaces
Supporting Exploratory Search with a Visual User-Driven Approach Math Recommendation Recommendation Non-experts Visual None Controlled experiment, User observation Task performance, Usability, Workload
Supporting High-Uncertainty Decisions through AI and Logic-Style Explanations Ensemble Binary classification Finance/Economics Generic Visual Counter-exemplars/factual, Exemplars, Other, Shapley Values Controlled experiment Perceived effectiveness, Reliance on AI, Task performance
Tangible Explainable AI-an Initial Conceptual Framework Health Tangible Counter-exemplars/factual, Decision Rules, Partial Dependency Plot, Shapley Values
The Flow of Trust: A Visualization Framework to Externalize, Explore, and Explain Trust in ML Applications None None
The grammar of interactive explanatory model analysis Ensemble Binary classification Artificial Intelligence and Robotics Systems, Health AI experts Visual Partial Dependency Plot, Shapley Values Survey Confidence, Explanation helpfulness, Perceived effectiveness
The Situation Awareness Framework for Explainable AI (SAFE-AI) and Human Factors Considerations for XAI Systems None None
The Use of Responsible Artificial Intelligence Techniques in the Context of Loan Approval Processes Ensemble, Probabilistic, White Box Binary classification Finance/Economics AI experts, Domain-experts Text, Visual Shapley Values Controlled experiment, Interview Explanation quality, Fairness rating, Perceived effectiveness, Reliance on AI, Trust, Usability
TimeTuner: Diagnosing Time Representations for Time-Series Forecasting with Counterfactual Explanations Neural Network Estimation General AI experts Visual Counter-exemplars/factual Interview, UI inspection Usability
Toward Involving End-Users in Interactive Human-in-the-Loop AI Fairness White Box Regression Finance/Economics Generic, Non-experts Text, Visual Exemplars Co-design, User observation Fairness rating, Workload
Towards Human-Centered Explainable AI: A Survey of User Studies for Model Explanations
Towards Sonification in Multimodal and User-Friendly Explainable Artificial Intelligence Agnostic General Audio Counter-exemplars/factual, Exemplars, Neurons Activation
TRIVEA: Transparent Ranking Interpretation using Visual Explanation of black-box Algorithmic rankers Other Ranking Education AI experts Visual Features Importance, Partial Dependency Plot Survey Usability
Unraveling ML Models of Emotion With NOVA: Multi-Level Explainable AI for Non-Experts Neural Network Image Classification Natural Language Processing Non-experts Visual Features Importance Controlled experiment Computer Self-Efficacy, Trust, Workload
User Characteristics in Explainable AI: The Rabbit Hole of Personalization? Neural Network Binary classification Media and Communication, Natural Language Processing Generic Text, Visual Features Importance Controlled experiment Perceived understanding, Personality, Trust, Understandability
User-Centered Evaluation of Explainable Artificial Intelligence (XAI): A Systematic Literature Review
VAC-CNN: A Visual Analytics System for Comparative Studies of Deep Convolutional Neural Networks Neural Network Image Classification Network AI experts Visual Salient Mask Controlled experiment Ease of use, Helpfulness, Task performance
VBridge: Connecting the Dots Between Features and Data to Explain Healthcare Models Ensemble, Probabilistic Multi-class Classification Health Domain-experts Visual Shapley Values Interview, User observation
Video-based AI Decision Support System for Lifting Risk Assessment White Box Image Classification Health Domain-experts, Non-experts Text, Visual Shapley Values Survey Confidence, Task performance
VIME: Visual Interactive Model Explorer for Identifying Capabilities and Limitations of Machine Learning Models for Sequential Decision-Making Probabilistic Binary classification Mobility AI experts Visual Other User observation Usability
VISHIEN-MAAT: Scrollytelling visualization design for explaining Siamese Neural Network concept to non-technical users Neural Network Estimation General Domain-experts Visual Other Focus Group Understandability
Visual Analytics for Exploring Air Quality Data in an AI-Enhanced IoT Environment Ensemble Regression Other Visual Shapley Values
Visual Analytics for Human-Centered Machine Learning None None
Visual Exploration of Machine Learning Model Behavior with Hierarchical Surrogate Rule Sets Rule-Based Agnostic General Domain-experts Visual None Usability study, User observation Task performance, Workload
Visual Interaction with Deep Learning Models through Collaborative Semantic Inference Neural Network Estimation Natural Language Processing Text, Visual Counter-exemplars/factual No study
Visual, Textual or Hybrid: The Effect of User Expertise on Different Explanations Agnostic Regression Other AI experts, Non-experts Text, Visual Counter-exemplars/factual, Features Importance, Partial Dependency Plot Controlled experiment Task performance, Understandability, Usability
VMS: Interactive Visualization to Support the Sensemaking and Selection of Predictive Models Ensemble, Neural Network, White Box Regression Health Non-experts Visual Features Importance, Shapley Values Controlled experiment, User observation Perceived effectiveness, Satisfaction, Understandability
What Are People Doing About XAI User Experience? A Survey on AI Explainability Research and Practice None None
What Is the Focus of XAI in UI Design? Prioritizing UI Design Principles for Enhancing XAI User Experience Unknown Multi-class Classification Health Non-experts Natural Language Features Importance Controlled experiment, Interview Perceived efficiency, Persuasiveness, Satisfaction, Trust, Understandability
What’s on Your Mind, NICO?: XHRI: A Framework for eXplainable Human-Robot Interaction Neural Network Image Classification Artificial Intelligence and Robotics Systems Natural Language, Non-verbal Explanation, Verbal Explanation, Visual None
Where Are We and Where Can We Go on the Road to Reliance-Aware Explainable User Interfaces?
Why? Why not? When? Visual Explanations of Agent Behaviour in Reinforcement Learning Neural Network, Reinforcement Learning Behaviour Learning Artificial Intelligence and Robotics Systems, Health Non-experts Text, Visual Features Importance Usability study Understandability
XAI for Learning: Narrowing down the Digital Divide between “new” and “old” Experts None None
XAIT: An Interactive Website for Explainable AI for Text Text Classification Natural Language Processing Interview
XDesign: Integrating Interface Design into Explainable AI Education Neural Network Image Classification Education AI experts Text, Visual Features Importance Usability study
XplainScreen: Unveiling the Black Box of Graph Neural Network Drug Screening Models with a Unified XAI Framework Neural Network Estimation Health Domain-experts Text, Visual Neurons Activation, Salient Mask Interactive feedback session Usefulness