Provide Contextual Information
- AI Model: Ensemble, Neural Network, None, Probabilistic, Unknown
- AI Task: Binary Classification, Embedding, Multi-class Classification
- Application Domain: Health, Network
- Type of Users: AI Experts, Domain Experts, Non-Experts
- Explanation Modality: Natural Language, Text, Visual
- XAI Model: Counter-exemplars/factual, Decision Rules, Exemplars, Feature Importance, Neuron Activation, Salient Mask, Sensitivity Analysis, Shapley Values
Related Papers
- Cheng, F., Liu, D., Du, F., Lin, Y., Zytek, A., Li, H., Qu, H., & Veeramachaneni, K. (2022). VBridge: Connecting the Dots Between Features and Data to Explain Healthcare Models. IEEE Transactions on Visualization and Computer Graphics, 28(1), 378–388. https://doi.org/10.1109/tvcg.2021.3114836
- Hwang, J., Lee, T., Lee, H., & Byun, S. (2022). A Clinical Decision Support System for Sleep Staging Tasks With Explanations From Artificial Intelligence: User-Centered Design and Evaluation Study. Journal of Medical Internet Research, 24(1), e28659. https://doi.org/10.2196/28659
- Bhattacharya, A., Stumpf, S., Gosak, L., Stiglic, G., & Verbert, K. (2024). EXMOS: Explanatory Model Steering through Multifaceted Explanations and Data Configurations. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24) (pp. 1–27). ACM. https://doi.org/10.1145/3613904.3642106
- Fujiwara, T., Zhao, J., Chen, F., & Ma, K.-L. (2020). A Visual Analytics Framework for Contrastive Network Analysis. In 2020 IEEE Conference on Visual Analytics Science and Technology (VAST) (pp. 48–59). IEEE. https://doi.org/10.1109/vast50239.2020.00010
- Lei, D., He, Y., & Zeng, J. (2024). What Is the Focus of XAI in UI Design? Prioritizing UI Design Principles for Enhancing XAI User Experience. In Lecture Notes in Computer Science (pp. 219–237). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-60606-9_13
- Naiseh, M., Al-Thani, D., Jiang, N., & Ali, R. (2023). How the different explanation classes impact trust calibration: The case of clinical decision support systems. International Journal of Human-Computer Studies, 169, 102941. https://doi.org/10.1016/j.ijhcs.2022.102941
- Wang, D., Yang, Q., Abdul, A., & Lim, B. Y. (2019). Designing Theory-Driven User-Centric Explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19) (pp. 1–15). ACM. https://doi.org/10.1145/3290605.3300831
Placing explanations alongside additional contextual information enhances their relevance and interpretability for expert users, e.g., physicians [10.1109/TVCG.2021.3114836, 10.2196/28659] and AI engineers [10.1145/3613904.3642106, 10.1109/VAST50239.2020.00010]. Moreover, the depth of information in explanations should adapt to the user's expertise, so that both detailed and summary views are available without overwhelming the user [10.1007/978-3-031-60606-9_13, 10.1016/j.ijhcs.2022.102941, 10.1145/3290605.3300831].
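The sketch below illustrates both points in a minimal form: a feature-importance explanation is bundled with contextual information (observed values and clinical reference ranges), and the rendered level of detail adapts to the user's expertise. It is a hypothetical illustration, not an API from any of the cited systems; all names (FeatureContext, render, the example features) are assumptions made for this sketch.

```python
from dataclasses import dataclass

@dataclass
class FeatureContext:
    """One model feature plus the context needed to interpret it."""
    name: str              # feature name, e.g. "heart rate"
    value: float           # the patient's observed value
    importance: float      # model-assigned importance, e.g. a Shapley value
    ref_range: tuple[float, float]  # clinical reference range (low, high)
    unit: str = ""

def render(features: list[FeatureContext], expertise: str) -> str:
    """Render an explanation whose level of detail depends on user expertise."""
    ranked = sorted(features, key=lambda f: abs(f.importance), reverse=True)
    if expertise == "non-expert":
        # Summary view only: name the single most influential feature.
        return f"The prediction is driven mainly by {ranked[0].name}."
    lines = []
    for f in ranked:
        low, high = f.ref_range
        # Contextual information: the observed value and its reference range.
        flag = "" if low <= f.value <= high else f" (outside {low}-{high} {f.unit})"
        line = f"{f.name}: {f.value} {f.unit}{flag}"
        if expertise == "ai-expert":
            # Full detail: also expose the raw importance score.
            line += f", importance={f.importance:+.2f}"
        lines.append(line)
    return "\n".join(lines)

if __name__ == "__main__":
    feats = [
        FeatureContext("heart rate", 112.0, +0.41, (60.0, 100.0), "bpm"),
        FeatureContext("creatinine", 1.1, -0.08, (0.6, 1.3), "mg/dL"),
    ]
    for role in ("non-expert", "domain-expert", "ai-expert"):
        print(f"--- {role} ---\n{render(feats, role)}\n")
```

In a deployed interface the summary and detailed views would typically be toggled by the user rather than selected via a string flag; the point of the sketch is the separation between the explanation itself (importance scores) and the contextual metadata (values, units, reference ranges) that makes it interpretable.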