Let the user ask questions to the system
- AI Model
  - Neural Network, None, Probabilistic, Reinforcement Learning, White Box
- AI Task
  - Behaviour Learning, Binary classification, Regression, Text Classification
- Application Domain
  - Artificial Intelligence and Robotics Systems, Finance/Economics, General, Health, Media and Communication
- Type of Users
  - AI experts, Domain-experts, Generic, Non-experts
- Explanation Modality
  - Text, Visual
- XAI Model
  - Counter-exemplars/factual, Exemplars, Features Importance
Related Papers
- Riveiro, M., & Thill, S. (2021). “That’s (not) the output I expected!” On the role of end user expectations in creating explanations of AI systems. Artificial Intelligence, 298, 103507. https://doi.org/10.1016/j.artint.2021.103507
- Hohman, F., Head, A., Caruana, R., DeLine, R., & Drucker, S. M. (2019). Gamut: A design probe to understand how data scientists understand machine learning models. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–13). ACM. https://doi.org/10.1145/3290605.3300809
- Naiseh, M., Al-Thani, D., Jiang, N., & Ali, R. (2023). How the different explanation classes impact trust calibration: The case of clinical decision support systems. International Journal of Human-Computer Studies, 169, 102941. https://doi.org/10.1016/j.ijhcs.2022.102941
- Mishra, A., Soni, U., Huang, J., & Bryan, C. (2022). Why? Why not? When? Visual Explanations of Agent Behaviour in Reinforcement Learning. In 2022 IEEE 15th Pacific Visualization Symposium (PacificVis) (pp. 111–120). IEEE. https://doi.org/10.1109/pacificvis53943.2022.00020
The system should let the user converse with it and ask questions, much as one would with chatbots and current personal assistant devices such as Siri or Alexa [10.1016/j.artint.2021.103507, 10.1109/PacificVis53943.2022.00020].
This can also take the form of what-if questions that show the user counterfactual examples [10.1145/3290605.3300809]. These let the user observe the effect that modified features have on the model’s prediction. Examples: “Given a house and its predicted price of $250,000, how would the price change if it had an extra bedroom?” or “Given a house and its predicted price of $250,000, what would I have to change to increase its predicted price to $300,000?”
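A minimal sketch of how such a what-if query could be answered is to re-query the model with a single feature changed and report both predictions. The toy housing data, feature set, and `what_if` helper below are illustrative assumptions, not an implementation from the cited papers.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy training data: [square_feet, bedrooms] -> sale price (illustrative only)
X_train = np.array([[1200, 2], [1500, 3], [1800, 3], [2100, 4]])
y_train = np.array([210_000, 250_000, 280_000, 330_000])

model = LinearRegression().fit(X_train, y_train)

def what_if(instance, feature_index, new_value):
    """Return the original and perturbed predictions for one instance."""
    original = model.predict([instance])[0]
    modified = list(instance)
    modified[feature_index] = new_value  # apply the user's hypothetical change
    perturbed = model.predict([modified])[0]
    return original, perturbed

# "How would the price change if the house had an extra bedroom?"
base, with_extra_bedroom = what_if([1500, 3], feature_index=1, new_value=4)
print(f"Predicted price: ${base:,.0f} -> ${with_extra_bedroom:,.0f}")
```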
Follow-up questions can arise after reading explanations produced by the AI; this need is referred to as multi-step explainability [10.1016/j.ijhcs.2022.102941]. Obtaining answers to follow-up questions can support the user in validating the AI’s recommendation. Therefore, the system should allow further inquiries from the user after presenting the main explanation.
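One possible way to support such follow-up questions is a simple dialogue loop that routes each question to a dedicated explanation routine after the main explanation has been shown. The sketch below is purely illustrative: the question keywords, handler functions, and canned answers are assumptions, not taken from the cited work.

```python
# Hypothetical multi-step explainability loop: a main explanation first,
# then repeated follow-up questions routed to explanation handlers.

def main_explanation(prediction):
    return f"The model predicts ${prediction:,.0f}, mainly driven by square footage."

def explain_feature_importance():
    return "Square footage contributes most of the prediction; bedrooms contribute less."

def explain_counterfactual():
    return "Adding one bedroom would raise the predicted price by roughly $20,000."

# Map simple question keywords to explanation types (illustrative only).
FOLLOW_UPS = {
    "why": explain_feature_importance,
    "what if": explain_counterfactual,
}

def dialogue(prediction):
    print(main_explanation(prediction))
    while True:
        question = input("Follow-up question (or 'done'): ").strip().lower()
        if question == "done":
            break
        # Route the follow-up to the first matching explanation type.
        handler = next((fn for key, fn in FOLLOW_UPS.items() if key in question), None)
        print(handler() if handler else "I can only answer 'why' and 'what if' questions.")

# Example usage (uncomment to run interactively):
# dialogue(250_000)
```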