Learn from user feedback
- AI Model
- Neural Network, Probabilistic, Unknown, White Box
- AI Task
- Multi-class Classification, Regression, Text Classification
- Application Domain
- Finance/Economics, Media and Communication, Recommendation
- Type of Users
- Domain-experts, Generic, Non-experts
- Explanation Modality
- Natural Language, Text, Visual
- XAI Model
- Counter-exemplars/factual, Exemplars, Features Importance, None
Related Papers
- Riveiro, M., & Thill, S. (2021). “That’s (not) the output I expected!” On the role of end user expectations in creating explanations of AI systems. Artificial Intelligence, 298, 103507. https://doi.org/10.1016/j.artint.2021.103507
- Wu, D., Tang, H., Bradley, C., Capps, B., Singh, P., Wyandt, K., Wong, K., Irvin, M., Agostinelli, F., & Srivastava, B. (2022). AI-Driven User Interface Design for Solving a Rubik’s Cube: A Scaffolding Design Perspective. In Lecture Notes in Computer Science (pp. 490–498). Springer International Publishing. https://doi.org/10.1007/978-3-031-17615-9_34
- Kim, S. S. Y., Watkins, E. A., Russakovsky, O., Fong, R., & Monroy-Hernández, A. (2023). “Help Me Help the AI”: Understanding How Explainability Can Support Human-AI Interaction. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23) (pp. 1–17). ACM. https://doi.org/10.1145/3544548.3581001
- Lee, B. C. G., Downey, D., Lo, K., & Weld, D. S. (2023). LIMEADE: From AI Explanations to Advice Taking. ACM Transactions on Interactive Intelligent Systems, 13(4), 1–29. https://doi.org/10.1145/3589345
- Nakao, Y., Stumpf, S., Ahmed, S., Naseer, A., & Strappelli, L. (2022). Toward Involving End-users in Interactive Human-in-the-loop AI Fairness. ACM Transactions on Interactive Intelligent Systems, 12(3), 1–30. https://doi.org/10.1145/3514258
The system should learn from the information that users provide to it [10.1016/j.artint.2021.103507].
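As an illustration of this guideline, the sketch below folds a single user correction back into a text classifier via online updates. This is only a minimal sketch, assuming a scikit-learn pipeline; the label set and function name are hypothetical and not taken from Riveiro & Thill.

```python
# Minimal sketch: incrementally updating a model with user-supplied labels.
# The feedback hook and the label set below are illustrative assumptions.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)  # stateless, no fit needed
model = SGDClassifier(loss="log_loss")
CLASSES = ["approve", "reject"]  # hypothetical label set

def incorporate_feedback(text: str, corrected_label: str) -> None:
    """Fold a single user correction back into the model online."""
    X = vectorizer.transform([text])
    model.partial_fit(X, [corrected_label], classes=CLASSES)
```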
User feedback design plays a key role in retaining users during the problem-solving process. Conversational user interfaces need to integrate more advanced human-AI interaction, natural language processing, machine learning, and pedagogical design strategies to satisfy users’ needs [10.1007/978-3-031-17615-9_34].
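The fragment below sketches one way a conversational interface can solicit and acknowledge corrections turn by turn. The prompts and the stub classifier are invented for illustration; Wu et al. do not prescribe this loop.

```python
# A minimal conversational feedback loop; the prompts and the stub
# classifier are hypothetical, not taken from the paper.
def classify(text: str) -> str:
    """Stub standing in for a real NLP model."""
    return "solved" if "solved" in text.lower() else "in-progress"

def conversational_loop() -> None:
    while True:
        query = input("You: ")
        if query.lower() in {"quit", "exit"}:
            return
        label = classify(query)
        print(f"AI: I read that as '{label}'.")
        if input("Did I get that right? (y/n): ").strip().lower() == "n":
            corrected = input("What should it have been? ")
            # A real system would route this correction into retraining.
            print(f"AI: Noted, I'll treat similar messages as '{corrected}'.")
```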
Explanations can be used for calibrating trust, improving the user’s task skills, collaborating more effectively with AI, and giving constructive feedback to developers [10.1145/3544548.3581001].
The ability to provide high-level feedback can significantly improve the user’s sense of trust and control, as well as the system’s transparency [10.1145/3589345].
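A minimal sketch of such high-level feedback, loosely in the spirit of LIMEADE’s advice-taking setting: the user endorses or demotes a named feature, and the system nudges the corresponding linear-model weight. The vocabulary, step size, and update rule are illustrative assumptions, not the paper’s actual method.

```python
# Hedged sketch of taking high-level feature advice; not LIMEADE's
# actual update rule. Features and step size are hypothetical.
import numpy as np

VOCAB = {"refund": 0, "delay": 1, "price": 2}  # hypothetical features
weights = np.zeros(len(VOCAB))                 # linear model coefficients

def take_advice(feature: str, direction: int, step: float = 0.5) -> None:
    """direction=+1 boosts a user-endorsed feature, -1 demotes one."""
    weights[VOCAB[feature]] += direction * step

take_advice("refund", +1)  # user: "refund mentions should matter more"
take_advice("price", -1)   # user: "rely less on price"
```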
The feedback to the system should be [10.1145/3514258]:
1) Actionable
2) Reversible
3) Honored
4) Shown through incremental changes
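One way to read these four properties in code: the hypothetical controller below applies each adjustment immediately (actionable, honored), keeps a log so any adjustment can be undone (reversible), and reports exactly how much a weight moved (shown through incremental changes). This is a sketch of the design principle, not an implementation from Nakao et al.

```python
# Illustrative mapping of the four feedback properties onto a tiny
# controller; the class and method names are hypothetical.
import numpy as np

class FeedbackController:
    def __init__(self, weights: np.ndarray):
        self.weights = weights
        self._log: list[tuple[int, float]] = []

    def apply(self, idx: int, delta: float) -> float:
        """Actionable + honored: the adjustment takes effect immediately."""
        before = self.weights[idx]
        self.weights[idx] += delta
        self._log.append((idx, delta))
        # Shown through incremental changes: report exactly what moved.
        return self.weights[idx] - before

    def revert(self) -> None:
        """Reversible: undo the most recent adjustment."""
        idx, delta = self._log.pop()
        self.weights[idx] -= delta
```

Returning the delta from `apply` gives the interface something concrete to surface after each piece of feedback, so users can see the incremental effect of what they just did.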