A Human–AI Interaction Model: Leveraging Human-Centred Design for Symbiotic AI

Abstract

Artificial Intelligence (AI) systems are increasingly embedded in our everyday lives, ranging from consumer-facing applications such as chatbots and recommendation engines to high-risk domains like healthcare and autonomous driving. While these systems promise improved efficiency, creativity, and decision-making for their human users, they also introduce significant challenges related to usability, transparency, trust, and alignment with human values. This dissertation addresses these challenges by advancing the field of Human-Centred Artificial Intelligence (HCAI) and its specialization, Symbiotic Artificial Intelligence (SAI), which envisions AI systems that augment rather than replace human capabilities. The main contribution of this thesis is a negotiation-based model of Human–AI Interaction, grounded in Human-Computer Interaction (HCI), Artificial Intelligence, Software Engineering, and Ethics. This model reconceptualizes interaction as a dynamic and adaptive process, enabling users to retain control, share decision-making authority, and iteratively influence AI behavior. The model is validated through empirical studies in multiple domains, spanning different levels of risk and technological paradigms, including both traditional and Generative Artificial Intelligence (GenAI). These studies show that the effectiveness of AI systems depends not only on algorithmic performance but also on their ability to accommodate user goals, workflows, and mental models.
Additional key contributions include: (i) a precise definition of HCAI and SAI, clarifying their theoretical foundations and resolving terminological inconsistencies; (ii) an empirical demonstration of the necessity of Human-Centred Design (HCD) when developing AI systems that support their users; (iii) a structured set of case studies validating the Human–AI Interaction Model, spanning low-, medium-, and high-risk applications; and (iv) the derivation of actionable guidelines for the design, evaluation, and development of trustworthy, explainable, and user-aligned AI systems. Examples of such guidelines include designing explanation-driven interventions, strategies for eXplanation User Interfaces (XUIs), and techniques for aligning AI behavior with users' mental models. The findings reported in this thesis suggest that neither automation nor augmentation should be preferred by default; instead, design decisions must consider task characteristics, user goals, and contextual risk. Overall, the thesis provides a comprehensive and validated framework for designing AI systems that are not only technically powerful but also trustworthy, transparent, and genuinely supportive of human autonomy and decision-making.
