Toward a Metric for Evaluating Trust in Human-Centered Artificial Intelligence
Andrea Esposito, Giuseppe Desolda, Rosa Lanzilotti

Abstract
Trust is a critical human factor in the adoption and effectiveness of artificial intelligence (AI) systems, especially in high-risk domains such as medicine and cybersecurity. This paper introduces a novel probabilistic metric to objectively quantify trust in AI systems, using human users' degree of reliance on the AI as a proxy. The proposed metric leverages changes in human decision-making influenced by AI decisions and can be computed using a flexible, data-driven approach. To validate this metric, we present user study protocols applicable to diverse domains, with examples in medicine (Alzheimer's disease detection) and cybersecurity (phishing detection). These protocols employ a frequentist definition of probability to estimate and compute trust, enabling comparison with established trust questionnaires to assess correlation. This work aims to contribute a standardized, interpretable, and adaptable method for evaluating trust in human-centered AI systems.
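To make the frequentist idea concrete, the sketch below estimates reliance as the observed fraction of trials in which a user, after seeing an AI decision that conflicted with their own initial decision, switched to match it. This is an illustrative simplification under assumed binary decisions; the function name and formula are hypothetical and are not the paper's actual metric.

```python
def estimate_reliance(initial, ai, final):
    """Frequentist estimate of reliance on AI: among trials where the AI
    disagreed with the user's initial decision, the fraction in which the
    user's final decision switched to match the AI.

    NOTE: illustrative sketch only; not the metric defined in the paper.
    """
    switches = 0    # user changed their decision to agree with the AI
    conflicts = 0   # trials where the AI disagreed with the initial decision
    for h0, a, h1 in zip(initial, ai, final):
        if h0 != a:              # AI output conflicts with initial decision
            conflicts += 1
            if h1 == a:          # final decision follows the AI
                switches += 1
    # Relative frequency of deferring to the AI (undefined with no conflicts)
    return switches / conflicts if conflicts else float("nan")
```

For example, with initial decisions `[0, 0, 1, 1]`, AI decisions `[1, 0, 1, 0]`, and final decisions `[1, 0, 1, 1]`, the AI conflicts with the user twice and the user defers once, giving an estimated reliance of 0.5. Such an estimate could then be correlated with questionnaire-based trust scores, as the validation protocols propose.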