TY - GEN
T1 - Measuring Perceived Trust in XAI-Assisted Decision-Making by Eliciting a Mental Model
AU - Abbaspour Onari, Mohsen
AU - Grau, Isel
AU - Nobile, Marco S.
AU - Zhang, Yingqian
PY - 2023/8
Y1 - 2023/8
N2 - This empirical study proposes a novel methodology to measure users’ perceived trust in an Explainable Artificial Intelligence (XAI) model. To do so, users’ mental models are elicited using Fuzzy Cognitive Maps (FCMs). First, we exploit an interpretable Machine Learning (ML) model to classify suspected COVID-19 patients as positive or negative cases. Then, Medical Experts (MEs) carry out a diagnostic decision-making task based on their own knowledge and on the predictions and interpretations provided by the XAI model. To evaluate the impact of the interpretations on perceived trust, the MEs rate explanation satisfaction attributes through a survey. These attributes are then used as the concepts of an FCM to determine their influences on each other and, ultimately, on perceived trust. Moreover, to account for the MEs’ mental subjectivity, fuzzy linguistic variables are used to determine the strength of these influences. Once the FCM reaches its steady state, a quantified value is obtained that measures the perceived trust of each ME. The results show that these quantified values can determine whether the MEs trust or distrust the XAI model. We analyze this behavior by comparing the quantified values with the MEs’ performance in completing the diagnostic tasks.
UR - https://sites.google.com/view/xai2023/home
M3 - Conference contribution
SP - 1
EP - 9
BT - Proceedings of the IJCAI 2023 Workshop on Explainable Artificial Intelligence
PB - International Joint Conferences on Artificial Intelligence (IJCAI)
T2 - 32nd International Joint Conference on Artificial Intelligence, IJCAI 2023
Y2 - 19 August 2023 through 25 August 2023
ER -