Will algorithms blind people? The effect of explainable AI and decision-makers' experience on AI-supported decision-making in government

Janssen, M., Hartog, M., Ricardo, M., Ding, A. and Kuk, G. ORCID: 0000-0002-1288-3635, 2020. Will algorithms blind people? The effect of explainable AI and decision-makers' experience on AI-supported decision-making in government. Social Science Computer Review. ISSN 0894-4393 (Forthcoming)

File: 1385111_Kuk.pdf (Accepted version, 370kB)
Restricted to Repository staff only

Abstract

Computational artificial intelligence (AI) algorithms are increasingly used to support decision-making by governments. Yet algorithms often remain opaque to decision-makers and devoid of clear explanations for the decisions made. In this study, we used an experimental approach to compare decision-making in three situations: humans making decisions (1) without any algorithmic support, (2) supported by business rules (BR), and (3) supported by machine learning (ML). Participants were asked to make correct decisions in various scenarios, whilst the BR and ML algorithms could provide either correct or incorrect suggestions to the decision-maker. This enabled us to evaluate whether participants were able to understand the limitations of BR and ML. The experiment shows that algorithms help decision-makers make more correct decisions. The findings suggest that explainable AI, combined with experience, helps decision-makers detect incorrect suggestions made by algorithms. However, even experienced persons were unable to identify all mistakes. Ensuring that decisions can be understood and traced back is not sufficient to avoid incorrect decisions. The findings imply that algorithms should be adopted with care, and that selecting appropriate algorithms for supporting decisions and training decision-makers are key factors in increasing accountability and transparency.

Item Type: Journal article
Publication Title: Social Science Computer Review
Creators: Janssen, M., Hartog, M., Ricardo, M., Ding, A. and Kuk, G.
Publisher: SAGE Publications
Date: 7 October 2020
ISSN: 0894-4393
Identifiers: 1385111 (Other)
Divisions: Schools > Nottingham Business School
Record created by: Linda Sullivan
Date Added: 05 Nov 2020 10:50
Last Modified: 05 Nov 2020 10:50
URI: http://irep.ntu.ac.uk/id/eprint/41512
