Abstract: In recent years, the use of Artificial Intelligence (AI) agents to augment and enhance the operational decision making of human agents has increased. This has delivered real benefits, including improved service quality, more personalised services, reduced processing times, and more efficient allocation of resources. However, it has also raised issues with real-world ethical implications, such as recommending different credit outcomes for individuals who have identical financial profiles but different characteristics (e.g., gender, race). The popular press has highlighted several high-profile cases of algorithmic discrimination, and the issue has gained traction. While both ethical decision making and Explainable AI (XAI) have been extensively researched, we are not aware of any studies that have examined the process of ethical decision making with Intelligence Augmentation (IA). We aim to address that gap with this study. We amalgamate the literature in both fields of research and propose, but do not attempt to validate empirically, propositions and belief statements based on a synthesis of the existing literature, observation, logic, and empirical analogy. We aim to test these propositions in future studies.