Abstract
From straightforward interactions to full-fledged open-ended dialogues, Conversational User Interfaces (CUIs) are designed to support end-user goals and follow their requests. As CUIs become more capable, investigating how to restrict or limit their ability to carry out user requests becomes increasingly critical. Currently, such intentionally constrained user interactions are accompanied by a generic explanation (e.g., "I'm sorry, but as an AI language model, I cannot say..."). We describe the role of moral bias in such user restrictions as a potential source of conflict between CUI users' autonomy and system characterisation as generated by CUI designers. Just as the users of CUIs have diverging moral viewpoints, so do CUI designers - which either intentionally or unintentionally affects how CUIs communicate. Mitigating user moral biases and making the moral viewpoints of CUI designers apparent is a critical path forward in CUI design. We describe how moral transparency in CUIs can support this goal, as exemplified through intelligent disobedience. Finally, we discuss the risks and rewards of moral transparency in CUIs and outline research opportunities to inform the design of future CUIs.
| Original language | English |
|---|---|
| Title | Proceedings of the 5th International Conference on Conversational User Interfaces, CUI 2023 |
| Publisher | Association for Computing Machinery, Inc |
| Electronic ISBN | 9798400700149 |
| DOIs | |
| Status | Published - 19 Jul 2023 |
| Event | 5th Conference on Conversational User Interfaces, CUI 2023 - Eindhoven, Netherlands. Duration: 19 Jul 2023 → 21 Jul 2023 |
Conference
| Conference | 5th Conference on Conversational User Interfaces, CUI 2023 |
|---|---|
| Country/Region | Netherlands |
| City | Eindhoven |
| Period | 19/07/23 → 21/07/23 |
Funding
This work is supported by the Carlsberg Foundation project ‘Algorithmic Explainability for Everyday Citizens’.