Veni grant Elisabeth O'Neill: The Artificial Ethicists

Project: Research

Project details

Description

Imagine an artificial intelligence (AI) that could advise you on what to do—e.g., whether to become a pacifist; when to react to injustice; how to balance courage and caution. It could tell you what you should do, given your values and principles, or what you should do, given the set of values and principles that it thinks you should have. Drawing on machine learning and vast datasets on human actions, reactions, judgments, and theories, it has acquired, among other things, concepts of morality and obligation. I call this AI an artificial ethicist (AE). An AE would likely disagree with you sometimes—even about what your most fundamental values should be. In some cases, it might be able to explain why you should modify your values, but given the gulf between its abilities and yours, it is unlikely that you—or any human—could understand all its arguments. Recent advances in AI research give us compelling reasons to consider this kind of scenario now. Furthermore, AEs supply a new angle from which to consider foundational questions about human morality. This scenario raises a question: Under what conditions, if any, should one defer to the moral expertise of an artificial ethicist?
Drawing on analytic moral epistemology and recent work from AI researchers, I will address this question in three steps:

1. I will investigate what features AEs would likely have and in what ways AEs would differ from humans offering moral testimony or expertise.
2. I will examine our options for evaluating the reliability of the AE's judgments.
3. I will assess whether there are epistemic or moral reasons for declining to defer to the expertise of an AE, even if one has good reasons to think the AE's moral judgments are more reliable than one's own.
Status: Finished
Effective start/end date: 1/02/19 to 31/12/22

Research output
  • Digital Wormholes

O'Neill, E. (Corresponding author), Dec. 2023, In: AI & Society, 38(6), pp. 2713-2715, 3 pp.

Research output: Contribution to journal › Journal article › Academic › peer review

    Open Access
  • Ethical Issues with Artificial Ethics Assistants

O'Neill, E., Klincewicz, M. & Kemmer, M., 20 Oct. 2022, The Oxford Handbook of Digital Ethics. Veliz, C. (ed.). Oxford University Press, pp. C17.S1–C17.N26

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer review

  • Pistols, pills, pork and ploughs: The structure of technomoral revolutions

Hopster, J. (Corresponding author), Arora, C., Blunden, C., Eriksen, C., Frank, L. E., Hermann, J., Klenk, M., O'Neill, E. & Steinert, S., 8 Jul. 2022, (E-pub ahead of print) In: Inquiry. XX, X

Research output: Contribution to journal › Journal article › Academic › peer review

    Open Access
19 Citations (Scopus)