Risk and Harm: Unpacking Ideologies in the AI Discourse

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    8 Citations (Scopus)
    11 Downloads (Pure)

    Abstract

    We examine the ideological differences in the debate surrounding large language models (LLMs) and AI regulation, focusing on the contrasting positions of the Future of Life Institute (FLI) and the Distributed AI Research (DAIR) institute. The study employs a humanistic HCI methodology, applying narrative theory to HCI-related topics and analyzing the political differences between FLI and DAIR, as they are brought to bear on research on LLMs. Two conceptual lenses, “existential risk” and “ongoing harm,” are applied to reveal differing perspectives on AI's societal and cultural significance. Adopting a longtermist perspective, FLI prioritizes preventing existential risks, whereas DAIR emphasizes addressing ongoing harm and human rights violations. The analysis further discusses these organizations’ stances on risk priorities, AI regulation, and attribution of responsibility, ultimately revealing the diverse ideological underpinnings of the AI and LLMs debate. Our analysis highlights the need for more studies of longtermism's impact on vulnerable populations, and we urge HCI researchers to consider the subtle yet significant differences in the discourse on LLMs.
    Original language: English
    Title of host publication: CUI '23
    Subtitle of host publication: Proceedings of the 5th International Conference on Conversational User Interfaces
    Editors: Minha Lee, Cosmin Munteanu, Martin Porcheron, Johanne Trippas, Sarah Theres Völkel
    Place of publication: New York
    Publisher: Association for Computing Machinery, Inc.
    Number of pages: 6
    ISBN (Electronic): 979-8-4007-0014-9
    DOIs
    Publication status: Published - 19 Jul 2023
    Event: 5th Conference on Conversational User Interfaces, CUI 2023 - Eindhoven, Netherlands
    Duration: 19 Jul 2023 – 21 Jul 2023

    Conference

    Conference: 5th Conference on Conversational User Interfaces, CUI 2023
    Country/Territory: Netherlands
    City: Eindhoven
    Period: 19/07/23 – 21/07/23
    Other: International Conference on Conversational User Interfaces

    Keywords

    • Human Rights
    • AI
    • Politics
    • Longtermism
    • Large Language Models
    • Ideology
