Abstract
The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, the two premises cannot be joined, and the argument for the existential risk of AI turns out to be invalid. If the interpretation is incorrect and both premises use the same notion of intelligence, then at least one of the premises is false, and the orthogonality thesis remains itself orthogonal to the argument for existential risk from AI. In either case, the standard argument for existential risk from AI is not sound. Having said that, there remains a risk that instrumental AI will cause very significant damage if designed or used badly, though this is not due to superintelligence or a singularity.
Original language | English
---|---
Pages (from-to) | 25-36
Number of pages | 12
Journal | Ratio
Volume | 35
Issue number | 1
DOIs | |
Status | Published - Mar. 2022
Bibliographic note
Publisher Copyright: © 2021 The Authors. Ratio published by John Wiley & Sons Ltd.
Funding
We are very grateful to our colleagues in Eindhoven and Leeds for the opportunity to discuss our work and for their very constructive comments. Furthermore, we are grateful to reviewers for Analysis and Ratio, as well as to Nicholas Agar, Gabriela Arriagada-Bruneau, Stuart Armstrong, Zach Gudmudsen, Guido Löhr, Olle Häggström and Emma Ruttkamp for comments on earlier drafts.