Just war and robots' killings

Thomas W. Simpson, Vincent C. Müller

Research output: Contribution to journal › Article › Academic › Peer-reviewed

46 Citations (Scopus)

Abstract

May lethal autonomous weapons systems ('killer robots') be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of rights assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the crucial moral question is not one of responsibility. Rather, it is whether the technology can satisfy the requirements of fairness in the redistribution of risk. Not only is this possible in principle, but some killer robots will actually satisfy these requirements. An implication of our argument is that there is a public responsibility to regulate killer robots' design and manufacture.

Original language: English
Pages (from-to): 302-322
Number of pages: 21
Journal: The Philosophical Quarterly
Volume: 66
Issue number: 263
DOIs
Publication status: Published - 6 Jan 2016
Externally published: Yes

Keywords

  • killer robots
  • lethal autonomous weapons systems
  • responsibility trilemma
  • risk imposition
  • tolerance level
