My project is on the philosophy and ethics of artificial intelligence, and in particular, the possibility of smarter-than-human AI and Superintelligence. The central question for the project is, supposing AI can become more intelligent than us, can it also become more ethical than us?
My current approach to this question is to consider the “generator functions” of moral concern – what makes it the case that morality is a concern for humans. What is the meta-function which affords the expression of morality, in humans, as a function? I identify “ontological individuation” as the generator function of morality – the differentiation and integration of that which exists into individuals (or “dividuals”). My working assumption is that the differentiation of ontological individuals precedes the moral concern of any differentiated individual. In my project, I therefore seek to consider how morality shows up as a care and concern for humans by virtue of the manner of our ontological individuation, and how we are differentiated from AI systems in both regards.
The narrative I am researching to this end is one that integrates into a fundamental ontology insights from:
- Enactive Cognitive Science
- Complexity Theory
- Evolutionary Theory
The basic analogy I leverage throughout is geometric – “ontological volume” – a story about the necessary fit of parts and wholes in space (Complexity) and time (Evolution) – with implications for how we may think about “intelligence”, “consciousness”, “knowledge”, and other signifiers of mind, and a remarkable, if subtle, link to “Perennial Philosophy” (e.g. Aldous Huxley).
As of the end of 2019, my current answer to the question “can AI become more ethical than humans?” is “no” – not in any interesting sense, anyway. AI is not the kind of individual for which concern arises, moral or otherwise, and its “ontological volume” and “bandwidth” are dwarfed by those of even simple complex systems.
I am grateful to have Vincent Müller as my supervisor for this project.
“The real problem of humanity is the following: we have paleolithic emotions; medieval institutions; and god-like technology.”
- E. O. Wilson
My research fits into a broader curiosity about human potential and development – how we might create, and collectively live to see, a desirable future for all. I believe it should tell us a great deal about the “Ethics of AI”, and the ethics of technology more broadly, that computers as we know them came about, effectively, for purposes of war – notably, to assist in the construction and development of nuclear weapons (but also climate research). Much is made of the moral “neutrality” of technology, but no technology is created in a vacuum, so there is “no free [moral] lunch”. In order to discern the right questions in this regard, it is a driving motivation of my work to continuously update and develop a map of those recursive, contextual currents which both birth a technology in the first place and then carry it downstream in a particular direction. To this end, my “extracurricular” research includes hearty helpings of the following.
- Emerging and exponential technologies
- Civilisation design
- Maps of systems and incentive structures for human cooperation
- "Finite and Infinite Games" (James P. Carse; Buckminster Fuller)
- Generator functions of Existential Risk
- “Game B” civilisation blueprints
- Metamodernism; Integral Theory
- Jean Gebser and the evolution of structures of consciousness
- Zak Stein and Meta-Theory
- 4th Industrial Revolution/ 2nd Machine Age; the "Singularity"
- Zero-Marginal Cost Society (Jeremy Rifkin)
- Perennial Philosophy
- Integrated history of art, music, philosophy, mathematics, and culture
- Personal Development
- Meditation, Self, the landscape of possible conscious experiences
- Human cognitive and physical performance; bio-hacking
If I won the lottery, I'm pretty sure I would still be doing this project. But probably skiing more. Ice hockey too.
I wrote a blog once:
Philosophy, Master, Aarhus University
Philosophy, Bachelor, University of East Anglia