The AI Singularity — Existential Risk or Philosophical Fallacy?
On the friction of the slippery slope to superintelligence
In today’s popular culture, and especially on social media, we find starkly opposing views on whether AI poses an existential risk or no significant harm at all. Much of this discussion is conducted in an opinionated fashion, and the depths of the argument are only rarely presented in detail. In this article, we want to trace its roots from the “weak vs. strong AI” debate to a detailed analysis of the main versions of the AI singularity argument. After weighing the pros and cons, we will see that there is in fact a rather clear answer to whether present-day AI constitutes an existential risk.
Weak vs. Strong AI
The Stanford Encyclopedia of Philosophy provides an excellent introduction to this decades-long discussion in science and philosophy. In the following, we summarize its main arguments:
Weak AI is concerned with the development of specialized systems capable of executing tasks that require human-level intelligence, albeit without the accompanying consciousness or…