The AI Singularity — Existential Risk or Philosophical Fallacy?

On the friction of the slippery slope to superintelligence

Andreas Maier
10 min read · Oct 17, 2023
Image created by DALL-E 3

In today’s popular culture, and on social media in particular, we find sharply opposing views on whether AI poses an existential risk or no significant harm at all. Much of the discussion is conducted in an opinionated fashion, and the depths of the argument are only rarely presented in detail. In this article, we summarise its roots, starting from the “weak vs. strong AI” debate and moving to a detailed analysis of the main versions of the AI singularity argument. After weighing the pros and cons, we will see that there is in fact a quite clear answer to whether present-day AI is an existential risk or not.

Watercolor painting of two chessboards. On the left, a sophisticated robot plays chess with a human, demonstrating the capabilities of strong AI. On the right, a basic robot arm hesitates over a move, representing the limitations of weak AI. The background is soft and pastel, evoking a sense of calm reflection. Image created by DALL-E 3

Weak vs. Strong AI

The Stanford Encyclopedia of Philosophy provides an excellent introduction to this decades-long discussion in science and philosophy. In the following, we will summarize its main arguments:

Weak AI is concerned with the development of specialized systems capable of executing tasks that require human-level intelligence, albeit without the accompanying consciousness or…


Andreas Maier

I do research in Machine Learning. My positions include being Prof @FAU_Germany, President @DataDonors, and Board Member for Science & Technology @TimeMachineEU