The AI Singularity — Existential Risk or Philosophical Fallacy?

On the friction of the slippery slope to superintelligence

Andreas Maier

--

Image created by DALL-E 3

In today’s popular culture, and in particular on social media, we find sharply opposing views on whether AI poses an existential risk or no significant harm at all. Much of the discussion is conducted in an opinionated fashion, and the depth of the arguments is only rarely presented in detail. In this article, we summarize the debate from its roots in the “weak vs. strong AI” discussion to a detailed analysis of the main versions of the AI singularity argument. After weighing the pros and cons, we will see that there is actually a quite clear answer to whether present-day AI is an existential risk or not.

Watercolor painting of two chessboards: on the left, a sophisticated robot plays chess with a human, representing the capabilities of strong AI; on the right, a basic robot arm hesitates over a move, representing the limitations of weak AI. Image created by DALL-E 3

Weak vs. Strong AI

The Stanford Encyclopedia of Philosophy provides an excellent introduction to this decades-long discussion in science and philosophy. In the following, we summarize its main arguments:

Weak AI is concerned with the development of specialized systems capable of executing tasks that require human-level intelligence, albeit without the accompanying consciousness or understanding. In contrast, strong AI aspires to construct artificial entities that emulate human cognitive functions at human level, up to and including consciousness. The main argument for the feasibility of strong AI is that the brain is essentially a computing device; if that is so, human-level intelligence must also be achievable on other computing devices, such as personal computers.

A salient argument against the feasibility of strong AI is the Gödelian argument, initially proposed by J. R. Lucas and later elaborated upon by the physicist Roger Penrose. This argument leverages Gödel’s incompleteness theorems to suggest that there are inherent limits to what can be computed or formalized, thereby casting doubt on the notion that machines can ever achieve the full spectrum of human intelligence.
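To get a feel for the kind of limitative result Gödel’s theorems establish, it helps to look at Turing’s halting problem, their computational cousin. The following minimal Python sketch is our own illustration rather than part of Lucas’s or Penrose’s writings; the names halts and diagonal are hypothetical:

```python
def halts(f, x):
    """Hypothetical oracle: returns True iff f(x) eventually terminates.
    The argument below shows no such total analyzer can actually exist."""
    raise NotImplementedError("assumed only for the sake of contradiction")


def diagonal(f):
    """Do the opposite of whatever the oracle predicts for f run on itself."""
    if halts(f, f):      # if f(f) would halt...
        while True:      # ...loop forever instead
            pass
    return None          # otherwise, halt immediately


# Feeding diagonal to itself is contradictory:
#   if halts(diagonal, diagonal) is True, diagonal(diagonal) loops forever;
#   if it is False, diagonal(diagonal) halts.
# Either way the oracle is wrong, so no correct halts() can be written.
```

Lucas and Penrose contend that human mathematicians can see the truth of statements that such formal limits place beyond any given machine, while critics reply that human reasoning may well be subject to the very same limits.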

Hubert Dreyfus offers yet another critique, arguing that human expertise is not solely rooted in the symbolic manipulation of information. Dreyfus…
