
Are “AI Safety Experts” Fear-Mongering for Their Own Cause?

Understanding AI Safety NGOs and the “Escaping AI” Fear

Andreas Maier
Will AI break out of prison soon? Image created by DALL-E

Artificial intelligence (AI) safety NGOs have proliferated in recent years, especially in the US and EU. These organizations — such as FAR AI, the Future of Life Institute, the Machine Intelligence Research Institute, and the Center for AI Safety — claim to protect the public from AI risks. Their focus often includes extreme long-term scenarios like an AI “singularity” (a hypothetical point where AI surpasses human intelligence and escapes human control). In this post, we’ll examine who these groups are, where their funding and influence come from, and how credible their claims of rogue AI models “escaping” really are. We’ll break down the AI takeover risk by model type — especially large AI models edging toward general intelligence — and assess the technical feasibility of an AI spreading itself between machines. Along the way, we’ll look at some hard numbers (with sources) on model sizes, computational requirements, and comparisons to traditional computer viruses. Finally, we’ll consider how much public money is going into these AI safety efforts and whether their influence is warranted or just fear-mongering.
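To give a feel for the kind of back-of-envelope comparison we'll make later, here is a minimal sketch in Python. The parameter count (70 billion), 16-bit precision, virus payload size, and 100 Mbit/s bandwidth are illustrative assumptions, not measurements of any specific model or piece of malware.

```python
# Back-of-envelope sketch (illustrative only): how large is a frontier-scale
# model compared to a classic computer virus, and how long would copying it
# over a typical connection take? All figures below are assumptions.

def transfer_time_hours(size_bytes: float, bandwidth_bit_per_s: float) -> float:
    """Time in hours to move `size_bytes` over a link of `bandwidth_bit_per_s`."""
    return size_bytes * 8 / bandwidth_bit_per_s / 3600

# Assumed: a 70-billion-parameter model stored in 16-bit precision (2 bytes/param).
model_bytes = 70e9 * 2          # ~140 GB of weights
# Assumed: a classic worm/virus payload on the order of a few hundred kilobytes.
virus_bytes = 300e3

bandwidth = 100e6               # assumed 100 Mbit/s uplink

print(f"Model weights: {model_bytes / 1e9:.0f} GB, "
      f"~{transfer_time_hours(model_bytes, bandwidth):.1f} h to copy out")
print(f"Virus payload: {virus_bytes / 1e3:.0f} KB, "
      f"~{transfer_time_hours(virus_bytes, bandwidth) * 3600:.2f} s to spread")
```

Even under these rough assumptions, the weights of a large model are several orders of magnitude larger than a typical virus payload; that gap is the kind of number the later sections will pin down with sourced figures.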

Who Are These AI Safety Organizations?
