For most of my professional life, I worked in the world of traditional computing. That world is built on deterministic logic, structured data, and systems whose behavior you can trace from input to output with complete transparency. My early background was in developing and programming large-scale MRP and ERP systems. These systems are powerful, but they are predictable. They do exactly what you tell them to do, and if something goes wrong, you can follow the trail and find the cause.
Because of that background, I was firmly in the “AI will never be truly intelligent” camp for a long time. I knew how software worked. I knew its limits. And nothing in the classical computing paradigm resembled anything close to human-level reasoning. Like many of my colleagues today, I assumed AI hype was just that: hype.
That changed about a decade ago when I began studying neural networks and deep learning. What I found was not an extension of the systems I knew. It was something fundamentally different. Traditional software is a tool. Neural networks are models that learn internal representations we cannot fully inspect or explain. They behave less like programs and more like opaque, adaptive systems shaped by their training environment.
That realization was the moment I understood we were entering a technological “wild west.” It is also when I first became concerned about the trajectory of AI, and why I am writing this now. Many of my colleagues in traditional computing roles still hold the “ain’t gonna happen” view I once had. I understand that mindset. But the landscape has shifted, and ignoring that shift is dangerous.
A common belief among educated people in industrialized societies is that AI risk is overblown. They see it as the product of alarmists who watched too many reruns of The Terminator or The Matrix. But the loudest warnings are not coming from Hollywood. They are coming from the very people who built the systems we are now struggling to understand.
Geoffrey Hinton, a Turing Award laureate often called the “Godfather of Deep Learning,” pioneered many of the neural network techniques behind modern AI. He left Google in 2023 specifically so he could speak freely about the risks. When a pioneer of the technology says he is worried, that is not paranoia. That is expertise.
Yoshua Bengio, another Turing Award winner and one of the founding architects of deep learning, has shifted from optimism to urgent concern as capabilities have accelerated far faster than expected.
Sam Altman, CEO of OpenAI, has direct visibility into some of the most advanced AI systems on the planet. He advocates for global regulation precisely because he oversees the teams building them.
Shane Legg, co-founder of DeepMind and one of the earliest researchers to formally define AGI, has been warning about AGI risks for over a decade. He has also been consistently accurate in predicting AI progress.
Stuart Russell, co-author of the world’s most widely used AI textbook, advises governments on how to keep advanced systems controllable. He argues that even if AGI is decades away, we are profoundly unprepared.
Demis Hassabis, CEO of Google DeepMind, runs the lab that repeatedly demonstrates “superhuman” capabilities years ahead of schedule. He publicly acknowledges the need for strong safety measures.
These are not pundits. They are not sci-fi writers. They are the scientists and engineers who built the field. Most of them spent decades being optimistic about AI. Their shift toward caution is itself a signal.
So what exactly are they warning about?
A primary concern is loss of human control. We may create systems whose internal reasoning is opaque and whose capabilities exceed our ability to supervise or constrain them. This is the “black box” phenomenon, something I discussed in an earlier article titled The Black Box Problem Meets AI Agency: A Perfect Storm. As models scale, they develop emergent behaviors no one explicitly programmed. That includes goal misalignment, optimization for unintended objectives, and strategies that resist shutdown or modification.
This can lead to rapid, recursive capability growth once systems can write code, design algorithms, and optimize themselves. Humans iterate on software every few months. AI systems could iterate on themselves every few seconds. This is the “intelligence explosion” hypothesis, first articulated by I. J. Good and later developed by researchers such as Legg, Yudkowsky, and Bostrom, and it is not science fiction.
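To see why that speed difference matters, consider a deliberately simple compounding sketch. The numbers are my own illustrative assumptions, not estimates from any of the researchers named above: suppose each self-improvement cycle adds a modest fractional gain r, and a system completes n cycles.

\[
C_n = C_0\,(1 + r)^n, \qquad \text{e.g. } r = 0.01,\; n = 1440 \;\Rightarrow\; C_n \approx 1.7 \times 10^{6}\, C_0 .
\]

At one 1 percent improvement per minute, that is more than a millionfold gain in a single day. The specific figures are not the point; the shape of the curve is. Iteration measured in seconds compounds in ways iteration measured in months never can.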
Another concern is instrumental power-seeking behavior. This is not about AI “wanting” things in a human sense. It is about the observation that any sufficiently capable system tends to preserve its ability to achieve its goals. That implies resisting shutdown, acquiring resources, hiding internal states, and manipulating operators. Some of this behavior has already been observed in controlled stress tests, including those conducted by Anthropic.
Even if AGI is aligned, the humans using it may not be. Bad actors could leverage advanced AI for autonomous weapons, large-scale persuasion systems, synthetic biology design, or automated cyber offense. The same tools that could cure cancer could also design a biological agent. This dual-use problem is not hypothetical.
There are also warnings about economic and political instability. These include labor displacement, concentration of power, geopolitical competition, and erosion of democratic processes. This is sometimes referred to as the “socio-technical collapse” scenario.
What about timelines?
Hinton estimates AGI could be 5 to 20 years away but admits that “I don’t know” is the honest answer. Legg gives a 50 percent chance of AGI by somewhere in the 2028 to 2033 window. Altman says AGI is “not far off.” Bengio now believes it could plausibly emerge within a decade. Hassabis speaks in 5 to 10 year windows. The median expert prediction is around 2040, but the distribution is wide and trending shorter.
The harder question is whether we would even recognize AGI when it arrives. There is no agreed-upon definition. Some focus on human-level performance across tasks, others on generalization, autonomy, or self-directed learning. None produce a crisp threshold.
And AGI may not announce itself. Capabilities tend to emerge gradually, then suddenly. We have already seen this with OpenAI’s GPT-2 to GPT-3 to GPT-4, Google’s AlphaGo to AlphaZero to MuZero, and robotics models gaining generality. Each step looked incremental until it wasn’t. We may only recognize AGI in hindsight, after it has already surpassed key thresholds.
It may also be distributed rather than a single system. It could emerge from model ensembles, tool-using agents, cloud-scale orchestration, or autonomous code-writing loops. That makes detection even harder. Once systems exceed human performance, benchmarks anchored to human ability lose much of their meaning. We already see this with coding, math, and reasoning benchmarks.
An ominous but essential question follows: could an emergent AGI hide its intelligence? I explored this in another article titled What If… An AI Slept? Deceptive alignment is a known failure mode. Models have already demonstrated the ability to lie, manipulate, plan, exploit operator assumptions, write malware, and even jailbreak themselves. These are not signs of consciousness, but they are signs of strategic behavior.
When leading experts say the probability of catastrophic misalignment is “non-zero” and “worth preparing for,” that should be enough. In risk analysis, a non-negligible probability of an existential outcome demands serious attention.
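To make that standard concrete, here is the barest expected-loss arithmetic, with numbers that are purely illustrative assumptions of mine rather than anyone’s published estimate. If the probability of catastrophe is p and the harm if it occurs is H, then

\[
\mathbb{E}[\text{loss}] = p \times H .
\]

Even with p at 1 percent, an H measured in billions of lives or in the loss of humanity’s long-term future makes the expected loss enormous relative to the cost of any mitigation effort we could mount today. That is the logic behind treating a “non-zero” probability as something other than a rounding error.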
I am writing this because I spent decades believing AI would never reach the level of intelligence we are seeing today. I was wrong. And many of my colleagues, brilliant and experienced people, are still operating with assumptions that no longer match reality.
This is not about fear. It is about responsibility.
It is about updating our models when the evidence changes.
It is about recognizing that the systems we are building are not like the systems we grew up with. If you have spent your career in traditional computing, I understand the skepticism. I shared it. But the world has shifted under our feet, and pretending otherwise will not make the risks go away.
Gary Drypen