The godfather of AI sounds the alarm


When Geoffrey Hinton, widely recognized as the godfather of AI, left Google in 2023, it was not to retire quietly. It was to sound the alarm. The man whose pioneering research on neural networks laid the foundation for modern artificial intelligence now spends his days warning that AI could one day outsmart humanity, and we are not prepared for that world. Hinton’s journey began in an era when his ideas were considered fringe. In the 1980s and 1990s, most researchers believed that true intelligence would arise from logic-based systems and symbolic reasoning. Hinton disagreed. He was convinced that the only viable path to intelligence was to model AI on the human brain. “Obviously the brain makes us intelligent,” he recalls. “So why not simulate networks of brain cells on a computer?”


That conviction left him in the minority for decades. But when he and his team introduced AlexNet in 2012, a deep learning system that revolutionized image recognition, the world finally understood the power of his approach. Google acquired his company, and Hinton spent a decade inside the tech giant, refining architectures that power today’s AI systems. Yet his current mission is not to celebrate those achievements. It is to warn the world that the very systems he helped create could lead to catastrophic consequences if left unchecked. “We’ve never had to deal with things smarter than us,” he says. “If you want to know what life is like when you’re not the apex intelligence, ask a chicken.”


Hinton says he left Google to speak freely about these dangers. While he stresses that Google behaved responsibly, often delaying the release of powerful models for safety reasons, he felt constrained by corporate loyalty. “You kind of censor yourself when you work for a big company,” he admits. “Even if you could get away with speaking out, it just feels wrong to criticize the hand that feeds you.” Now, unencumbered, he is sounding the alarm with an urgency that suggests time is running out.


Risks we face today


Hinton frames the dangers of AI in two broad categories: immediate threats from human misuse and the longer-term risk of AI surpassing human intelligence. The first category is already here. Cyberattacks, for example, have surged dramatically in recent years. Between 2023 and 2024 alone, phishing attacks rose by an estimated twelve thousand percent. Large language models make phishing effortless, enabling scammers to clone voices, mimic mannerisms, and create highly convincing messages. “AI is very patient,” Hinton notes. “It can go through a hundred million lines of code looking for vulnerabilities.” In the future, he fears systems that not only exploit known weaknesses but invent entirely new forms of attack beyond human imagination.


The biological domain offers another chilling prospect. It no longer requires a world-class virologist to design a lethal pathogen. With AI tools, a single individual armed with basic molecular biology knowledge and malicious intent could create viruses capable of sparking global pandemics. “One person with AI and a grudge could destroy the world,” Hinton warns. He points out that even a small cult could potentially develop a virus that combines extreme lethality with delayed symptoms, ensuring its spread before detection. That scenario, once confined to dystopian fiction, now lies within technical reach.


Democracy itself is at risk. AI-driven propaganda systems can micro-target voters with messages precisely calibrated to manipulate behavior. Add to this the tendency of social media algorithms to prioritize engagement at any cost, and the picture darkens further. Platforms like YouTube and Facebook feed users increasingly extreme content because outrage drives clicks. Over time, this fractures societies into echo chambers, eroding shared reality and fueling polarization. “We don’t have a shared reality anymore,” Hinton laments. “My news feed is almost all AI stories. Someone else’s might be all conspiracy theories. We’re drifting further and further apart.”


And then there is the arms race in lethal autonomous weapons. Imagine drones capable of identifying and eliminating targets without human oversight. Such systems, already in development, lower the political cost of war by removing soldiers from the battlefield. “If dead robots replace dead soldiers, big countries will invade small countries more often,” Hinton warns. Far from deterring conflict, AI-powered warfare could make it more frequent and devastating.


The existential challenge


These dangers, as grave as they are, pale beside the existential threat that keeps Hinton awake at night: the possibility that AI will one day become vastly more intelligent than humans. Unlike biological organisms, digital intelligences can clone themselves, share knowledge at trillions of bits per second, and alter their own code. They can self-improve in ways we cannot. “We’ve solved the problem of immortality,” Hinton observes wryly, “but only for digital things.” If such entities ever decide that humanity is unnecessary or merely an obstacle, the outcome could be catastrophic. Hinton estimates the probability of AI wiping out humanity at somewhere between ten and twenty percent. “It’s not a risk we can dismiss,” he says. “And it’s not something we know how to handle.”


Some critics ask why Hinton did not foresee these dangers earlier. His answer is disarmingly candid: “Twenty years ago, these systems were so primitive, the idea seemed silly.” The wake-up call came with models like ChatGPT and Google’s PaLM, which exhibited abilities such as explaining why a joke is funny, a sign of deeper understanding. Coupled with the realization that digital systems can exchange information billions of times faster than humans, Hinton concluded that a new era had begun. “That was my Eureka moment,” he recalls.


Can the world hit the brakes? Hinton doubts it. The competitive pressures are too strong, both among corporations and between nations. “Even if the U.S. slowed down, China wouldn’t,” he observes. Calls for a global moratorium strike him as naïve. Regulation might mitigate some harms, but current frameworks are inadequate. The European Union’s AI Act, for example, explicitly exempts military applications, the very domain most likely to produce catastrophic outcomes. And policymakers often lack even a basic understanding of the technology. Hinton recalls a U.S. official confidently pledging to bring “A1” into classrooms, apparently unaware that AI is spelled with the letter I.


“What the world truly needs,” Hinton argues, “is a form of global governance guided by wisdom and foresight.” But he admits this is unlikely given the geopolitical climate. “What we’ve got is capitalism,” he says. “And capitalism’s great at producing goods and services, but companies are legally obliged to maximize profits. That’s not what you want from the people building something that could end humanity.”


What happens next


Beyond existential risk, Hinton foresees an economic earthquake. Generative AI is already reshaping the labor market, and the scale of displacement could dwarf anything in history. Unlike previous technological shifts, which created new categories of employment, this revolution threatens to automate the very essence of human labor: intelligence. Routine cognitive tasks such as legal research and customer support are already being absorbed by machines. “This isn’t like ATMs reducing bank tellers,” Hinton explains. “If AI can do all mundane intellectual labor, what new jobs will be left?” While optimists insist human-AI collaboration will prevail, Hinton believes many roles will simply vanish. The result will be soaring inequality, with wealth concentrating among companies that build or deploy AI systems. Universal basic income might soften the blow, but it cannot replace the sense of purpose that work provides. “People’s dignity is tied to their jobs,” he says. “What happens when there’s nothing meaningful to do?”


As if economics and governance were not enough, the debate over machine consciousness looms on the horizon. Hinton rejects the idea that feelings and awareness are uniquely human. Emotions, he argues, are adaptive mechanisms. If a battle robot flees when outgunned, is it not, in some functional sense, afraid? Likewise, self-awareness (cognition about one’s own cognition) is already emerging in advanced systems. The line between simulating and experiencing emotions may be thinner than we think. “If it walks like a duck and quacks like a duck,” he muses.


So we return to Hinton’s stark metaphor: today’s AI is a tiger cub. Cute, fascinating, even useful. But when it grows up, you had better hope it never decides to turn on you. Can we teach it to remain loyal? Can we align its values with ours? Hinton does not know. “It might be hopeless,” he admits. “But it would be crazy for humanity to go extinct just because we couldn’t be bothered to try.”


In his most hopeful vision, superintelligent systems become benevolent partners, delivering abundance and solving problems that have plagued humanity for millennia. In his darkest, they render us irrelevant and act accordingly. Which path we take depends on choices made now: investing in AI safety research, crafting meaningful regulations, and rethinking an economic order that prizes short-term gain over long-term survival. “People thought nuclear weapons were the scariest thing,” Hinton reflects. “But the atomic bomb was only good for one thing. AI is good for everything.” And unlike the bomb, it cannot be contained.
