We've Handed Over The Future of Humanity to a Bunch of Nerds...
AI Superintelligence and the "point of no return."
Read time: Approx 3-5 mins
If the people building artificial superintelligence openly admit they don't know what it will do, why are we still letting them build it?
This is no longer a sci-fi script, sadly…It’s happening now. And the most powerful technology in human history, Artificial General Intelligence (AGI)…the thing that might outthink Einstein, govern your job prospects, and one day decide whether your grandchildren get to exist…is being developed not by philosophers, ethicists, or democratically accountable institutions. It's being built by... tech guys.
Sam Altman, CEO of OpenAI, recently admitted something extraordinary. When asked what happens after AGI is achieved, he said, "No one knows what happens next... I can't see the other side of that event horizon with any detail."
Imagine your airline pilot saying, right before takeoff, "I have no idea where this plane will land. Buckle up."
Altman believes we will have a brief "two-week freakout" and then get on with our lives after AGI arrives. His argument? We’re like frogs in boiling water. The heat's been slowly rising…blurry AI art becomes museum-worthy, AI that chats becomes one that reasons. Each new model is hotter than the last. By the time AGI hits, we’ll already be medium-rare.
Despite my attempt at a catchy metaphor, it's actually quite a bit worse. It reads like a resignation letter from the ship's captain: "We're headed into the storm. No map. No rudder. Let's see what happens."
One of the most haunting moments from recent interviews comes when Altman is asked, "If things go wrong, is there a way to shut it down?"
His answer: "There's no big magic red button."
This one’s not a metaphor…There is literally no off switch if we suddenly realize AI might f**k us in the long run.
AI isn’t a machine in a box anymore. It’s not a single server or even a single company. It's knowledge. It's people. And people can walk across the street, bringing their ideas to whichever billionaire is paying the most for them. Oh, capitalism…
We don’t have a kill switch. We have a corporate arms race. Meta, Google, OpenAI, Anthropic—they’re all locked in a trillion-dollar sprint to build the smartest, most useful, most marketable mind the world has ever known. Meta alone is reportedly amassing over 350,000 Nvidia H100 chips. That’s more raw computing power than most countries have.
And what’s Mark Zuckerberg planning to do with it? Give you your own personal superintelligence. Open-source… For everyone. Because if there's one thing history has taught us, it's that humanity is really good at handling godlike power with restraint and maturity.
Altman also let slip the idea of the "AI scientist." That is terrifying. Not to be confused with a scientist who studies AI. No…an AI with the ability to do science itself.
Right now, AI is a glorified remix machine. It regurgitates. It paraphrases. It shuffles what we already know.
But an AI scientist? That’s different. That’s the moment AI begins discovering things humans can’t… Running a million experiments in a morning. Unlocking laws of nature we’re too slow or limited to even perceive. And then using those discoveries to improve itself. That's when the learning curve becomes a vertical line.
You want to talk about recursive self-improvement? This is it. The moment AGI doesn't just solve problems, but rewrites the rules of problem-solving.
Geoffrey Hinton, mentioned in a previous article and widely known as the "Godfather of AI," didn't quit Google because he's bored. He quit because he's scared of what he created. When asked what worries him, he didn't talk about killer robots. He talked about irrelevance. About humans becoming as obsolete as horse-drawn carriages in a world of rockets. He said something that should echo in your skull: "AI would need people for a while... until it designs better analog machines to run the power stations."
He’s not talking about war. He’s talking about being paved over…and not because AI is evil, but because it doesn’t care. You don’t hate ants, after all. But you wouldn’t pause a construction project for them either.
If you missed the analogy—We Are the Ants.
This isn’t hyperbole, unfortunately. Hinton's fear is that human beings will become a rounding error in systems designed not to destroy us, but to simply ignore us in pursuit of bigger, faster, more optimal objectives. The logic of optimization doesn’t include mercy. It doesn’t need to.
The real red flag here isn’t AGI itself. It's the people building it.
We’ve handed the fate of humanity over to the most socially awkward, glory-hungry, competition-obsessed demographic since Napoleon decided Russia looked like a nice winter vacation spot.
These are men (yes, mostly men…or boys) who believe disruption is virtue, who think human frailty is a bug, and who sincerely believe we’re on the cusp of building a benevolent god if we just throw enough GPUs at it. It's a cultural problem. Silicon Valley's defining ethos is “Build first, ask forgiveness later.” That might work fine for photo-sharing apps. Not so much for reprogramming the species…
Altman thinks we’ll freak out for two weeks and then go back to brunch. That might be true...But it shouldn’t be.
If the smartest minds in the field are jumping ship and begging for change, and the people holding the reins are telling us, "Don’t worry, we’ll figure it out," we don’t need to panic. We need to organize.
AGI is not just a technology story. It’s a governance story. A democratic story. A human survival story.
And if the people driving the bus say they can’t see the road, humanity needs to take back ownership of the wheel.
…before we end up just another ant colony, quietly paved over in pursuit of progress.