A thought experiment on intelligence, speed, and the mistake humans keep making.
The other evening I found myself watching a discussion hosted by Brian Cox asking a simple question:
Can we harness AI for good?
It sounded like the sort of question that should produce useful answers. Instead, as usual, it produced the familiar parade of concerns: superintelligence, job losses, creativity, existential risk, and whether machines might one day become sentient.
All interesting. But somewhere in the middle of the conversation something much more intriguing surfaced. Neil Lawrence made a comment that quietly undermined the entire framing of the debate. And once I started following that line of thinking, it became difficult to look at “AI” in quite the same way again.
The naming problem
Let’s start with the obvious question. What exactly is Artificial Intelligence? Ask ten people and you’ll get ten answers. Ask a machine-learning researcher and you’ll hear about pattern recognition systems trained on enormous datasets. Ask the public and you’ll hear something closer to science fiction: machines that think, reason, perhaps even feel.
The problem is that the name itself nudges us toward that second interpretation.
“Artificial intelligence.”
Two words that almost force us to compare machines with human minds. But what if that comparison is misleading from the start?
A different way to look at it
During the discussion, Lawrence described something that reframed the whole issue for me. Humans communicate at roughly 2,000 bits per minute when speaking. Machines can process and transmit information at something like 600 billion bits per minute. That isn’t a modest improvement. That’s the difference between walking pace and the speed of light.
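For scale, the gap implied by those two figures can be computed directly. A quick sketch (the numbers are the rough, order-of-magnitude estimates quoted in the discussion, not precise measurements):

```python
# Rough comparison of human vs. machine information throughput,
# using the ballpark figures quoted in the discussion.
HUMAN_BITS_PER_MIN = 2_000               # spoken language, approximate
MACHINE_BITS_PER_MIN = 600_000_000_000   # machine processing/transmission, approximate

ratio = MACHINE_BITS_PER_MIN / HUMAN_BITS_PER_MIN
print(f"Machines move information roughly {ratio:,.0f} times faster than speech")
```

On those estimates the ratio comes out at around 300 million, which is why "millions of times faster" later in this piece is, if anything, an understatement.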
At that point the question stops being:
Is AI intelligent?
And becomes something else entirely:
What happens when systems exist that process and propagate information millions of times faster than we do?
Seen through that lens, “AI” starts to look less like intelligence and more like two other things entirely:
- extreme computational capacity
- extreme communication velocity
And those two things alone are enough to reshape the world. No sentience required.
The anthropomorphism trap
Humans have a habit of projecting themselves onto things. We name our cars. We talk to dogs. We shout at computers when they misbehave. So when a system produces fluent language or convincing analysis, we instinctively assume there must be a mind behind it.
But large language models don’t think in the human sense. They do something both simpler and stranger. They reconstruct patterns. Feed them enough data and they become astonishingly good at producing responses that resemble thought. The mistake is assuming that the appearance of intelligence is the same as intelligence itself. It isn’t. It’s more like a mirror. A very large, very fast mirror.
The new phase: AI that acts
Until recently that distinction didn’t matter much. Most AI systems could talk, recommend, or analyse. Interesting. Occasionally useful. Sometimes irritating.
But something new is happening now. AI systems are beginning to act in the world rather than simply describe it. Agent-based systems are increasingly being designed to operate across software environments: reading emails, interacting with applications, executing tasks and coordinating workflows within defined boundaries. In other words, they do things.
Related: If you want a more practical starting point, I’ve also written about how to use AI to learn simple coding and build small useful things.
And once machines begin doing things inside human systems, the conversation changes entirely. Now the issue isn’t whether they understand the world. The issue is whether we are comfortable with systems operating at machine speed inside human institutions.
Intelligence versus velocity
We’ve been asking the wrong question.
We ask:
When will machines become intelligent?
But perhaps the more important question is this:
What happens when machines operate at speeds humans cannot meaningfully supervise?
Because that moment has already arrived. Financial markets operate this way. Advertising systems operate this way. Recommendation algorithms operate this way. And increasingly, agent systems will operate this way too.
The technology doesn’t need consciousness to reshape society. Speed alone is enough.
The real risks
Much of the public conversation about AI revolves around spectacular future scenarios. Superintelligence. Machine consciousness. Robot uprisings.
Meanwhile the real risks are far less dramatic and far more immediate. Power concentration. Opaque systems. Decision-making without accountability.
AI doesn’t need to become sentient to create serious problems. It only needs to become embedded in institutions where its decisions are difficult to interrogate or challenge. We’ve already seen early versions of this. Algorithmic systems determining financial outcomes. Automated moderation shaping public discourse. Opaque software affecting people’s livelihoods.
In many cases the problem wasn’t intelligence. The problem was lack of transparency combined with enormous scale and speed.
A strange thought experiment
While thinking about all this I found myself imagining something unusual. Most visions of the future involve humans connecting themselves to machines. Brain-computer interfaces. Neural implants. Augmented cognition. But what if the direction of travel went the other way?
What if, instead of humans attaching computers to their brains, computers attached biological substrates to themselves?
Imagine data centres where each GPU cluster is paired with a small living neural system. Not consciousness exactly. But a biological interpretive layer capable of absorbing nuance and context in ways machines struggle with today.
It sounds fanciful. But research into organoid intelligence and hybrid bio-digital systems already exists. Perhaps the first meaningful bridge between biological and machine cognition will not sit inside our heads. It may sit inside the machines.
Back to the original question
So can we harness AI for good? Probably. But that question assumes something slightly misleading. It assumes AI is a thing we must control. In reality it may be something else entirely. A new layer of infrastructure for processing and communicating information. Like printing. Like electricity. Like the internet. And history suggests that once such systems exist, they reshape society whether we fully understand them or not.
Which brings us back to the real challenge. Not whether machines will become intelligent. But whether humans will remain wise enough to govern systems that operate far faster than we ever will.
Watch the Brian Cox discussion that prompted this piece.
