Category: Tech & Tools

  • AI is the wrong name

    A thought experiment on intelligence, speed, and the mistake humans keep making.

    The other evening I found myself watching a discussion hosted by Brian Cox asking a simple question:

    Can we harness AI for good?

    It sounded like the sort of question that should produce useful answers. Instead, as usual, it produced the familiar parade of concerns: superintelligence, job losses, creativity, existential risk, and whether machines might one day become sentient.

    All interesting. But somewhere in the middle of the conversation something much more intriguing surfaced. Neil Lawrence made a comment that quietly undermined the entire framing of the debate. And once I started following that line of thinking, it became difficult to look at “AI” in quite the same way again.

    The naming problem

    Let’s start with the obvious question. What exactly is Artificial Intelligence? Ask ten people and you’ll get ten answers. Ask a machine-learning researcher and you’ll hear about pattern recognition systems trained on enormous datasets. Ask the public and you’ll hear something closer to science fiction: machines that think, reason, perhaps even feel.

    The problem is that the name itself nudges us toward that second interpretation.

    “Artificial intelligence.”

    Two words that almost force us to compare machines with human minds. But what if that comparison is misleading from the start?

    A different way to look at it

    During the discussion, Lawrence described something that reframed the whole issue for me. Humans communicate at roughly 2,000 bits per minute when speaking. Machines can process and transmit information at something like 600 billion bits per minute. Divide one by the other and you get a factor of about 300 million. That isn’t a modest improvement. That’s roughly the difference between walking pace and the speed of light.

    At that point the question stops being:

    Is AI intelligent?

    And becomes something else entirely:

    What happens when systems exist that process and propagate information millions of times faster than we do?

    Seen through that lens, “AI” starts to look less like intelligence and more like two other things entirely:

    • extreme computational capacity
    • extreme communication velocity

    And those two things alone are enough to reshape the world. No sentience required.

    The anthropomorphism trap

    Humans have a habit of projecting themselves onto things. We name our cars. We talk to dogs. We shout at computers when they misbehave. So when a system produces fluent language or convincing analysis, we instinctively assume there must be a mind behind it.

    But large language models don’t think in the human sense. They do something both simpler and stranger. They reconstruct patterns. Feed them enough data and they become astonishingly good at producing responses that resemble thought. The mistake is assuming that the appearance of intelligence is the same as intelligence itself. It isn’t. It’s more like a mirror. A very large, very fast mirror.

    The new phase: AI that acts

    Until recently that distinction didn’t matter much. Most AI systems could talk, recommend, or analyse. Interesting. Occasionally useful. Sometimes irritating.

    But something new is happening now. AI systems are beginning to act in the world rather than simply describe it. Agent-based systems are increasingly being designed to operate across software environments: reading emails, interacting with applications, executing tasks and coordinating workflows within defined boundaries. In other words, they do things.

    Related: If you want a more practical starting point, I’ve also written about how to use AI to learn simple coding and build small useful things.

    And once machines begin doing things inside human systems, the conversation changes entirely. Now the issue isn’t whether they understand the world. The issue is whether we are comfortable with systems operating at machine speed inside human institutions.

    Intelligence versus velocity

    We’ve been asking the wrong question.

    We ask:

    When will machines become intelligent?

    But perhaps the more important question is this:

    What happens when machines operate at speeds humans cannot meaningfully supervise?

    Because that moment has already arrived. Financial markets operate this way. Advertising systems operate this way. Recommendation algorithms operate this way. And increasingly, agent systems will operate this way too.

    The technology doesn’t need consciousness to reshape society. Speed alone is enough.

    The real risks

    Much of the public conversation about AI revolves around spectacular future scenarios. Superintelligence. Machine consciousness. Robot uprisings.

    Meanwhile the real risks are far less dramatic and far more immediate. Power concentration. Opaque systems. Decision-making without accountability.

    AI doesn’t need to become sentient to create serious problems. It only needs to become embedded in institutions where its decisions are difficult to interrogate or challenge. We’ve already seen early versions of this. Algorithmic systems determining financial outcomes. Automated moderation shaping public discourse. Opaque software affecting people’s livelihoods.

    In many cases the problem wasn’t intelligence. The problem was lack of transparency combined with enormous scale and speed.

    A strange thought experiment

    While thinking about all this I found myself imagining something unusual. Most visions of the future involve humans connecting themselves to machines. Brain-computer interfaces. Neural implants. Augmented cognition. But what if the direction of travel went the other way?

    What if, instead of humans attaching computers to their brains, computers attached biological substrates to themselves?

    Imagine data centres where each GPU cluster is paired with a small living neural system. Not consciousness exactly. But a biological interpretive layer capable of absorbing nuance and context in ways machines struggle with today.

    It sounds fanciful. But research into organoid intelligence and hybrid bio-digital systems already exists. Perhaps the first meaningful bridge between biological and machine cognition will not sit inside our heads. It may sit inside the machines.

    Back to the original question

    So can we harness AI for good? Probably. But that question assumes something slightly misleading. It assumes AI is a thing we must control. In reality it may be something else entirely. A new layer of infrastructure for processing and communicating information. Like printing. Like electricity. Like the internet. And history suggests that once such systems exist, they reshape society whether we fully understand them or not.

    Which brings us back to the real challenge. Not whether machines will become intelligent. But whether humans will remain wise enough to govern systems that operate far faster than we ever will.

    Watch the Brian Cox discussion that prompted this piece.

  • Learn to Code with AI: A Smarter, Simpler Way to Start

    You don’t need to be a 22-year-old hoodie-wearing prodigy to learn to code. You don’t even need to know what a “for loop” is. What you do need is the right mindset — and now, thanks to generative AI, the right tool at your side.

    If you’ve ever been curious about learning to code but felt overwhelmed, bored, or out of your depth, this post is for you. AI makes coding more accessible than ever — and not just for techies. In this short guide, you’ll learn how to start coding with AI and build real skills quickly.


    Why Learn to Code in the First Place?

    Before we dive into how AI helps, let’s answer the bigger question: why bother?

    • Automate boring stuff (e.g. sorting files, scraping data, building tools for your side hustle)
    • Communicate better with developers at work or on freelance projects
    • Think logically and solve problems even if you never write production-grade code

    For me, it’s about unlocking freedom. Code is power — and not the intimidating kind.


    How AI Can Teach You to Code (Without Driving You Mad)

    1. AI Doesn’t Judge — It Coaches

    Tools like ChatGPT, GitHub Copilot, and Replit’s Ghostwriter let you:

    • Ask “dumb” questions without embarrassment
    • Get instant explanations, suggestions, and corrections
    • See examples in context, not in a vacuum

    You’re not stuck Googling vague syntax errors at 11pm anymore. You’re having a conversation.


    2. You Learn by Doing — Not Memorising

    Instead of slogging through hours of video tutorials or “Hello World” exercises, you can:

    • Ask AI to explain what code does in plain English
    • Build small, useful things straight away
    • Iterate fast with feedback

    Here’s a real example I tried:

    “Help me build a Python script that renames all files in a folder to lowercase.”

    Not only did I get the code, I got step-by-step guidance — and I understood it.
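
    For reference, here’s roughly the kind of script that prompt produces. Treat it as a minimal sketch rather than ChatGPT’s exact output; the folder name is a placeholder you’d swap for your own.

      from pathlib import Path

      def rename_to_lowercase(folder: str) -> None:
          # Rename every file in the folder so its name is all lowercase.
          for path in Path(folder).iterdir():
              if path.is_file() and path.name != path.name.lower():
                  path.rename(path.with_name(path.name.lower()))

      if __name__ == "__main__":
          rename_to_lowercase("my_folder")  # placeholder: point this at your own folder

    If anything in there looks unfamiliar (pathlib, the __main__ check), ask the AI to explain it line by line; that back-and-forth is where the learning actually happens.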


    3. AI Helps You Focus on Why, Not Just How

    Traditional coding tutorials throw jargon at you. AI lets you reverse that:

    • Start with what you want to build
    • Ask AI how to do it
    • Learn by refining your request

    You’re the creative director — AI’s the pair of hands.


    My Recommended Tools for Beginners

    • Gemini (Free or Pro) and/or ChatGPT (Free or Plus)
      Best for guidance, explanations, and beginner-friendly answers.
    • Replit (with Ghostwriter)
      Browser-based IDE that lets you run code, edit live, and get AI suggestions as you go.
    • Google Colab
      Great for Python experiments with built-in AI integration.
    • Glitch
      Ideal for simple web apps (HTML/CSS/JavaScript) with live previewing.

    What to Try First: 3 No-Fuss Mini Projects

    1. Build a calculator in Python
      Ask AI to help you build a basic CLI tool — then expand it to handle decimals or more operations (there’s a sketch of one just after this list).
    2. Create a webpage with HTML & CSS
      Ask ChatGPT to generate a simple portfolio layout, and tweak colours or sections with help.
    3. Automate renaming or sorting files
      Use a script to clean up your downloads folder — it’s nerdy but addictive.
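
    To make the first project concrete, here’s a minimal sketch of a four-operation command-line calculator in Python. It’s one way to start, not the only way, and the file name and usage format are just assumptions for the example.

      # calculator.py: run it as, for example, python calculator.py 3 + 4.5
      import sys

      def calculate(a: float, op: str, b: float) -> float:
          # Apply a single arithmetic operation to two numbers.
          if op == "+":
              return a + b
          if op == "-":
              return a - b
          if op == "x":
              return a * b
          if op == "/":
              return a / b
          raise ValueError(f"Unknown operator: {op}")

      if __name__ == "__main__":
          if len(sys.argv) != 4:
              print("Usage: python calculator.py <number> <+|-|x|/> <number>")
              sys.exit(1)
          print(calculate(float(sys.argv[1]), sys.argv[2], float(sys.argv[3])))

    Using x rather than * for multiplication sidesteps the shell expanding * into file names; once it runs, ask the AI to add more operations or friendlier error messages.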

    Final Thoughts: Learning to Code Isn’t What It Used to Be

    With AI, learning to code is less about grinding through textbooks and more like having a personal mentor who’s endlessly patient and always available. It’s not magic, but it feels a little magical the first time you solve a problem with your own code.

    You don’t have to become a full-time developer. But understanding the basics — and knowing how to work alongside AI — could be one of the smartest skills you add to your toolkit this year.


    What do you want to build?

    Drop me a note or leave a comment — what coding goal do you have in mind? I might write a follow-up showing exactly how I’d tackle it using AI.


    Frequently Asked Questions

    Can AI really help me learn to code?

    Absolutely. Tools like ChatGPT can walk you through code line-by-line, suggest fixes, and help you understand programming concepts in plain English. It’s like having a patient tutor on demand.


    What’s the best AI coding tool for beginners?

    ChatGPT is great for asking questions and writing small code snippets. Replit is ideal for running code live in your browser. GitHub Copilot works well inside VS Code, but may be better suited to users with a little more experience.


    Do I need to know any coding to use AI tools?

    Not really. You can start with zero knowledge. Many AI tools are designed to explain concepts and build your understanding as you go. You just need curiosity and a bit of patience.


    Is learning to code still worth it with AI around?

    Yes! If anything, it’s more valuable now. AI helps you learn faster and build more, but understanding code gives you superpowers in this new world. It’s like learning to think in a new language.