AI Isn’t Magic Beans - But It’s Not a Bullshitter Either
What Caitlin Moran gets right - and what we still need to learn
Caitlin Moran’s recent column about AI is a welcome burst of clarity. It’s funny, grounded, and - like much of her writing - it works because it doesn’t try to sound clever. It just is.
She paints AI as the Emperor’s New Wunderkind - hailed as a prodigy by its handlers, paraded through headlines, but visibly fumbling the basics while everyone claps politely and pretends not to notice. Fake books. Glue pizza. Broken buttons coded by digital apprentices who don’t know they’re broken. She gives AI a good kicking, and it’s hard not to cheer her on. The spectacle deserves some heckling.
And she’s not wrong. Not entirely.
There’s a growing divide in how people are experiencing AI - and we don’t talk about it enough. On one side, you’ve got tech-bros and politicians declaring it the engine of the future, already revving. On the other, people like Moran are giving it a cautious prod, watching it spit out nonsense, and wondering how this became the main act.
The temptation is to see AI as either messiah or menace, depending on how many hallucinated book titles or software bugs you’ve encountered this week.
But maybe we’re missing something more obvious: AI hasn’t come from nowhere. It’s been trained on us - on our writing, our behaviours, our contradictions. So it’s not surprising that, like us, it’s flawed.
What we’re building isn’t some alien intelligence; it’s a reflection. If the reflection stutters, invents, misleads or flatters, it’s echoing patterns we’ve already laid down in the culture.
Every time it gets something wrong, we should pause - not just to laugh at it, but to recognise how much of that mistake was inherited.
That doesn’t excuse the failures. It just frames them.
AI can’t be better than us until we’re clearer about what “better” even means. And that starts with honesty - not just about what the technology is, but about what we’ve taught it to be.
Most people haven’t had any kind of orientation. They’ve been handed something complex and probabilistic, and told - often implicitly - that it’s ready to use. But they haven’t been shown how to prompt it. They haven’t been told how to check what it says. They don’t know when it’s bluffing, when it’s biased, or when it’s drawing from a source that may not even exist.
Of course people are frustrated. I would be too. You wouldn’t expect someone to fly a jet just because they’ve sat in an aisle seat. And yet we expect people to get meaningful results from a system trained on terabytes of human expression, with no instruction and no context.
But something interesting happens when you do learn how to use it.
Slowly, awkwardly, and not without friction, the tool starts to make itself useful.
Teachers are using it to differentiate materials in overburdened classrooms. Freelancers are using it to write faster, with more versions, more options, more room to play. Small business owners are drafting emails, creating social posts, and mapping customer journeys they’d never have had time for. Coders are using it to clean up tedious boilerplate and explore unfamiliar languages.
Translators, accessibility workers, journalists, designers, researchers - it’s not changing everything. But it is helping. Quietly, behind the headlines.
And the more time people spend with it, the less it feels like an oracle and the more it feels like a tool. Not an all-knowing machine. Just a better kind of autocomplete. One that you can direct, interrupt, improve.
A tool that says, “Here’s what I think you mean - tell me if I’m wrong.”
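If "a better kind of autocomplete" sounds glib, here is a minimal sketch of the core move a language model makes: scoring possible next words and sampling one. Everything below - the vocabulary, the probabilities - is invented purely for illustration; real models learn billions of such numbers from human writing.

```python
import random

# A toy next-word model: given the words so far, it assigns a
# probability to each candidate continuation. These three numbers
# are made up for illustration.
next_word_probs = {
    "the cat sat on the": {"mat": 0.6, "sofa": 0.3, "moon": 0.1},
}

def complete(prompt: str) -> str:
    """Choose the next word by weighted random draw - no lookup of
    facts, no understanding, just probability."""
    probs = next_word_probs[prompt]
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(complete("the cat sat on the"))  # usually "mat", occasionally "moon"
```

The occasional "moon" is the hallucination problem in miniature: a fluent, confident continuation with no notion of truth behind it. Which is exactly why the checking still falls to us.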
So who’s right - Moran or the evangelists?
Both, and neither.
Moran is right to call out the absurdities. But wrong to suggest the tool itself is a write-off.
The evangelists are right that AI is powerful. But wrong to pretend it’s ready for unsupervised use at scale.
The truth sits between the hype and the hopelessness.
AI won’t save us - and it won’t doom us. Because it’s not separate from us. It was trained on us. And it shows. Every hallucination, every flash of brilliance, every dull mistake - it’s all ours, reflected back.
What that means is: if AI is flawed, it’s not because it’s broken. It’s because it’s mirroring a world that already is. And that gives us a choice: do we accept the reflection as it is?
Or do we get better - so the tools we build can be better too?
We don’t need to believe in AI.
We need to engage with it critically.
And maybe that’s the real shift: not from human to machine, but from expectation to understanding.
Less “magic beans”.
More tools we learn to wield.
Carefully. Responsibly.
And together.
If you liked this piece, you may enjoy some more articles where I explore the liminal space between humanity and its technology:
Why AI Is Not “Stealing” Creativity: A Historical and Educational Perspective on Homage, Learning, and Innovation - I lent my Substack to Perplexity to respond.
The Story AI Can’t Tell - Why the future of filmmaking won’t be revolutionised by machines, no matter how fast they get.