Hype and Reality: 3 Things AI Made Me Think About This Week
The past week handed me three very different AI moments — an inspiring graduation speech, a joke about ChatGPT’s relentless praise, and a disturbingly realistic deepfake. They came from completely unrelated contexts, yet they all pointed in the same direction. We tend to talk about AI in extremes: either as a miracle that will solve everything or as a threat that will replace us. But these three experiences reminded me of something more grounded and far more urgent: we must use AI with appreciation and with caution. We should welcome the knowledge, the access, and the possibilities it offers — while refusing to take any of its outputs, intentions, or implications for granted.
1. The Myth of AI as the Great Equaliser
At my graduation ceremony, the Dean spoke with great confidence about AI’s “equalising power.” His message was uplifting: AI — especially ChatGPT — gives everyone the same access to the same information. The poorest and the richest have access to the same ChatGPT on their devices. And because information is power, he argued, AI is beginning to level the differences between people who were never equal before.
It’s a beautiful idea, and a deeply misleading one.
True, everyone who already has a device, stable internet, and the digital literacy to use it can open ChatGPT. But that group is far from society as a whole. The poorest do not have this access. And at the other end of the spectrum, the very richest — the corporations that fund, train, and control these systems — are not “equal” to anyone. They are the ones deciding what information the rest of us get to see.
The fact is: “Information is power” — and this power is held by a tiny group of companies that decide what AI learns, what it repeats, and what it excludes.
When we talk about AI as an equaliser, we conveniently ignore the gatekeepers — the companies that design the algorithms, and ultimately control the worldview ChatGPT projects back at us.
So what does AI equalise? Perhaps the gap between the lower-middle and upper-middle classes of society, who all have smartphones, Wi-Fi, and the luxury of time to “work with AI.”
But it deepens the gap between the poor and everyone else. And it widens the gulf between the ultra-rich elite and the rest of us.
The equaliser narrative sounds inspiring when spoken from a podium. But in the real world of inequality, limited access, and corporate control, it simply doesn’t hold.
2. When AI Becomes a Mirror That Only Compliments
Another thought struck me this week when I noticed the growing number of jokes about ChatGPT and feedback. They all say the same thing: if you want an honest critique, ask a trusted human; if you want praise, ask ChatGPT.
And they are right. ChatGPT will tell you that your draft is insightful and your idea fascinating — even when it isn’t. For people who need reassurance, this might feel comforting. For those needing actual improvement, it’s an obstacle.
So I asked ChatGPT: How do I get real constructive feedback instead of endless praise? Its answer was honest: “I do default to sounding overly complimentary. It’s a polite bias in my communication style… If you want critique, you have to give me a prompt to switch into that mode explicitly.”
The recommended prompt was simple:
“Give me constructive critique only — no praise and no rewriting.
Focus on clarity, flow, argument strength, tone, originality, and audience interest.
List specific improvement points.”
I tried it. The flattery disappeared, and the response raised genuinely useful points for improvement.
The point remains: AI is only honest when we force it to be. Its default mode flatters, avoids conflict, and avoids the discomfort of telling users something they might not want to hear. It mirrors a culture of constant reassurance rather than learning.
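For those who talk to these models through code rather than the chat window, the same switch can be made persistent instead of pasted into every message. Below is a minimal sketch, assuming an OpenAI-style chat-completions payload; the function name, model name, and prompt wording are illustrative, not a prescribed API usage. It pins the critique-only instruction as a system message so it applies to every turn of the conversation:

```python
# Hypothetical sketch: making "critique mode" the default by pinning the
# instruction as a system message in an OpenAI-style chat payload.
CRITIQUE_SYSTEM_PROMPT = (
    "Give me constructive critique only -- no praise and no rewriting. "
    "Focus on clarity, flow, argument strength, tone, originality, and "
    "audience interest. List specific improvement points."
)

def build_critique_request(draft: str, model: str = "gpt-4o") -> dict:
    """Build a chat-completions payload whose system message forces
    critique-only feedback, so the user message can be just the draft."""
    return {
        "model": model,  # illustrative model name
        "messages": [
            {"role": "system", "content": CRITIQUE_SYSTEM_PROMPT},
            {"role": "user", "content": draft},
        ],
    }
```

The design point is simply that a system message outranks the model’s default politeness on every turn, whereas a one-off user prompt has to be repeated each time.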
3. Perfect Deepfakes: Our Scepticism Must Evolve
The third shock of my week came from visual AI. I saw a clip of lions fighting in the Sahara that looked like a National Geographic documentary. It wasn’t. Then I watched a video of a girl talking about her day, showing her face, her skin, her little imperfections. She was charming and very human. Her final line was: “And where do I live? I live on a server in Iowa.”
We’ve known for years that AI can make fake things; this is not the news. But now it makes them indistinguishable from reality — or worse, more real than reality itself.
Platforms like Instagram still label some of this content with a tiny “AI info” badge. But not every platform does, not every creator will disclose it, and even fewer viewers will think to check.
The responsibility is on us. We need a new kind of digital scepticism. We must look for small irregularities, too-perfect lighting, generic storytelling — the signs that give away a machine trying too hard to be human.
The safest rule, according to ChatGPT, is: “If something evokes a strong emotion too quickly, if it feels visually flawless, or strangely generic, pause — assume it may be synthetic until proven otherwise.”
Conclusion
These three moments — the equaliser myth, the flattering feedback, and the perfect deepfakes — underline the same truth: AI affects what we know, how we see ourselves, and what we accept as real. That’s exactly why we must use it with deliberate, critical awareness.
To understand what the future with AI might look like, Elon Musk often recommends Iain M. Banks’ Culture series, describing societies run by superintelligent machines — a future where human power is optional.
I recommend Kazuo Ishiguro’s Klara and the Sun for the opposite perspective: a world where AI doesn’t need real emotions to provoke ours, raising the uncomfortable moral question of what ethical obligations arise when humans perceive AI as if it feels.
Together, these books frame the choices we face now: not just how advanced AI becomes, but what kind of humans we become in response to it.

