Have you ever watched two perfectly functional adults argue with courtroom-level intensity about something utterly insignificant—say, the structural integrity of a stuffed aloo paratha (a potato-stuffed flatbread)—and thought, “Yes. This species is absolutely ready to handle the future”?
I have. And ever since that moment, I’ve slept remarkably well whenever Artificial Intelligence makes headlines.
Because the real crisis isn’t that silicon might suddenly sprout an ego and enslave us. The real crisis is that humanity—still running on an operating system primarily designed for arguing, tribalism, and drawing arbitrary lines—has now been handed a thermonuclear calculating tool.
AI isn’t the boogeyman. It’s the world’s fastest photocopier for our time-tested, glorious stupidity.
The Potato Epiphany
My philosophical awakening did not occur on a mountain top, in an ashram, or during a silent retreat. It happened at a sweaty little dhaba somewhere between Prayagraj and Varanasi, while I was patiently awaiting what I naïvely assumed would be a simple, life-sustaining aloo paratha.
Instead, I was treated to a gladiatorial debate.
Two men—otherwise peaceful, law-abiding citizens—were locked in mortal combat over the optimal ratio of spicy potato stuffing to outer dough. There were hand gestures. There were raised voices. There was an unmistakable sense that reputations, if not civilizations, were at stake.
And in that moment, clarity descended.
If humanity could argue about potatoes with this level of white-hot passion, then our future was never going to be calm, rational, or particularly well-organised. It was always destined to be a long-running sitcom of magnificent, self-inflicted chaos.
Which brings us, neatly and inevitably, to Artificial Intelligence.
AI: The Newest, Shiniest Anxiety
AI is the modern boogeyman—freshly imported from Silicon Valley and tucked snugly between “global pandemics” and “that time I accidentally hit Reply All on a company-wide email.”
The headlines are a marvel of emotional whiplash.
On Monday, AI will cure cancer, solve world hunger, and usher in an age of eternal productivity and guilt-free cocktails.
By Wednesday, it will enslave humanity, steal your job, manipulate elections, and trap us in a bleak underground server farm where we pedal bicycles to recharge its batteries.
There is, naturally, no middle ground.
But here’s a quieter, more uncomfortable question we don’t like asking:
Is AI really our existential nemesis?
Or is it simply the most sophisticated megaphone ever invented for amplifying our species’ already legendary capacity for blundering?
A Brief History of Human Screw-Ups (Pre-AI Edition)
Before ChatGPT, before algorithms, before anyone worried about rogue neural networks, we were already chaos virtuosos.
We invented fire—then immediately realised we could burn everything down.
We invented money—then started wars over which pieces of decorated paper were more meaningful.
We drew lines in the sand (and called them borders)—then decided those lines were worth bleeding over.
Notice something important?
None of this required Artificial Intelligence.
All it took was a generous dollop of overconfidence, a sprinkle of ignorance, and that charming, eternally misplaced optimism that this time we’ve absolutely figured it out.
Enter AI: Our Obedient Silicon Intern
Artificial Intelligence, despite popular fears, does not want to rule the world.
It does not crave power. It does not yearn for social media validation. It is not awake at 3 a.m. agonising over the meaninglessness of existence.
AI does exactly what we tell it to do. And that is the genuinely terrifying part. Because humans are breathtakingly creative when it comes to issuing spectacularly terrible instructions.
We didn’t create a rebellious overlord. We created a hyper-efficient intern who follows orders flawlessly—without context, conscience, or common sense—while we argue about potatoes.
AI, But Make It Indian
To understand why AI is less Terminator and more amplified desi chaos, we don’t need abstract thought experiments. We only need to look around.
Take Indian elections. We already perfected misinformation long before GPUs entered the chat. Now, with AI, we have hyper-realistic deepfake speeches, cloned voices of politicians who suddenly appear to endorse the exact opposite of what they said yesterday, and WhatsApp forwards that arrive with the confidence of the Constitution itself.
Earlier, misinformation travelled by cycle. Now it arrives by bullet train—complete with emojis, background music, and an auntie assuring you it’s “100% verified”.
Or consider education. AI tools can personalise learning, democratise access, and help students from small towns compete globally. Naturally, our first large-scale use case has been… automating homework, essays, and competitive exam prep—followed immediately by outrage that “students are no longer learning anything.”
We handed calculators to a system obsessed with rank lists, then acted shocked when everyone used them to optimise marks instead of curiosity.
And then there’s customer service. AI chatbots could reduce friction, improve access, and save time. Instead, they have been trained to apologise profusely while doing absolutely nothing—digitally recreating the most authentic Indian bureaucratic experience imaginable.
In short, AI in India hasn’t created new problems. It has simply given our existing ones better bandwidth.
The Hammer Theory of Doom
Think of AI as a hammer. A hammer is morally neutral. You can use it to build a beautiful, sturdy home. Or you can use it to (regrettably) destroy a priceless heirloom—or your own thumb. The hammer remains serenely unconcerned either way.
AI does not introduce new stupidity into the world. It simply industrialises the stupidity we already have.
Bias becomes automated. Greed becomes optimised. Misinformation becomes scalable.
What was once a rumour shouted by a man on a soapbox is now a deepfaked, algorithm-tuned outrage campaign deployed at planetary scale.
Lies have always travelled faster than facts. AI just gave them a rocket engine.
The Real Danger Zone
The real danger isn’t AI waking up and deciding to be evil. The real danger is the same one we’ve always lived with:
High intelligence + high power + astonishingly low wisdom.
We’ve now handed a thermonuclear calculator to the same species that can’t agree on potato-to-dough ratios.
The apocalypse scenario isn’t Skynet becoming self-aware.
It’s billions of people using extremely intelligent tools to execute profoundly stupid ideas—faster, cheaper, and at scale.
So… What’s the Fix?
Before we panic, ban, or blindly worship AI, it’s helpful to remember one uncomfortable truth: technology doesn’t change human nature. It merely removes the speed limits. Which means the fix was never going to be sexy.
Do we fear AI? Worship it? Ban it like a bad spice? Sadly, no. The solution is far more boring—and far more difficult.
It requires upgrading not the machines, but the humans. And yes, our collective operating system currently resembles a very buggy version of Windows 95.
We need:
- Better Education – So people can spot a manipulated image before forwarding it to fifty contacts.
- Better Ethics – Guardrails that don’t rely on the global equivalent of a pinky swear.
- Better Governance – Policies that keep pace with technology instead of jogging behind it, wheezing.
- Better Humility – Accepting that sometimes our gut feeling is wrong—and sometimes a metric shouldn’t replace a conscience.
The Mirror We Don’t Like
Artificial Intelligence is not the enemy knocking at the door. It is a brutally honest mirror—one that reflects our impatience, our shortcuts, our love for outrage, and our allergy to nuance. And mirrors are unsettling, especially when they show a species that can build Mars rovers but still believes every well-formatted screenshot deserves immediate trust.
AI hasn’t made us foolish. It has simply exposed how thin the layer of wisdom was to begin with. If we survive this age, it won’t be because machines suddenly developed ethics. It will be because we finally decided that wisdom, restraint, and critical thinking deserve as much investment as speed and scale.
Until then, buckle up. The future is indisputably intelligent. Whether we choose to be—even marginally—wiser remains an open question.
Sign-Off
I remain cautiously optimistic, mildly amused, and permanently hungry—both for ideas and for well-made food.
If humanity does manage to survive the age of Artificial Intelligence, I suspect it won’t be because of grand manifestos or perfect algorithms, but because somewhere, at a roadside dhaba, someone paused mid-argument, took a bite of a good aloo paratha, and briefly remembered that perspective matters.
Until the next journey—of roads, cultures, or inconvenient thoughts—stay curious, stay sceptical, and never trust anything that claims to be 100% verified without a second look.

That’s a sharp—and sobering—insight. We often search for a monster in the machine, only to overlook the reflection staring back at us. The real source of friction isn’t the technology itself, but the widening asymmetry between our explosive digital evolution and our glacial biological one. We are running 21st-century, near-godlike tools on 50,000-year-old tribal hardware. The solution isn’t necessarily smarter silicon; it’s more self-aware users. As the old saying reminds us: a fool with a tool remains a fool—the tool just makes them more dangerous.
Thanks, Nilanjana. Technology amplifies human intent but lacks wisdom, often spreading biases and fears faster. True progress requires enhancing self-awareness and ethics, not just smarter tools.
I have to say, your piece on the Silicon Paratha Principle is outstanding. The way you’ve blended humor, everyday anecdotes, and deep philosophical reflection is truly impressive. Starting from something as simple as an aloo paratha debate and expanding it into a sharp commentary on human nature and our relationship with AI shows incredible creativity. I especially loved how you framed AI not as a villain but as a mirror to our own flaws, and how you tied it back to Indian contexts like elections, education, and customer service. The writing is witty, insightful, and relatable, yet it carries a serious message about wisdom, ethics, and responsibility. Honestly, it reads like a magazine column or a thought‑leadership essay — entertaining, thought‑provoking, and memorable all at once. Hats off to you for capturing such a complex idea in such a unique and engaging way! 👏✨
Thank you so much for this generous and thoughtful feedback. I’m especially glad the idea of AI as a mirror—rather than a villain—came through clearly. Your encouragement makes the writing journey deeply rewarding. Grateful for readers like you who engage so perceptively. 🙏
Indeed. As the saying goes, it’s not guns that kill people, but people who kill people. But, of course, it’s easier for people to kill other people by using those guns.
AI has no wish to enslave us or take all of our jobs because it’s not sentient. But to go back to the guns and people analogy, AI technology has made it easy for some of our occupations to be hijacked. There are AI programs on the market designed to write books; they’re marketed as tools that ‘help you write a book in hours, not months’. They can now manipulate images and photographs almost undetectably – it won’t be long before they really will be undetectable, no matter how carefully we scrutinise them.
None of this is good news for those of us who are writers, artists, makers.
AI has many tremendously good uses when applied carefully, but there is so much potential for harm. It can help develop new medicines and improve food production efficiency, but it can also produce new, deadly efficient weapons.
So yes, I do fear what AI will do, the same as I fear those guns.
I do understand the fear you’re describing, and I don’t think it’s irrational at all. For those of us who write, make, or create, AI can feel less like a tool and more like an encroachment—something that blurs the line between craft and automation.
You’re right that AI lowers the barrier. It makes imitation easy and scale effortless, and that inevitably threatens parts of how we earn and express ourselves. That’s unsettling, and pretending otherwise doesn’t help anyone. History suggests that every transformative tool reshapes creative work rather than extinguishing it. Photography didn’t kill painting; it changed what painting meant. Word processors didn’t kill writers; they altered the craft. AI may commoditise some outputs, but it still struggles with intent, lived experience, moral judgment, and originality rooted in human context. Those things matter more than ever.
Where I fully agree with you is on governance and responsibility. The same technology that can accelerate drug discovery or improve food security can also be weaponised—economically, politically, or militarily. That dual-use reality means the real question is not whether we fear AI, but who controls it, who benefits from it, and who bears the cost.
I share your concern about misuse. So yes, caution is not technophobia; it’s prudence. Fear, when articulated thoughtfully as you’ve done, is not panic—it’s a call for restraint, ethics, and accountability. AI should remain a tool that amplifies human capability, not one that erodes human dignity. The moment we stop insisting on that distinction is when the analogy with guns becomes uncomfortably exact.
The challenge is to insist—loudly—on ethical use, transparency, and respect for human creativity, rather than assuming the story ends with technology overpowering us. It never really has. The responsibility sits with us, not the machine.
Thanks, Mick, for your feedback.