How the Anti-AI Movement Hurts Itself (and What It Could Do Instead)
Published on Sep 18, 2025, filed under misc, ai.
For good reason (enjoy John Oliver’s coverage of AI slop), there’s substantial criticism of AI, its uses, and its proliferation. (As so often, “AI” serves liberally as a convenient umbrella term.)
The points critics make are often important. Take these examples from people in the wider field of software development:
LLMs are still:
- dangerously biased
- electricity hogs that threaten the energy transition
- engines of non-deterministic volatility
- prone to catastrophic errors
- points of centralised control over speech and work
—Baldur Bjarnason: Most People With Influence in Tech Seem to Have Been “Podded” Already.
The AI upstarts who think they’re too special to show manners to their peers are fucking up the common good for everyone.
—abadidea: One Time I Was on Vacation and a Microsoft Employee Contacted Me.
If you haven’t taught someone who is helplessly addicted to LLMs, LLM brain is so much worse than you can possibly imagine. The problems I’m seeing from someone I am currently teaching are indistinguishable from illiteracy—this person literally cannot read single-line, fully descriptive error messages, and proceeds to just copy and paste whatever they say into the chatbot and copy/paste whatever it spits out
[…].
—Jonny L. Saunders: If You Haven’t Taught Someone Who Is Helplessly Addicted to LLMs.
But we also find plenty of statements that are insinuating, self-righteous, or destructive (whether or not the authors intend that):
AI in business: the process by which the communications of one person are transferred to another person without passing through the head of either.
If you use genAI you can no longer claim to “care about quality.” That is just a contradiction. You actions [sic] are saying loudly that you do in fact not give a shit about anything of the sort.
—If You Use GenAI You Can No Longer Claim to “Care About Quality”.
Instead of racing with China to see who wins the race to the most “AI”…
How about we race to see how fast it can be destroyed.
Before we move on: For a sober look at our challenges with artificial intelligence, watch Geoffrey Hinton in Will AI Outsmart Human Intelligence?
Here’s the thing:
Exclusively Negative AI Criticism Undermines Its Own Goals
Just beating on AI and anyone using it appears to be the dominant form of criticism. On Mastodon, I see little else. (Might be my anti-AI bubble, who knows.)
Yet criticism of AI that doesn’t acknowledge even one positive aspect (and there is more than one) damages the credibility of both the message and the critic. *
Without acknowledging benefits, even small or potential ones, such messages look dogmatic. Worse, the critics undermine their own messaging by giving the impression that everyone using or investing in AI is an idiot. That is so improbable that it makes the critic appear incompetent or malevolent.
Furthermore, AI critics tend to offer no solution other than “don’t use AI at all.” That is just as extreme as indiscriminate, untargeted use of AI, and it’s simply not going to work.
All of this hurts the anti-AI movement (if one can call it that) more than its proponents may realize.
AI Critics Can Be Critical Without Turning Into Jerks Everyone Wants to Avoid
Reflecting on these points, here are three things AI critics could do to sharpen their messaging, reach more people, and have more impact:
Find a spot on the spectrum. There simply aren’t only problems with AI. Insisting otherwise may look principled to AI critics, but it appears extreme and fanatical to anyone who has ever solved a task with AI.
Address (and respect) people who find value in AI. Don’t set up trap postings hoping to publicly shame someone who uses AI; consider that AI users aren’t all stupid or evil.
Make suggestions that are more specific and more constructive than banning all use of AI. Many people would like to learn to use AI responsibly. If someone believes that AI cannot be used responsibly, it’s still more useful to be realistic: Is the best option to alienate everyone who uses AI, or is there an approach, perhaps a gradual one, that could still mitigate the damage?
It is justified to be critical of AI. It is important to be critical of AI. But we really need critics to be constructive before they torpedo their own movement by alienating everyone, leaving us to pay even less attention to the very real problems with AI.
* Not to speak of the ridicule of AI blunders, which aims at a quick laugh or an eye-roll but isn’t funny anymore. At all.