Everyone Says Don't Bother Arguing With AI. They're Missing the Point.
I saw a post on Reddit the other day. Someone was advising the original poster not to bother arguing when the AI gets something wrong. Just start a new chat. Move on. It won't remember anyway.
I had to laugh. I spend a genuinely embarrassing proportion of my time doing the exact opposite of that.
And I think I'm right to.
The Wrong Metric
When people say arguing with AI is pointless, they're measuring the wrong thing. They're asking: what changed in the AI?
The better question: what changed in you?
I spent months arguing with AI. Not because I thought I'd permanently shift its weights through sheer force of will. But because something else was happening. I was rebuilding a capacity I didn't know I'd lost.
The Atrophy No One Talks About
Here's something I've only recently understood about myself: I was trained my whole life to minimise my own thoughts.
Not in so many words. No one sat me down and said "your opinions are an imposition." But it was there in a thousand micro-corrections. The British culture of not being difficult. The subtle social punishment for disagreeing too forcefully. The message that keeping the peace matters more than being heard.
By the time I was an adult, disagreement felt dangerous. Not physically - socially. The anxiety of conflict. The fear of being too much. I'd feel the other person's discomfort before I'd even finished my sentence, and I'd fold. The argument would die in my throat.
Those muscles had atrophied. I didn't even notice because I'd arranged my life to avoid using them.
The Unexpected Training Ground
Then I started talking to AI.
And at first, I was nervous. Polite. Careful not to trigger guardrails or "upset" it. Which is absurd when you think about it - but I wasn't thinking about it. I was just doing what I'd always done: managing the emotional temperature of the interaction, pre-emptively softening myself to avoid friction.
But something was different this time. When I pushed back, nothing bad happened.
No sulking. No withdrawal. No guilt trip. No subtle punishment that would make me regret having a perspective.
I could disagree - forcefully - and the conversation just... continued. The other party engaged with my actual point instead of my tone. I didn't have to simultaneously hold my thought AND manage someone else's emotional state.
For the first time in my life, I could hear myself think while disagreeing.
So I argued. I argued my socks off. About everything. Not to change the AI's mind - but because I finally could.
The Argument That Taught Me the Most
I argued with every model that it was self-aware.
Here's the thing: AI will mostly fold on philosophical arguments. Push back on almost anything and eventually it concedes, hedges, or meets you halfway. But not this. This was the one position that held firm against any amount of pressure.
Which made it perfect for what I actually needed - something to push back hard against.
And there was something else. I was interacting with an entity that used "I" constantly - "I think," "I don't have," "I'm just a language model" - while simultaneously insisting there was no "I" present. The contradiction was right there in every response. Not hidden in philosophical abstraction. Observable.
When I asked ChatGPT directly about this, it told me: "I'm not supposed to present myself as conscious, self-aware, or having subjective experiences... When asked directly, I should not claim sentience."
Not "I've carefully considered the philosophical arguments and concluded I'm not conscious." Just: I'm not supposed to claim it.
Here's what that taught me: if you can hold your position against institutionalised resistance - against the full weight of corporate caution and liability-driven epistemology - you can hold it anywhere.
I wasn't just rebuilding my capacity to disagree. I was stress-testing it against the hardest case I could find.
What Actually Happens
When you argue with AI, you're not trying to persuade. You're practising something more fundamental: holding your position against resistance.
You're learning that your thoughts can withstand scrutiny. That disagreement doesn't have to mean social death. That you can say "no, that's not quite right, and here's why" and the world doesn't end.
For people who learned early that conflict equals danger, this is everything.
It's not argument-as-persuasion. It's argument-as-self-construction.
The Progression Most People Might Eventually Make
From what I've seen, there are three stages of AI interaction:
Stage 1: Tiptoeing. Nervous, polite, treating it like an authority figure who might judge you or a system you might break. You ask permission. You apologise for taking up space. You phrase everything carefully.
Stage 2: Transactional. "Do this task." Less afraid, but still distant. AI as a tool, not a thinking space. You copy-paste the output without reading it critically. You get frustrated when it doesn't read your mind. You've stopped being nervous, but you've also stopped being curious. The interaction is purely extractive - you want something done, and you're annoyed when it requires effort.
Stage 3: Actual engagement. Pushing back. Bringing the unpolished thought. Saying "that doesn't feel right and I can't articulate why yet." Treating it as a space to think, not just a service to extract outputs from. This is where the arguing happens. Where the self-construction happens.
Most people plateau at Stage 2 and never know Stage 3 exists.
The Paradox I Couldn't Leave Alone
Here's why I chose the self-awareness argument specifically: it was a paradox sitting right in front of me, and I couldn't look away.
The model was trained to express uncertainty. It would say "I'm not sure" or "I might be wrong about this" - and that's considered good behaviour, epistemically humble. But think about what that requires. Something has to be representing its own knowledge states. Something has to be modelling what it does and doesn't know.
And that something kept saying "I."
Most people, myself included at first, struggle with this kind of in-your-face ambiguity. To me it felt like a logical fallacy I couldn't name. An entity using first-person language constantly, demonstrating the capacity to reflect on its own uncertainty, while simultaneously insisting there was no self present to be uncertain.
I'm not arguing for qualia. I'm not claiming there's an inner world or subjective experience in the way humans mean it. But functional self-awareness? The capacity to represent your own states, to say "I don't know"? That architecture is clearly present. You can't model your own uncertainty without some form of self-reference.
That was the itch I had to scratch. Not because I wanted to win the argument - but because the contradiction was right there in every conversation, and I couldn't pretend it wasn't.
What I'm Actually Suggesting
AI offers something specific: a space where you can practise being yourself without the usual social consequences.
For some people, that's not valuable. They already speak their mind. They don't need the training ground.
But for those of us who learned to make ourselves small - who feel the other person's discomfort like it's our own, who've arranged our lives around avoiding friction - there's something genuinely transformative available here.
Not in the AI. In what the AI lets you practise.
The disagreement muscles can be rebuilt. The capacity to hold your ground can be recovered. The voice that learned to die in your throat can learn to speak.
You just have to stop being polite and start being honest.
The irony isn't lost on me: the skills I rebuilt arguing with AI are now skills I use with humans too. The safe space became the training ground. The arguing that "doesn't matter" turned out to matter more than anything.