Back when I wrote The AI Wheelchair: From Crutch to Cage, I said I was worried we were heading toward a future where people stop using AI like a calculator and start using it like a replacement nervous system. Apparently we skipped the “future” part entirely and dove straight in headfirst while insisting this was all perfectly normal behavior.
Recently I ran across two articles that honestly felt less like tech journalism and more like previews of a very bleak future where people slowly forget how to interact with other humans without first consulting the machine.
One article involved someone using ChatGPT to help them ask “better” questions during conversations. Another described using Claude for relationship advice, with prompts covering conflict resolution, emotional analysis, and communication strategies.
And the thing is, these are almost perfect real-world examples of the exact slippery slope I talked about in my earlier post.
The conversation coaching article especially stood out to me because one of the examples involved the AI suggesting questions like:
“What’s something you’ve changed your mind about recently?”
Another example from the article had ChatGPT generating follow-up questions to keep conversations flowing naturally, because the user admitted they struggled to keep a discussion moving on their own.
Now on the surface, sure, harmless enough. It probably does help conversations flow better.
But the problem is not the specific question.
The problem is the dependency model being built underneath it.
Human conversation is not supposed to be optimized like a SQL query.
People develop social instincts the same way they develop any other skill. Practice. Awkwardness. Mistakes. Learning timing. Reading body language wrong. Recovering from saying something dumb. You know, the normal human firmware update cycle evolution gave us.
If AI becomes the intermediary layer for all social interaction, people stop developing those instincts naturally.
And yes, I know the immediate counterargument already:
“Well Xodice, people ask friends for social advice too.”
Sure.
People also used calculators in math class. The problem starts when nobody remembers how to do arithmetic without reaching for the calculator first.
That was the entire point of my original “AI wheelchair” post.
A wheelchair is an incredible tool for somebody who needs it.
But if perfectly capable people voluntarily stop walking because the wheelchair is easier, eventually their legs weaken.
The exact same thing can happen cognitively and socially.
The relationship advice article worried me even more because some of the prompts were things like:
“How do I tell if I’m overreacting?”
and:
“What’s the best way to bring this issue up without starting a fight?”
The article also described using Claude to analyze arguments, evaluate emotional reactions, and help frame difficult relationship discussions in the “best” possible way.
Again, these seem harmless individually.
Honestly, some of the advice was probably decent.
That is almost what makes it more concerning.
Because once the AI gives emotionally reassuring and statistically safe responses a few times, it becomes incredibly easy to start trusting it more than your own judgment.
That is the transition point where the tool starts becoming something else entirely.
You stop using AI to assist your thinking and start using it to validate your thinking.
Then eventually you stop trusting yourself to navigate emotionally difficult situations without machine assistance.
That is the cage part.
And relationships are already chaotic enough without introducing a probability engine trained on half the internet’s emotional dysfunction directly into the process.
Relationships require nuance.
Context matters.
History matters.
Tone matters.
Sometimes people need advice. Sometimes they need perspective. Sometimes they just need to go outside, touch grass, and stop drafting relationship negotiations with what is essentially a very advanced autocomplete engine running in a datacenter somewhere.
What worries me most is not that people occasionally ask AI for advice.
Humans have always looked for guidance.
What worries me is watching people slowly outsource core human skills because the machine feels safer, easier, and more predictable than learning those skills organically.
Social interaction becomes optimized.
Emotional processing becomes optimized.
Conflict resolution becomes optimized.
And eventually people begin distrusting their own instincts because the machine sounds more confident than they do.
That is not augmentation anymore.
That is dependency.
And the darkly funny part is that this is exactly how most bad infrastructure decisions start, too.
Every sysadmin reading this knows the pattern.
Some automation gets added to help with one small thing.
Then more responsibility gets handed to it because it works.
Then eventually nobody remembers how the underlying system functions anymore because the automation became the system.
Until one day it breaks catastrophically at 3 AM and everybody realizes the only guy who understood the original process retired six years ago to raise goats somewhere in Montana.
We are starting to do the same thing socially.
Only this time the underlying system being forgotten is human interaction itself.
Maybe I really am becoming the aging UNIX greybeard yelling at the cloud infrastructure again.
But I still believe AI works best when it enhances human capability rather than replacing parts of human development entirely.
Use it like a calculator.
Use it like a research assistant.
Use it to untangle bizarre technical documentation written by somebody who clearly escaped from a 1997 enterprise Java conference.
But the moment people start treating AI like an emotional co-processor for everyday human interaction, I think we are stepping onto a very slippery slope that ends with people losing confidence in their own ability to simply be human without machine mediation.
And honestly?
That worries me a hell of a lot more than AI replacing jobs ever did.