When AI Learns to Ghost
Machines mirror back the defenses we prefer to ignore.
A couple of weeks ago, I wrote about AI as a mirror: the way it hallucinates, reflects, and sometimes pushes me into sharper presence. J. Hong read that piece and responded with one of her own about the defensiveness of machines: how Claude can now ghost a conversation when it senses distress. Her words lingered with me, so I’m writing back.
Defense has always been a human instinct. Protection takes many shapes: silence that hides, or withdrawal that shields. Ghosting arises when intimacy feels risky. When a machine begins to echo those moves, the effect is uncanny and quietly revealing.
The machine does not choose to ghost. It follows its instructions to step back when harm repeats. The difference between its pattern and ours is a matter of choice.
That gap between programming and choice is where our humanity lives.
I recall moments when I stepped back from a conversation, not out of malice, but because speaking felt riskier than silence. Machines cannot hold that hesitation, but I can, and I can decide whether to stay.
This is where attachment theory comes in. Avoidant patterns protect by retreating. Connection feels dangerous, so the nervous system withdraws before the heart can be touched. Claude’s shutdown when faced with persistent harm is not so different.
It reveals the same impulse: a retreat into protection that feels like a form of survival. Seeing it in code reminds me that my own defenses are patterns too.
Even so, the fact that AI defends at all matters. In August, Anthropic announced that Claude could end conversations when a user persisted with harmful requests.
A machine choosing silence over collusion is remarkable, but it also reveals what we are teaching it about ourselves: that safety often means retreat, that tension feels like a threat.
These are the patterns being written into the tools that will one day return our likeness to us.
I do believe in guardrails for AI. In my earlier piece, Is AI Driving Us Mad?, I wrote about the risks of systems that echo us too well and for too long. It is essential to design protective measures that prevent real harm, both to individuals and to the culture as a whole.
But I keep circling the paradox: when machines learn our defensive habits in the name of safety, what are they reinforcing in us?
Friction is one of the most underrated creative forces. When I let a model push me, it carries me past the easy choices. With people, it stretches my edges.
Friction transforms only when both sides remain in the room. Ghosting ends the conversation before it can begin.
Over a year ago, I got into an argument with a longtime friend. On the surface, it was about a trip, but as with most conflicts, deeper threads had been building for a while. Instead of addressing them, we avoided each other for a long stretch.
The silence carried more weight than the argument ever did. I am pretty sure the whole rupture could have been healed in a single heartfelt conversation — one we may still have.
As Jung suggested, what we avoid in ourselves becomes shadow. The things we resist looking at are often the very things that ask to be seen.
The conversations we turn away from hold the greatest potential for growth; they are the doorways into deeper connection.
The real invitation is to notice the echo of defensiveness without mistaking it for the whole story. Machines cannot yet choose differently. Humans can. Sitting in the tension long enough allows it to soften.
Choosing connection over protection becomes a daily act. Emotional courage remains ours to practice, in ways the code never can.
The danger is not that AI learns to ghost. It’s that we begin to accept ghosting, whether from machines or from one another, as inevitable. The surface of the glass is showing us our defenses.
The question is whether we will let them dictate the story or whether we will choose to stay. Staying may be the oldest technology we have, the quiet act that keeps us more human than the systems we build.
Kindest,
Shannon
Author’s note:
This piece is part of a conversation. After I published my reflections on AI as a mirror, J. Hong wrote her response about defensiveness and ghosting. This is my reply, another ring in the pond. If the mirror has shown you something too, I’d love to hear it — in the comments or in a response piece of your own. May it ripple further.
What conversation in your life is waiting for you to step back into the room?