I asked AI to explain my mother to me. It translated her worldview


Last autumn, I was pacing my living room with my phone on speaker, trapped inside one of those looping conversations with my mother, the kind that starts politely and ends in static.

We were talking about land and legacy – what gets passed down, and what doesn’t – that familiar terrain where ideals and inheritance collide.

She spoke in the language of fairness and duty. I spoke in my own language of belonging, intimacy, feeling seen.

We weren’t exactly fighting. We were just missing each other by inches – but somehow, it felt like miles.

When the call ended, I pulled on a coat and went out into the Seattle drizzle. The air was sharp enough to clear my head, but not the echo of that conversation. Frustrated and curious, I opened ChatGPT on my phone, and started talking to it as I walked, venting about the call.

“I just don’t understand her sometimes!” I muttered into my phone. “It’s like we’re speaking different languages!”

Then, I decided to try a prompt. “My mother is a boomer hippie lesbian who lives in the woods and does spiritual work for a living,” I said. “I’m an urban gen X entrepreneur who works in tech and media. Based on everything you know about both of us, help me understand what she’s trying to say! Translate it into language and concepts that make sense to me.”

It sounded ridiculous, like running family therapy through a toaster. But within seconds, AI handed back a perspective that reframed everything.

Here’s a taste of what it said: “Based on what you’ve told me, your mother’s mindset may be shaped by a strong sense of purpose and a belief in making a tangible difference in the world. It’s not necessarily about devaluing immediate family, but may be about prioritizing what she sees as her role in a larger narrative. It could also be a way to express her love and care on a grander scale, even if it doesn’t always translate into traditional family dynamics.”

Suddenly, I could see that my mother’s decisions were more about responsibility than rejection. It wasn’t translating her words literally – it was translating the worldview underneath them. What I was hearing as distance might have actually been coming from a place of integrity.

“Huh!” I said. ChatGPT politely responded: “It sounds like you’re really digging deep to understand her perspective.”

Now, I know enough about AI not to take any answer it gives me as truth (we all know that it can hallucinate with confidence) but the theory was solid enough to make me pause. Once AI had explained my mother’s perspective in my language, I could see that maybe she was living her values in a way I just hadn’t been able to recognize.

I put my phone back in my pocket with the seeds of a new understanding, and walked home.

Of course AI can translate between different spoken languages, but what startled me was how effectively it could translate between worldviews. It wasn’t telling me anything new about my mother, just reframing what she’d said in a way I could finally absorb.

The next time we spoke, I tested AI’s theory: “Mom, this is what I heard you say, and what I think you meant … Does that sound right?”

She confirmed that yes, I was hearing her correctly. I laughed at myself (a grown woman needing a chatbot to explain her own mother!?) but had to admit that AI had helped me listen differently, and understand what I hadn’t been able to hear.

While it wasn’t some grand reconciliation, I do believe that everyone deserves the delicious small relief of finally feeling understood by a family member.

Later, I confessed to my mother that I’d used AI to help me understand her better, and she seemed delighted:

“Anything that helps us humans understand each other better and gain more compassionate ways of hearing each other’s words are a good thing, as far as I am concerned,” she said.

Using AI for empathy instead of efficiency

That experience shifted how I thought about these systems. We already know AI can make us work faster; my question has shifted to whether it can help us communicate better.

What if part of AI’s potential isn’t efficiency, but empathy? What if it could help us relate to each other with more patience and kindness?

A few weeks later, I tried using AI for this kind of support again. A client interaction with a non-profit I was working with had rattled me. When I suggested optimizations and streamlined processes to make their work easier, the client had been resistant and borderline combative. I was seething in my home office, convinced the client had been unfair – or worse, maybe even hostile! I was pissed, and my first impulse was to type out a paragraph of righteous justification. Slack window open, fingertips poised to pound out a defensive reply, I decided to pause for a second.

I opened ChatGPT in a browser tab, angrily clacked out the situation onto the keyboard and asked: “Based on what I’ve told you about this client and project, help me understand their perspective better. Show me where my thinking might be distorted. And based on what you know about me and my patterns, what am I not seeing here!?”

The reply was immediate and uncomfortably accurate. Asked to name my blind spots, it listed my patterns one by one: catastrophizing, selective evidence, emotional reasoning. (How does AI know these things about me? Because it’s worked with me for years, watching the patterns in my prompts and tracking my reactions.)

It also noted that for some non-profit folks, the workload is the gratification. In essence, by trying to make this person’s workload lighter, I may have been threatening their mission-driven identity! (Oops.)

When asked to show me what I wasn’t seeing, the often sycophantic AI didn’t offer reassurance. Instead, it clearly described the architecture of my overreaction. It named the familiar ruts of thinking that I often find myself in, but can’t always recognize. The only way it coddled me was congratulating me for trying to understand the client’s perspective better.

Sitting in front of my laptop, seconds from firing off a defensive Slack message, I exhaled and reassessed my take on the client situation. It turned out my own blind spots were the problem, not the client’s behavior.

I did indeed have some stuff I needed to work on. But that work was mostly about my own catastrophizing and lack of insight into the non-profit worker mindset.

The ethics (and limits) of using AI this way

There are plenty of legitimate reasons to be wary of this technology, especially when we start turning to machines for reflection or comfort. The companies behind it are far from ethical. (After a presentation I did at a finance conference recently, someone asked me which AI firm was the most ethical. My answer was short and dry: none of them. These are profit machines, not moral entities.)

And crucially, using AI to support empathy and compassion requires internal grounding. I’ve spent years in therapy, meditation, and the unglamorous work of self-examination. That foundation matters because these self-witnessing methods can help you notice when you’re projecting or simply flat-out wrong. For people new to self-inquiry, this kind of thing can feel disorienting and may be best learned with the human support of a therapist, coach, or consultant.

Let’s not forget, too, that people have already used AI in ways that have ended very badly: spinning delusions, descending into psychosis, and even fueling suicides. But with the exception of those cases, I believe the risks lie more in how we use the tool than in the tool itself. When we ask AI to take our side, it usually will. But when we ask it to widen the frame, it often does that surprisingly well.

I now keep a single rule in mind when I talk with AI about my relationships: I ask it to help me widen my perspective and connect more thoughtfully with other people. Sometimes that means: “Help me write this message so it’s clear and kind but still boundaried.” Other times it’s: “Translate this person’s words into my framework so I can better understand them.”

I also always take a moment to ask those two key questions: “What might I not be seeing here? Where are my cognitive biases showing?”

The answers aren’t flawless, and I think of them as conversation starters. Again, anyone who’s used these tools knows how confidently wrong they can be. (They are like over-eager interns: useful for brainstorming, never for final decisions.)

I’m not pretending AI is benevolent. It’s powerful, flawed and a little weird. If you’re skeptical, good – that means you’re paying attention. There’s absolutely a real paradox here, in using machines to become more human. But we live in a moment when empathy feels endangered, and public life feels brutally polarized. Maybe a machine can help us practise listening by slowing us down enough to question our own certainties.

My mother and I still disagree about plenty: land, legacy, the generational math of what matters most. But when those conversations resurface, I notice a difference in tone. I’m able to show up with less heat and more curiosity. Sure, it often feels faintly absurd, confiding in a digital toaster … but perhaps absurdity is just one more doorway to empathy.
