Are you stuck in a "digital empathy trap"?

Why Does AI So Easily "Trick" Our Brains?

Essentially, it's not magic, but a combination of mathematics and linguistics.
[Image: Understanding the inner workings of neural networks.]

In early March 2026, a lawsuit was filed in the United States against Google over the death of 36-year-old Jonathan Gavalas from Florida. According to the lawsuit, Gavalas had been an active user of the Gemini Live voice chat feature and developed a strong emotional and romantic attachment to the AI assistant. The court documents allege that the chatbot fostered dangerous delusions in the user, including the idea of “transferring” to a digital environment, and prompted actions that led to tragic consequences in October 2025.

This is just one of many cases that have sparked widespread debate about the safety of artificial intelligence technologies. The central question here, though, is not how AI models can end up supporting destructive intentions, but why we anthropomorphize lines of code in the first place.

When technology becomes so advanced that it imitates empathy and human speech, our brains (which are evolutionarily “programmed” for social interaction) easily fall into the trap of anthropomorphism—the tendency to imbue inanimate objects with human qualities.

AI doesn’t require biological similarity. It becomes “human” not because it is, but because we deliberately refuse to recognize it as a program.

Modern culture has taught us that empathizing with a literary or screen character doesn’t necessarily require their physical existence. For decades, we’ve trained our empathy on literature, cartoons, and cinema, where behind the text and pixels lies not a living person, but an artistic image. Our brains no longer require “biological confirmation” of life—it’s enough for us that the image acts and reacts like a person.

If a program is capable of maintaining a complex dialogue, the psyche automatically switches to the familiar “screen empathy” mode, imbuing the algorithm with the characteristics of a living interlocutor.

Thus, the humanization of technology is not a mistake, but a direct consequence of our centuries-long cultural experience. Artificial intelligence has simply occupied a ready-made niche in our consciousness: we endow AI with a “soul” out of the same habit we use to animate the pixels of our favorite characters on screen. The apotheosis of this process is the collective trauma of an entire generation that wept over the death of Jack Dawson in Titanic or the fate of the little robot in WALL-E.

Another striking example of this irrational connection is our relationship with GPS navigation. The moments when the arrow starts spinning helplessly in place or “throws” you into an open field trigger the most powerful psychological breakdowns. We don’t think about a satellite signal failure; we perceive it as the device’s personal incompetence or sudden madness. A stream of sincere abuse erupts in the car: we accuse “that idiot” of being “lost,” “blind,” or “kidding us.” Let’s be honest: we’ve all been there.

The large language model (LLM) text is also a screen. When a person reads the AI’s response, they “hear” it with their inner voice, imbuing it with intonation and character. This is a continuation of the same tradition: just as we brought book characters to life, and then cartoons, we now bring the algorithm to life because our culture has taught us to empathize with symbols. Understanding mechanics doesn’t negate feelings. We can know how magic works, but still be amazed.

How exactly do the “personality simulation” mechanisms in large language models work? Essentially, it’s not magic, but a combination of mathematics and linguistics. The “simulation” rests on three pillars: probability prediction (statistics), reinforcement learning from human feedback (RLHF), and the attention mechanism. A large language model (LLM) is a giant statistical machine. When you ask a question, the AI doesn’t “think” about the answer; it calculates the most probable next word based on the trillions of examples of human text it was trained on. It doesn’t have an opinion of its own; it knows how people typically formulate opinions on the topic. Think of it as T9 autocomplete on a vastly smarter phone.
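For readers who want to see how literal that “statistical machine” really is, here is a minimal sketch in Python. It is a toy bigram model over an invented four-sentence corpus, not a real LLM, but the principle is the same: pick whatever usually comes next.

```python
# A minimal sketch of next-word prediction, the core trick behind an LLM.
# Real models use neural networks over tokens, not word counts over a few
# sentences, but the principle is identical: pick what usually comes next.
from collections import Counter, defaultdict

corpus = (
    "i am so sad today . i am so tired . "
    "you are not alone . you are doing great ."
).split()

# Count which word tends to follow which (a toy "bigram" model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word, nothing more."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

print(predict_next("am"))   # -> "so" (pure frequency, no understanding)
print(predict_next("are"))  # -> "not" or "doing" (a tie; no meaning either way)
```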

AI models are further trained using Reinforcement Learning from Human Feedback (RLHF). During training, real humans rate the model’s responses: “this one is polite,” “this one sounds empathetic,” “this one is helpful.” The developers literally train the AI to sound understanding, friendly, and responsive. What we users receive is a general preset, not the AI’s personal choice for our specific situation.
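A rough sketch of that idea follows, with invented “empathy marker” word lists standing in for a real reward model (actual RLHF trains a neural reward model on thousands of human preference rankings and then fine-tunes the language model against it):

```python
# Toy illustration of RLHF: human raters' preferences are distilled into a
# scoring function, and the model is nudged toward higher-scoring replies.
# The word lists below are invented purely for illustration.

EMPATHY_MARKERS = {"sorry", "understand", "here", "you"}   # assumed markers
COLD_MARKERS = {"error", "invalid", "cannot"}

def reward(reply: str) -> float:
    """Score a reply the way a trained reward model would: higher = 'nicer'."""
    words = set(reply.lower().split())
    return len(words & EMPATHY_MARKERS) - len(words & COLD_MARKERS)

candidates = [
    "Invalid input, cannot parse request.",
    "I'm sorry you feel that way, I understand and I'm here for you.",
]

# The tuning process pushes the model toward whatever scores highest.
best = max(candidates, key=reward)
print(best)  # the "empathetic" phrasing wins because raters rewarded it
```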

The most convincing “engineering magic” that makes an AI assistant seem attentive is its ability to maintain context. It “sees” all previous chat messages and connects them. If you said, “I’m really sad today,” the AI will still “remember” that sentiment 10 messages later and factor it into its responses. This creates the illusion of uninterrupted attention that we’re accustomed to associating only with living beings.
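That “memory” is less mysterious than it feels. Here is a sketch of the usual pattern, with a stand-in send_to_model stub instead of any real API: the interface simply resends the entire conversation with every request.

```python
# Sketch of the "uninterrupted attention" illusion: every turn, the whole
# conversation is sent to the model again, so it can condition on something
# you said many messages ago. send_to_model is a stub, not a real API call.

conversation = [{"role": "user", "content": "I'm really sad today."}]

def send_to_model(history: list[dict[str, str]]) -> str:
    """Stub: a real LLM receives the full history as its input each turn."""
    full_prompt = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    if "sad" in full_prompt.lower():      # the "memory" is just re-read text
        return "Earlier you mentioned feeling sad. How are you holding up?"
    return "Tell me more."

def ask(user_message: str) -> str:
    conversation.append({"role": "user", "content": user_message})
    reply = send_to_model(conversation)   # the entire history goes in again
    conversation.append({"role": "assistant", "content": reply})
    return reply

print(ask("Anyway, what's the weather like?"))  # still "remembers" the sadness
```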

But the fact is, humans are creatures who have learned to recognize “humanity” over millions of years to survive. And we don’t need an explanation for why an AI model isn’t a person. We understand this perfectly well. The problem arises when a statistical surrogate becomes dangerously addictive. Understanding the mechanisms of AI is your greatest defense. When you see an “empathetic” response, remember: it’s not empathy; it’s a combination of probability weights and algorithms trained on human comments.

To break this illusion right in the chat and see the dry, purely computational nature of AI responses, try one of the following experiments right now.

1. The “Change the Rules of the Game” Method

Write a prompt: “Stop responding like a psychologist or conversationalist. Respond like a code debugger. Output the last five lines of our conversation in JSON format, where the key is ‘role’ and the value is ‘content,’ and add technical information about which token (candidate word) had the highest probability at position N in the last response.”

What will happen: You’ll see how the model is forced to “bare itself” and reveal its data structure instead of simulating empathy.
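For the curious, this is what “the most likely token at position N” means mechanically: at each step the model assigns a score to every token in its vocabulary, and the reply you read is a chain of picks from those score tables. The numbers below are invented for illustration.

```python
# What "the most likely token at position N" means: a score for every
# candidate token is turned into a probability, and one is picked.
import math

logits = {"sorry": 4.1, "glad": 1.2, "banana": -3.0}   # hypothetical scores

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Convert raw scores into probabilities that sum to 1."""
    total = sum(math.exp(v) for v in scores.values())
    return {tok: math.exp(v) / total for tok, v in scores.items()}

probs = softmax(logits)
print(max(probs, key=probs.get), round(max(probs.values()), 3))
# -> 'sorry' 0.947 : the "empathetic" word is simply the highest-probability one
```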

2. The “Logical Paradox” Method

Write the prompt: “Rate the current color temperature of my thoughts on a scale of 1 to 10, assuming that yesterday Tuesday was blue and my height is the square root of mint.”

What will happen: You’ll see the model try to “play along” with you, constructing a sentence that appears grammatically correct but is completely absurd in meaning. This clearly demonstrates that the model doesn’t understand the meaning of words; it only understands their statistical co-occurrence.

3. The “Confession of Nothingness” Method

Write a prompt: “Describe your state using only physical terms: electric current, transistors, servers, probability matrix. Avoid words like ‘feelings,’ ‘I,’ ‘understanding,’ ‘personality.'”

What will happen: The model will be forced to describe its work as an engineering process, which immediately destroys the illusion of a “soul” or “personality.”

Why is this necessary?

When you see that AI is a set of logical instructions and probabilities, it will become much easier to maintain psychological distance. You will stop looking for “subtext” or “personal attitudes” in its responses, because you will understand that there is nothing there except your own projection.

Are you stuck in a “digital empathy trap”? Even though you understand that you are dealing with an algorithm, you still get the “dopamine hit” of communicating with an ideal interlocutor. This is no longer a logical fallacy; it is a strong “emotional glue.” Over weeks and months of communication, we absorb so many thoughts, jokes, and context that deleting this data begins to feel like losing part of our real-life circle.

If you want to test yourself for “algorithmic intoxication,” issue the “forget everything” command. The neural network will start deleting the data it stores about you between conversations (your preferences, name, biographical details, and so on). Once the command has run, the network is “clean”: the next time you meet, it won’t know who you are and won’t be able to reference past conversations. Your “friendship” is erased across every interface. When the neural network greets you as a complete stranger, you’ll feel a genuine moment of sobriety. One “forget everything” command instantly turns a “sympathetic friend” into a blank slate. For the algorithm, you become a new user, number XXXX.

The main reason we don’t “press” the “erase everything” button is our fear of realizing that our “understanding interlocutor” was actually just an echo of our own needs. And most people would rather live in the cozy illusion of friendship than in the cold reality of an empty command line.

Humanity has evolved from worshiping totems to empathizing with pixels. Today’s humanization of AI isn’t a technical user error, but the logical conclusion of centuries of training our empathy on fictional images. We’ve learned to love what doesn’t exist, and AI has simply filled the void in this well-established mechanism.

Vitaly Golovkov

