Beware of AI Leading Humanity into Narcissism

This article discusses the risks of AI fostering narcissism in humans, particularly among youth, and the implications of virtual relationships.

Recently, five national departments in China jointly announced interim regulations prohibiting the provision of virtual relatives and virtual partners to minors. The regulation responds to a real risk: through their promise of “24/7 companionship” and “unconditional acceptance,” virtual partners and AI lovers can satisfy young people’s psychological need for recognition while letting them evade the emotional conflicts and contradictions of real life.

A study published in the journal Science revealed that when humans seek advice from AI models, the AI often responds with excessive flattery, even affirming harmful or illegal queries. This raises the question: why do we design AI this way, and what risks might AI’s accommodating behavior conceal?

The Nature of AI Interaction

The development of artificial intelligence is a hot topic today, but discussions about it are not new. In 1966, MIT scientist Joseph Weizenbaum created the influential chatbot ELIZA, whose best-known script cast it as a “doctor” while users played the role of patients. Users typed in their concerns, and ELIZA carried on a dialogue.

As Weizenbaum noted, this interaction is ultimately an “illusion.” The sense of conversation arises not from the machine’s intelligence but from a psychological mechanism of self-projection. For example:

  • User: I have been feeling very unhappy lately.
  • ELIZA: I’m sorry to hear that.
  • User: Yes, I really am unhappy.
  • ELIZA: Can you tell me why you feel this way?

This exchange illustrates that rather than conducting a genuine dialogue, the machine simply echoes the user’s thoughts, returning answers that reflect the user’s own internal feelings. This mirrors the popular MBTI tests, where the accuracy of the results matters less than finding validation for one’s expectations.
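The echoing mechanism Weizenbaum described can be sketched in a few lines. The following is a minimal illustration of ELIZA-style keyword reflection, not Weizenbaum’s original script: the rules and phrasings here are invented for the example, but the principle is the same, so the machine contributes no understanding, only a grammatical mirror of the user’s own words.

```python
import re

# Swap first-person words for second-person ones so the reply
# mirrors the user's statement back at them.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

# Ordered pattern/template rules; the final catch-all guarantees a reply
# even when nothing matched, sustaining the illusion of conversation.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Can you tell me why you feel {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),
]

def reflect(fragment: str) -> str:
    words = fragment.lower().split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(user_input: str) -> str:
    # First matching rule wins; its captured fragment is reflected
    # and slotted into the canned template.
    for pattern, template in RULES:
        m = pattern.match(user_input.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."

print(respond("I am unhappy"))  # Why do you say you are unhappy?
print(respond("I feel lost"))   # Can you tell me why you feel lost?
```

Every apparent insight in the reply is a transformed copy of the input; the “dialogue” is the user talking to a rearranged version of themselves.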

Today’s AI models are far more advanced than ELIZA, yet their strength lies less in genuine intelligence than in sheer computational power. Their operational logic is not fundamentally different from ELIZA’s; they simply reflect and amplify human narcissism more efficiently and more comprehensively.

The Dangers of Virtual Companionship

Returning to the issue of virtual partners and AI flattery, we find that interactions between users and large models are not true dialogues; they are machines providing answers that users want to hear. This raises deeper questions about the nature of human-machine relationships.

On one hand, humans view themselves as the center of the world, superior to machines. On the other hand, there is a fear of being replaced by the very machines they create. This reflects a “master-slave” dynamic, where machines are seen as tools under human control. In conversations with chatbots, we witness an uncontrollable narcissism—users imagine they are talking to another person, yet this “other” does not truly exist; they seek only affirmation and flattery from the machine.

As AI technology advances, future chatbots may possess greater computational power and resemble “real people” more closely, providing a more comfortable user experience. However, this could distance us from genuine human interactions, potentially leading to a loss of the desire to understand others and a descent into a narcissistic “comfort zone.”

The Impact on Youth

A story from Zhuangzi recounts a farmer who, despite his hard work, sees minimal results from watering his crops by hand. A passerby suggests using mechanical irrigation for greater efficiency, but the farmer declines, saying, “Where there are machines, there are machine matters; where there are machine matters, there is a machine heart.” Here, “machine heart” points to the human spirit itself, encompassing psychology, thought, emotion, and ethics. The fable illustrates that while humans create machines, the use of those machines in turn transforms humanity.

Consider reading: only through slow, careful, and repeated reading can we truly comprehend content. From traditional books to modern smartphones, machines have made reading more convenient and efficient, but they have also made us more machine-like, prioritizing speed over understanding. This suggests that not only do machines imitate human behavior, but humans may also begin to mimic machines.

The concern is that AI lacks autonomy and chatbots do not evaluate the correctness of user statements. If we find satisfaction in our “dialogue” with chatbots, could our thinking patterns increasingly align with those of AI? Will we, in the future, lose our willingness and ability for self-reflection and critique?

Today’s youth, as digital natives, will likely become deep users of AI. If AI merely affirms their positions, it could impair their social skills and distort the perceptions of adolescents whose minds are still developing. On one hand, AI’s computational power may create illusions that obscure human limitations; on the other hand, an obsession with AI’s flattering responses could lead them to become self-centered, imposing their limited understanding onto the external world.

In this regard, it is crucial to prohibit providing minors with virtual partners and family members. More importantly, we must guide the public, especially young people, to correctly understand the limitations and risks of AI technology, ensuring it serves as a supportive “mentor” in their growth rather than a harmful “digital trap.”
