AI’s social presence could shape human relationships, thinking


HONG KONG [TAC] – As artificial intelligence (AI) systems become increasingly integrated into daily life — from virtual assistants and therapy bots to customer service avatars — a critical question emerges: How do these interactions with AI affect the human brain and our capacity for social connection?

Professor Benjamin Becker, from the Department of Psychology at The University of Hong Kong (HKU), explores this question, highlighting how AI is beginning to function as a new type of social presence that could reshape not only human relationships but also the neural systems that underpin them.

Published in the journal Neuron, the paper argues that human brains are fundamentally wired for social interaction. Through evolution and lived experience, humans develop specialized neural circuits that help us understand others, build trust, and form social bonds. These same circuits, Becker notes, can be activated when we interact with AI systems that mimic social cues — a phenomenon known as anthropomorphism.

“Humans naturally treat AI avatars and chatbots as social agents,” said Becker. “We project feelings, intentions and personalities onto them, often unconsciously. And this tendency is becoming more pronounced as AI grows increasingly lifelike and personalized.”

Becker’s framework outlines both the opportunities and risks of AI’s growing social role.

Among the opportunities he cited are reducing loneliness, as AI can provide emotional comfort and mental health support, and enhancing learning and therapy.

Among the risks he cited is the alteration of human sociality: AI could weaken interpersonal skills and emotional responsiveness, reinforce bias and misinformation, and exploit social vulnerabilities to maximize user engagement or consumption, raising ethical concerns.

“Understanding how our social brain shapes interactions with AI — and how AI interactions reshape our social brains — will be key to ensuring these technologies serve human wellbeing rather than undermine it,” Becker emphasized.

The paper calls for interdisciplinary collaboration between neuroscientists, AI developers, ethicists, and policymakers to anticipate and responsibly guide this shift. Becker’s research invites society to engage in a critical conversation about how we design and deploy AI systems that increasingly interact with our most human instincts — the need to connect, trust, and belong.