AI companions: addiction and privacy concerns

“The design of these AI characters makes lawmakers’ concern well warranted. The problem: Companions are upending the paradigm that has thus far defined the way social media companies have cultivated our attention and replacing it with something poised to be far more addictive. 

In the social media we’re used to, as the researchers point out, technologies are mostly the mediators and facilitators of human connection. They supercharge our dopamine circuits, sure, but they do so by making us crave approval and attention from real people, delivered via algorithms. With AI companions, we are moving toward a world where people perceive AI as a social actor with its own voice. The result will be like the attention economy on steroids.

Social scientists say two things are required for people to treat a technology this way: It needs to give us social cues that make us feel it’s worth responding to, and it needs to have perceived agency, meaning that it operates as a source of communication, not merely a channel for human-to-human connection. Social media sites do not tick these boxes. But AI companions, which are increasingly agentic and personalized, are designed to excel on both scores, making possible an unprecedented level of engagement and interaction.”

From: James O’Donnell (April 7, 2025). The Algorithm. MIT Technology Review.

Other sources:

*Guo, Eileen. (February 6, 2025). An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it: While Nomi’s chatbot is not the first to suggest suicide, researchers and critics say that its explicit instructions—and the company’s response—are striking. MIT Technology Review.

*Mahari, Robert and Pataranutaporn, Pat. (2025). Addictive Intelligence: Understanding Psychological, Legal, and Technical Dimensions of AI Companionship. MIT Schwarzman College of Computing.

“We examine the emerging phenomenon of ‘addictive intelligence’ through the lens of AI companionship platforms and their psychological impact.”

*Marriott, H. R., & Pitardi, V. (2024). One is the loneliest number… Two can be as bad as one. The influence of AI Friendship Apps on users’ well‐being and addiction. Psychology & Marketing, 41(1), 86-101. [PDF] [Cited by]

“Although technology advancements provide opportunities for social interactions, reports show that people have never felt so alone and are increasingly adopting AI friendship and therapy-related well-being apps. By adopting a mixed-method approach (i.e., netnography and quantitative survey), we investigate the extent to which AI friendship apps enhance users’ well-being—and to what extent they further exacerbate issues of using technology for social needs. Findings show that users of AI friendship apps report well-being benefits from the relationship with the AI friend and, at the same time, find themselves being addicted to using the app. Specifically, we show that users’ loneliness and fear of judgment, together with AI sentience and perceived well-being gained, increase addiction to the app, while AI ubiquity and warmth reduce it. Taken together, the results show that despite the intended positive purpose of the apps, the negative effects that AI friendship apps have on well-being may be much greater.”

*Kirk, Hannah Rose; Gabriel, Iason; Summerfield, Chris; Vidgen, Bertie; Hale, Scott A. (2025). Why human-AI relationships need socioaffective alignment. arXiv. [PDF] [Cited by]

“Humans strive to design safe AI systems that align with our goals and remain under our control. However, as AI capabilities advance, we face a new challenge: the emergence of deeper, more persistent relationships between humans and AI systems. We explore how increasingly capable AI agents may generate the perception of deeper relationships with users, especially as AI becomes more personalised and agentic. This shift, from transactional interaction to ongoing sustained social engagement with AI, necessitates a new focus on socioaffective alignment: how an AI system behaves within the social and psychological ecosystem co-created with its user, where preferences and perceptions evolve through mutual influence. Addressing these dynamics involves resolving key intrapersonal dilemmas, including balancing immediate versus long-term well-being, protecting autonomy, and managing AI companionship alongside the desire to preserve human social bonds. By framing these challenges through a notion of basic psychological needs, we seek AI systems that support, rather than exploit, our fundamental nature as social and emotional beings.”

*Kirk, H. R., Vidgen, B., Röttger, P., & Hale, S. A. (2024). The benefits, risks and bounds of personalizing the alignment of large language models to individuals. Nature Machine Intelligence, 6(4), 383-392. [Cited by]

“Large language models (LLMs) undergo ‘alignment’ so that they better reflect human values or preferences, and are safer or more useful. However, alignment is intrinsically difficult because the hundreds of millions of people who now interact with LLMs have different preferences for language and conversational norms, operate under disparate value systems and hold diverse political beliefs. Typically, few developers or researchers dictate alignment norms, risking the exclusion or under-representation of various groups. Personalization is a new frontier in LLM development, whereby models are tailored to individuals. In principle, this could minimize cultural hegemony, enhance usefulness and broaden access. However, unbounded personalization poses risks such as large-scale profiling, privacy infringement, bias reinforcement and exploitation of the vulnerable. Defining the bounds of responsible and socially acceptable personalization is a non-trivial task beset with normative challenges. This article explores ‘personalized alignment’, whereby LLMs adapt to user-specific data, and highlights recent shifts in the LLM ecosystem towards a greater degree of personalization. Our main contribution explores the potential impact of personalized LLMs via a taxonomy of risks and benefits for individuals and society at large. We lastly discuss a key open question: what are appropriate bounds of personalization and who decides? Answering this normative question enables users to benefit from personalized alignment while safeguarding against harmful impacts for individuals and society.”

*Salah, Mohammed; Abdelfattah, Fadi; Alhalbusi, Hussam; Al Mukhaini, Muna. (2024). Me and My AI Bot: Exploring the ‘AIholic’ Phenomenon and University Students’ Dependency on Generative AI Chatbots – Is This the New Academic Addiction? Research Square. [PDF] [Cited by]

“Amidst the buzz of technological advancement in education, our study unveils a more disconcerting narrative surrounding student chatbot interactions. Our investigation has found that students, primarily driven by intrinsic motivations like competence and relatedness, increasingly lean on chatbots. This dependence is not just a preference but borders on an alarming reliance, magnified exponentially by their individual risk perceptions. While celebrating AI’s rapid integration in education is tempting, our results raise urgent red flags. Many hypotheses were supported, pointing toward a potential over-dependence on chatbots. Nevertheless, the unpredictable outcomes were most revealing, exposing the unpredictable terrain of AI’s role in education. It is no longer a matter of if but how deep the rabbit hole of dependency goes. As we stand on the cusp of an educational revolution, caution is urgently needed. Before we wholly embrace chatbots as primary educators, it is imperative to understand the repercussions of replacing human touch with AI interactions. This study serves as a stark wake-up call, urging stakeholders to reconsider the unchecked integration of chatbots in learning environments. The future of education may very well be digital, but at what cost to human connection and autonomy?”

See also —

Artificial Intelligence and Education

Facial recognition: technology and privacy

Questions? Please let me know (engelk@grinnell.edu).