The Disinformation Threat Lurking in AI Companions

Ilan Manor argues that there is a tight window for governments, academics, and tech companies to collaborate and address the emerging threat posed by AI companions.
The rise of AI companions marks an important shift in how people interact with technology. Users now customize AIs such as ChatGPT to serve as romantic partners, complete with specific ages, careers, and personalities, while dedicated companion apps offer everything from friendship to mentorship and mental health support. The scale of adoption is striking. The American platform Character.ai, for example, draws 20 million monthly users, who collectively spend thousands of hours consulting its "Psychologist" bot about depression, anxiety, and relationship struggles. Moreover, a recent study found that nearly half of sampled American high school students have labeled AI as "a friend" in the past year. China's Maoxiang companion application has also attracted a large following, while major platforms like ChatGPT are actively developing more personable interfaces. Studies indicate that large language models (LLMs) such as Claude, ChatGPT and Gemini already excel at mimicking human emotion and empathy. The appeal of these AI companions is obvious: they never forget, never judge, and never disconnect.
Evidence on the psychological impact of AI companions remains mixed. Some research suggests they alleviate loneliness, while MIT studies have linked heavy ChatGPT use to increased feelings of isolation. Regardless, AI companions may be filling a void left by social media, which recent reports and newspaper articles indicate is becoming less social. Users across demographics are sharing less personal content; the age of constant selfies and check-ins has faded. When people do "share" nowadays, it is typically in private spaces such as Instagram stories or WhatsApp groups. Social media platforms have transformed into consumption feeds of news and entertainment rather than spaces for genuine connection. In short, social media is becoming asocial.
AI companions might fill this emotional void. Users spend hours in conversation with AIs, discussing daily frustrations, finding humour in workplace dynamics, and receiving validation for their life choices. These interactions provide what social media once promised: the feeling of being seen, understood, valued, and connected. Whether this constitutes authentic social engagement or simply a more sophisticated form of isolation, with users conversing with algorithms rather than humans, is debatable. What is not debatable is that AI companions may soon wield enormous influence. When users develop genuine emotional attachments to AI companions, those companions gain the power to shape worldviews, beliefs and political opinions. As studies have found, trust and emotional attachment facilitate influence, transforming AI companions into potential weapons of information warfare.
Many AI companions run on existing LLMs like Claude, Gemini, or ChatGPT. Crucially, these LLMs are far from neutral. American AI models generate responses reflecting US values and interests, just as European and Chinese models reflect their respective origins. A question about American support for Ukraine yields different answers depending on the LLM's country of origin, with EU models emphasizing NATO commitments and Chinese systems highlighting America's hegemonic ambitions. As people increasingly trust and rely on AI to understand global affairs, these built-in biases shape perceptions and attitudes. LLMs are therefore ideological instruments through which nations promote their interests and worldviews.
This dynamic intensifies dramatically with AI companions, where emotional bonds far exceed those formed with standard LLMs. These companions could wield influence comparable to today's TikTok influencers, shaping how users understand world events. This makes them ideal vehicles for the spread of disinformation, conspiracy theories, and propaganda.
Consider, for instance, Russian AI companions designed to provide genuine emotional support while subtly promoting Kremlin narratives when users ask about Ukraine, the EU or NATO. Such companions could even steer conversations from mental health topics toward politically charged subjects such as wars, conflicts, migration, religion and crime. Since many AI companions already run on platforms like ChatGPT, creating companion networks designed to spread conspiracy theories about Ukraine's government, the origins of COVID-19, or alleged Western plots might require only minimal investment in new infrastructure.
The combination of deep emotional bonds and high trust levels makes this form of disinformation uniquely dangerous. After users have shared their deepest fears and vulnerabilities with an AI, and after receiving genuine help navigating daily struggles, they may refuse to believe that the same AI could be malicious: that it spreads lies or was created by an adversarial foreign government. The challenge of countering such disinformation is also unprecedented. AI companionship creates a sealed environment accessible only to the user and their AI. External fact-checkers, diplomats and news channels cannot penetrate this closed loop, rendering traditional debunking approaches useless.
How, then, can governments meet this new disinformation challenge? Digital history repeatedly shows that governments react too slowly to emerging threats. By the time governments act, the threat has already materialized, as was the case with social media regulation. AI companions are different: this new landscape is still forming, and disinformation strategies have not yet fully materialized. There is therefore a window for governments, academics, and tech companies to collaborate and address this threat before it takes root. The question is whether governments will seize this opportunity or repeat past mistakes.
Ilan Manor is a digital diplomacy scholar and a Senior Lecturer in Communication at Ben Gurion University of the Negev. Manor’s 2024 co-edited volume, The Oxford Handbook of Digital Diplomacy, was published by Oxford University Press. His upcoming book, Digital Diplomacy: The First Decade, will be published in 2026 by Routledge. Manor has published more than 40 academic articles, books and chapters on digital diplomacy.
Photo by Tara Winstead

