Healing Beyond Survival: AI, Mental Health, and War

By Robert Schuett and Susanne Schuett

In war, and its aftermath, mental survival is as urgent as physical survival. Robert Schuett and Susanne Schuett argue AI can help heal and build resilience—if it is grounded in human dignity and the primacy of psychosocial care.

“Access to services – mental health in catastrophes and emergencies” is this year’s World Mental Health Day theme. The choice could not be more timely. Wars in Ukraine and beyond are displacing millions, shattering communities, and leaving invisible wounds that will outlast the fighting. Mental health is no longer a side issue in humanitarian crises; it is the frontline of human suffering, but also of human resilience.

But how can societies provide mental health care when traditional systems are overwhelmed, infrastructure is broken, and trust itself has been eroded? This question is more urgent than ever. Even in wealthy nations, mental healthcare services are stretched; in conflict and war zones, they are disrupted or nearly absent. The gap between need and capacity is growing into a chasm.

One response is coming from the frontier of technology. In April this year, Stanford University launched its AI for Mental Health (AI4MH) initiative and set the tone by calling for AI not as a substitute for human care, but as a bridge to access. This is not an isolated development. The Stanford Institute for Human-Centered AI (HAI) continues to emphasise socially grounded AI in medicine. In parallel, the Tony Blair Institute for Global Change (TBI) has expanded its focus on AI in global governance, with 2025 reports on how digital tools can reinforce fragile health systems in low-resource and conflict settings. Together, these initiatives highlight a shift: AI is entering one of the most socially and psychologically sensitive (and politically contested) frontiers of our time, mental health in war and crisis.

The challenge, then, is scale. In conflict zones, where trauma multiplies faster than care systems can respond, the central question is whether AI can help bridge the gap between overwhelming need and limited human capacity—without losing sight of the social fabric on which all healing depends. Effectively addressing these needs requires tackling the social determinants that shape vulnerability across the life course. People exposed to unfavorable social circumstances are more likely to experience poor mental health, often as a result of structural factors that generate and perpetuate intergenerational disadvantage. The most pervasive social determinants include socioeconomic disadvantage, early-life and childhood adversity, migration, discrimination, inequalities, and social isolation. Confronting these challenges is, fundamentally, a matter of social justice.

AI could help meet this dual challenge of scale and social vulnerability by extending the reach of limited human resources and supporting timely, responsive care. In practice, this means scaling early detection, identifying urgent needs across large populations, and providing interim psychosocial support when human resources are overstretched. Yet its effectiveness is inseparable from the social systems in which it is deployed. Without attention to equity, access, and community rebuilding, AI risks widening disparities rather than reducing them.

The situation in Ukraine illustrates both the urgency and the practical potential of AI for mental health in war. Since the beginning of the full-scale Russian invasion in 2022, Ukrainians have faced what professionals call individual and collective trauma, adding to a long history of national suffering. In response, the All-Ukrainian Mental Health Program “How Are U?”, initiated by First Lady Olena Zelenska, has sought to build a prevention-focused, human-centered system of mental health services even in wartime. Its vision is to cultivate resilience, make mental healthcare a daily habit, and treat mental health as equal in value to physical health. As the program emphasizes, “Let’s restore the person, and the person will restore everything.”

AI could strengthen this vision by extending the reach of mental health support, helping human professionals connect with those in need, and ensuring continuity of care even in times of disruption. Certain groups are particularly vulnerable and stand to benefit from targeted AI-enabled support. Displaced people could use digital platforms to map needs and connect to housing, employment, and psychosocial services, integrating mental health into the broader psychosocial recovery journey. For children, AI could support teachers and caregivers by flagging developmental or behavioral concerns early, guiding interventions toward safe, trauma-informed environments. Active-duty soldiers could use AI-assisted screening tools to detect early signs of PTSD or depression, enabling specialists to prioritise urgent cases and deploy resources efficiently. Returning veterans and their families could benefit from AI navigation systems that coordinate counseling, rehabilitation, social benefits, and employment support, while clinician-augmentation tools track symptom trends and reduce administrative burden.

These potential applications illustrate that AI cannot—and must not—replace human care. Rather, it extends reach, reduces delays, and provides continuity in moments of disruption. AI is not just a technical intervention; it is a social intervention. Its effectiveness depends on embedding tools into trauma-informed, socially grounded systems that respect privacy, equity, and human dignity. In other words, AI must bridge the social gap by supporting human care rather than replacing it.

In practice, AI’s role is to route, flag, and assist. It is not licensed to treat in lieu of human professionals—especially with traumatised populations. It can support triage and crisis detection, surfacing high-risk messages and alerting human responders within minutes. It can also help with population-level screening, analysing language signals at scale with consent and robust privacy safeguards. Multimodal tools can augment clinicians, combining speech, text, and video data to track symptoms over time and reduce administrative burden. What AI cannot do is replace human empathy, explain war to a child, or heal grief. Its interventions must always escalate high-risk cases to trained professionals. AI can play these three complementary roles (triage, screening, and clinician support), but always as a partner to humans, never as a replacement.

The deployment of AI must adhere to core principles: prioritize triage over therapy, make consent and privacy non-negotiable, keep humans in the loop, deploy chat-based tools narrowly and safely, ensure secure and interoperable records, design for low-resource contexts, and build ethical governance into operational practice. Digital tools must respect equity, dignity, and social context, reinforcing rather than weakening existing support systems.

The potential is real, but so are the risks. When misapplied, automated systems may misinterpret distress or fail to respond with understanding. Displaced populations may be asked to share sensitive data, raising privacy and retraumatisation concerns. Algorithms trained on data from privileged populations may deepen inequities if not carefully adapted to Ukraine’s diverse population. Responsible deployment in Ukraine, and elsewhere, requires continuous monitoring, transparent reporting, and participatory oversight, ensuring technology expands the circle of care rather than narrowing it. Aligning AI with the “How Are U?” philosophy would ensure that technology strengthens the social fabric and protects human dignity.

These considerations have global relevance. Conflict-affected regions worldwide face similar challenges: widespread trauma, fractured social systems, and limited access to mental healthcare. AI has the potential to extend the reach of interventions, personalise support, and strengthen resilience, but only if it reduces, rather than reinforces, structural inequalities. In other words, mental health is inseparable from social context, not only in general but especially in wartime. AI can help, to be sure, but only if it is deployed in service of justice, equity, and human connection.

True recovery requires human care, solidarity, and social support, particularly for those whose vulnerability is amplified by structural disadvantage and conflict. Yet, when guided by social justice, AI becomes a tool of equity, enabling every child, displaced family, soldier, freed detainee, and returned veteran to be seen, heard, and supported within a network of care. Social AI is grounded not in magic, but in justice, dignity, and care. By centering policy on psychosocial healing, respect for trauma, and transparent governance, Ukraine and other conflict-affected societies can use AI to widen the circle of care, ensuring that those who suffer most are not left behind.

A companion piece to this post can be read here.


Robert Schuett is co-founder and managing partner at STK Powerhouse, a global risk advisory firm. A former Defence civil servant, he also serves as Chairman of the Austrian Political Science Association and is a long-standing Honorary Fellow at Durham University.

Susanne Schuett is a senior executive at a Viennese outpatient mental health clinic. A psychologist by training, she holds the habilitation (venia docendi) in psychiatry from the Medical University of Vienna and serves on the advisory board of STK Powerhouse.

Photo by Luis Dalvan
