The AI Dilemma: Can Artificial Images of War and Suffering Stir Empathy?

By Maria Gabrielsen Jumbert - 15 March 2024

Maria Gabrielsen Jumbert explores the pitfalls and potential of using AI to provide windows into humanitarian crises and human rights abuses.

AI-generated images have already been used by charities and human rights organizations to illustrate mass suffering and abuse. Much is potentially at stake as we are exposed to more of these images: public trust in what is real, and ultimately our ability to engage.

A debate was sparked when Amnesty International deliberately used images of protesters in Colombia marked as ‘AI-generated’, drawing criticism and backlash. Other examples include AI-generated images of refugees in Greece and of detention centres off the coast of Australia. More recently, similar images have spread unwittingly, as with images of children in Gaza sleeping in the mud that were likely AI-produced.

Images and caring for others

Public understanding and perception of humanitarian crises, conflicts and human rights abuses are closely tied to the images that come out of such atrocities. Our ability to care for others is closely linked to our visual ‘witnessing’ of this suffering. Images are used both to document events and to engage and move audiences elsewhere, whether for public awareness, political agenda-setting or fundraising. This witnessing has also been problematized by observers and actors alike, and awareness has grown in recent years around the selection and use of images, notably of children and people in vulnerable situations.

AI-generated images now promise, to some extent, to address these dilemmas and the lack of access. They aim to engage us by showing real-looking people without exposing real individuals. They can also provide visual illustrations where real photographs cannot be taken, such as in hard-to-reach conflict zones.

This can seem promising on many levels, yet we must ask whether such images compromise the two fundamental purposes they serve: to mobilize empathy and to document.

Witnessing: old debates, in new tech wrapping

New technologies continue to reshape how images are produced, shared and consumed: from the first live TV images bringing the war in Nigeria’s Biafra region into Western living rooms in 1969, and the photo of the “Napalm girl” in the Vietnam war in 1972, to recent years’ citizen reporters with smartphones and photographers with drones. Each has reshaped how we consume news and how we relate to information and to those we witness. While the idea of bearing witness is at the core of modern humanitarian action, scholars have also widely discussed the problematic sides of this kind of ‘watching from the outside’ (from Arendt and Sontag to Boltanski and Chouliaraki).

A humanitarian communication dilemma: engaging, but not exposing?

For aid organizations, human rights advocates and journalists, images showing the suffering they seek to mitigate, address or report are key to their communication. A growing understanding of how images may reproduce stereotypes and unnecessarily victimize, and of how best to ‘represent’ the victims they also seek to give voice to, has emerged alongside new external requirements, such as data protection regulations like the GDPR.

Yet these are not straightforward questions: seeking to engage and move the empathy of audiences elsewhere by showing actual suffering and violations of human dignity also means dealing with people in their most vulnerable moments.

Refugee and migrant protection

These dilemmas have an added dimension for refugee and migrant protection. Public images of people at heavily controlled borders or of families fleeing persecution may put them at risk. New facial recognition software means that even a face among many in a photograph could be matched and recognized.

Just as crucially, the conditions in some of these detention areas are rarely documented at all, owing to the lack of access for external actors, journalists and photographers. Interestingly, it is in this field that some of the first instances of AI-generated images have emerged, promising to show what is happening without exposing real individuals.

Potential and pitfalls

A first instance of AI-generated images in this field has already become a reference point, even though the images were quickly withdrawn. Amnesty International used AI-generated images last April to illustrate police violence in Colombia. The images depicted a girl screaming, holding a Colombian flag and being forcibly held back by uniformed riot police. The use of AI-generated images drew immediate backlash, first for risking confusion, despite the pictures being marked with a note on how they were produced. The main criticism was that this risked compromising Amnesty’s credibility. The organization quickly withdrew the images, amid PR efforts to explain its intention not to expose real protesters.

While humanitarian agencies have taken notice of this backlash and are observing how AI may be used for different purposes going forward, two other instances show how AI-generated images may still appear a promising option in this area.

Refugees detained

First, around the same time as the Amnesty report, the Australian law firm Maurice Blackburn launched an online photo exhibition illustrating the abuse of detained migrants in offshore detention camps on Manus Island and Nauru. The social justice firm teamed up with an advertising agency to produce images illustrating what former detainees had told them about conditions in the centres. As the legal case they had filed was discontinued, the lawyers wanted to find another way to make the witness statements known, “to do justice” to these stories.

The AI-generated images were not meant to replace existing photographs; journalists and photographers are not allowed into the detention centres, and detainees have no means of taking pictures. Rather, the AI images gave visual form to the witness reports. The lawyers achieved their objective: the images caught the public’s attention.

This documentary use of AI images, however, may ultimately fail at its core purpose: illustrating something real with images that are not. In a world of fake news and distrust, where every image increasingly demands critical scrutiny, will such images simply be received as something that can and should be doubted?

Refugees in Greece

The second case is that of the Norwegian organization A Drop in the Ocean, which used AI to generate a picture of a refugee girl in Greece for a Halloween fundraising campaign. The picture was posted on social media with the caption “to protect the real children, we use an AI image”.

The regular photographs used elsewhere in the organization’s communication generally do not show the faces of those seeking protection; people are photographed from behind or at a distance. The decision to use the image of a young girl’s face, close up, nevertheless reflects a wish to engage through identification and to “reach through” an ever more competitive, visually saturated social media stream and catch attention quickly.

But here again, using the AI image of a young girl’s face to mobilize people’s empathy may ultimately fail to move, since the picture is not real. How will this lack of authenticity affect our ability, going forward, to engage and identify with those portrayed?

Trust and our collective memory bank

Both of these examples were marked as AI-generated images, yet even so they may contribute to further blurring the line between the real and the fabricated, the trustworthy and the untrustworthy. Using AI images to illustrate real events is inherently contradictory: it tries to show something real with images that are not.

It was recently revealed by the German broadcaster Deutsche Welle that certain images of children in Gaza circulating online were likely AI-produced. While the utterly tragic conditions children in Gaza live under have been documented by a number of important sources, not least local journalists, such AI-generated images risk casting doubt on reliable images too. When AI images compete with real pictures, real images risk being rejected as well.

In an age of increasing disinformation, trust is a scarce resource. Trust is fundamental to any communication process and a prerequisite for any message to be received. There is also a time aspect here: right now, when we have seen many real images, an AI-generated image can “do the job” of illustrating a case. But what happens if we are inundated with AI images and simply have fewer real pictures to relate them to?

Will an increase in AI-generated images affect our ability to be touched by the suffering of others, and our willingness to learn and understand?


Maria Gabrielsen Jumbert (PhD, Institut d’Etudes Politiques, Sciences Po Paris, 2010) is a Senior Researcher at the Peace Research Institute Oslo (PRIO). Jumbert works on humanitarian and security interfaces in the European borderlands, digitalization and citizen humanitarianism.

This blog is an output of Do No Harm: Ethical Humanitarian Innovation and Digital Bodies, funded by the Research Council of Norway. I am grateful to Kristin Bergtora Sandvik, Marit Moe-Pryce, Kristin Skare Orgeret and Michelle Delaney for input and inspiration. A shorter version of this piece was published in Norwegian in the weekly Morgenbladet.


Photo by Tara Winstead
