AI in Aid: Experimentality, Maldata, and Data Extrapolation

By Kristin Bergtora Sandvik

Kristin Bergtora Sandvik's commentary argues for a return to humanitarian ethics to deal with AI in aid in 2026.

The ongoing restructuring of the humanitarian sector is happening with little critical attention to the digital space. The adoption and adaptation of AI happen without adequate investments in learning and strategic thinking, policymaking and guidelines, and capacity building. Debate is lacking because tech and innovation specialists within the sector, civil society, and academia have lost jobs or funding. Yet humanitarians can and must engage to understand the problems they have – and fix them.

This commentary contributes to a shared diagnosis of current dynamics around AI and data in aid and their longer-term repercussions. It sets out three claims: (1) we have moved to a phase of ‘embedded experimentality’, (2) there is a rise of ‘Maldata’, and (3) there are emergent human and synthetic practices of data extrapolation. The commentary argues for a return to humanitarian ethics as a useful way of convening stakeholders around conversations about dilemmas, tradeoffs, and best-practice solutions.

AI/d in 2026: the need to rethink harder

In their predictions for 2026 in The New Humanitarian, Loy and Worley (2026) note that the sector suffers not only from a money problem but also from political hostility and indifference toward the humanitarian project, resulting in an ongoing ‘forced remodeling of international aid and humanitarian response’. In this commentary, I argue that a parallel remodeling has been happening through the digital transformation of aid. While various digital tools have been promoted as ‘game changers’ in the humanitarian technology hype cycle, AI might at last be the ‘tech revolution’ aid is said to require.

AI contributes to an accelerating shift in how aid work is understood and carried out. AI also reshapes how humanitarian knowledge is produced, used, and trusted. Much has happened since the early days of trying to figure out what ChatGPT would mean for it all (Sandvik 2023; Sandvik and Jumbert 2023; Sandvik and Lidén 2023). At present, the rapidly growing use of AI tools for ‘work’ – i.e., imagery and data generation, analysis, planning, and coordination – takes place in the context of massive aid downsizing and overwork and an operational environment compromised by misinformation, disinformation campaigns, and pervasive cyberinsecurity.

Reports and academic analyses of the contraction and collapse of the humanitarian sector, which began accelerating in early 2025, are emerging, but so far with little attention to the digital space. Today, the ‘AIfication of aid’ intersects with financial, political, and moral crises across the sector. I suggest that had the rise of AI occurred a decade earlier – during the height of Big Humanitarianism (or the so-called ‘aid industrial complex’) – its adoption and adaptation would likely have unfolded very differently. There would have been significantly higher investments in learning and strategic thinking, policymaking and guidelines, and capacity building. There would also have been more vigorous debate involving tech and innovation specialists within the sector, civil society, think tanks, and academia. However, as many have lost jobs or funding, the sector has gone through a collective cognitive decline. By setting out three claims, with an attendant set of innovative concepts for discussion, I aim to contribute to a shared diagnosis of current dynamics and their repercussions for aid.

Embedded experimentality as the new reality 

My first claim is that embedded digital experimentation is the new reality in aid. AI transforms aid work but also deepens technology-influenced changes in humanitarian practice. Tech evangelists and utopianists have often propelled the digital transformation of aid forward. The humanitarian experimentation literature, which has focused largely on discrete and bounded technology projects in aid (such as biometrics, drones, or blockchain), holds that humanitarian work is by nature experimental due to its uncertain and often insecure context. Yet, instead of understanding what has been going on over the last 3-4 years as consciously experimental – meaning that the sector plans and designs trials of digital tools – I propose that ‘embedded experimentality’ is a better term. While aid actors are still innovating in the digital space, experimentation is an integrated feature of the digital infrastructure, and the humanitarian context is of no particular relevance to what technology providers offer.

However, experimentality is also caused by haphazard AI adoption across the sector. So far, we mostly have anecdotal knowledge about furtive, informal, and unsystematic use of AI in the aid sector: according to the Humanitarian Leadership Academy, while there is a high rate of individual adoption – 70% of humanitarians use AI daily or weekly – only 21.8% of organizations have formal policies in place. Humanitarian analyst Thomas Byrnes describes this as out-of-control ‘shadow AI’ use with significant consequences: AI hallucinations lead to wrong prioritizations; protection risks arise as AI use (often by non-English speakers) flattens nuance and shifts meanings; and poor training in using AI produces (protection) data blindspots. Byrnes observes that shadow AI is ‘informal, undocumented, and improvisational’.

The result is that ground truth and contextual understanding – the very basis of humanitarian programming – are compromised. Humanitarians are not naïve about AI, but this skepticism introduces another layer of uncertainty with respect to trust in humanitarian knowledge production. As recently noted in a study of AI in humanitarian logistics by Behl et al. (2026):

‘Participants described a growing reliance on AI-driven tools for processing unstructured data (e.g., tweets, images). These tools shape the beliefs within organizations about AI's ability to enhance operational decision-making. However, the overreliance on algorithmic outputs based on biased or incomplete data reinforced concerns about misinformation, surveillance, and reduced human empathy—especially in unpredictable, high-stakes environments. The belief that AI can “replace” nuanced judgment was repeatedly critiqued, raising doubts about its role in humanitarian settings’.

We do not know how AI is shaping ground truth over time. Regardless of one’s position on digital change in the aid sector, for humanitarians the notion of ‘ground truth’ – information gathered through direct observation or measurement and thus considered accurate and dependable – is crucial for effective planning, providing feedback, and ensuring accountability. Nevertheless, it seems likely that the type of informal embedded experimentality outlined here will have particularly detrimental consequences for a humanitarian industry in crisis: humanitarians risk losing control over ground truth itself, over the processes through which ground truth travels toward actual interventions, and over how ground truth knowledge is measured and evaluated.

The rise of Maldata

My second claim concerns ‘bad data’ as a structural problem for aid. A possible trajectory for understanding the state of humanitarian data would be to say that the sector, within a relatively short time span, moved from data scarcity to a period of selective overload of data and information – a period of both opportunities and challenges, with the sector struggling to make data actionable while overcollecting. I argue that this period has now largely ended, giving way to an era of ‘Maldata’ (borrowing the French ‘mal’ for bad). As referred to here, Maldata is an aggregate of current data misfortunes and includes, but is not limited to, data impoverishment, AI slop, work slop, and data staleness.

  • Data impoverishment. Poor data quality continues to be a challenge for the aid sector. However, the concurrent onset of the aid sector's collapse (meaning fewer humans in the loop, less access, and fewer resources to build capacity and produce ethical and strategic thinking) and the rise of AI-driven predictive analytics and a flood of synthetic humanitarian data engender new forms of qualitative, quantitative, and ethical data impoverishment, i.e. ‘bad data’. Data impoverishment is also caused by forced database and digital infrastructure takedowns, the prohibitive cost of AI tools, and political and corporate hijacking of platforms leading to censorship claims, shutdowns, and access restrictions.
  • Slop. Humanitarians are currently faced with a flood of AI slop and are struggling to find efficient ways to deal with it. This goes hand in hand with work slop – badly done AI-assisted work foisted onto colleagues to fix. If not handled more carefully than today, there is a risk that we might experience (but not properly see or understand) a sector-wide ‘AI-slopification’ of humanitarian action.
  • Data staleness. Data staleness refers to the condition where data lags and becomes outdated and/or misaligned with current reality. This happens when time-sensitive data (crisis maps, needs assessments, registrations) is not updated often enough, when there is a lag between data collection and model inference, when conditions change radically on the ground, and when predictive models are skewed because they are not fed fresh data or because baseline data is removed or censored. While algorithms may assume the data is still valid when it isn’t, users have little ability to scrutinize how and to what extent accuracy is maintained.

An era of human-made and synthetic data extrapolation

My third claim refers to human-made and synthetic data extrapolation. Humanitarians are inherently resourceful and will improvise. In response to data impoverishment and staleness – and emerging from the current mashup of individual AI adoption and scattered strategic thinking – practices will emerge that are best understood as the digital humanitarian equivalent of triage, namely humanitarian data extrapolation. Data extrapolation happens where models infer needs and risks from existing datasets rather than from direct engagement. Human-made extrapolation is only part of the problem. While there has always been extrapolation from available data, the rise of a synthetic humanitarian data space exacerbates the problem: synthetic data can do real harm.

For the aid sector, more effort must be invested in fleshing out the ideological and normative effects of synthetic data generation and use, and in gauging what data extrapolation more broadly does to ground truth and to humanitarian action. Possible detrimental outcomes of data extrapolation include misallocation of resources, gradually poorer problem understanding, reinforcement of blindspots and biases, as well as function and mission creep (similar to the notion of ‘concept drift’ resulting from undetected incremental changes in data-generating processes and concepts over time).

Moving forward 

In the Maldata future vision of aid, AI will be the McDonald's of humanitarian data. Data staleness, slop, and general data impoverishment reshape the production of humanitarian ‘ground truths’ and the sector as a whole. Humanitarian digital infrastructures will be remade by Big Tech and authoritarian governments from the Global North and the Global East. The embedded experimental nature of digital platforms means there is no escape from co-producing AI problems. Differently resourced AI, undergirded by different restrictions on how human suffering might be described and calibrated, will produce different worlds, including one of digitally differentiated humanitarian spaces.

This is arguably bleak, both as a vision of the sector and in terms of the deterministic perspective on the detrimental impact of AI. My basic motivation for writing this commentary is that even as the aid sector and humanitarian values are under attack, there is a greater, not lesser, need to engage, discuss, and innovate. Humanitarian accountability remains important. Governance of data freshness labels and the politics of data shelf life are political processes and must be grappled with as such. 

The sector needs to get its mojo back and use the resources it has. To do this, we need to understand the problems we have. To deal with AI in aid in 2026, humanitarians must engage to develop collective situational awareness and then find ways to patch problems. Finally, in a time of new world orders, deregulation, and normative decline, it is worth thinking about how humanitarian ethics – derided for ethics-washing and a lack of legal and political clout – can still be a highly useful vehicle for convening stakeholders: for conversations about identifying and ranking critical problems, dilemmas, and tradeoffs related to digital transformation, but also for understanding what best practices can look like and how to develop them.

 

 

Kristin Bergtora Sandvik (Cand. Jur. UiO 2002, Harvard Law School S.J.D. 2008). Professor of Sociology of Law, University of Oslo, and Professor II of Sámi and Indigenous Peoples' Law, UiT The Arctic University of Norway.

I am grateful to Pierrick Devidal for sharing ideas and concepts, to Maria Gabrielsen Jumbert, Eva Johais, Andrea Düchting, Stuart Campo, and Rob Grace for comments on previous drafts, and to the global group of humanitarians participating in the MerlTech webinar ‘How will AI change the way humanitarians produce knowledge’ for their generous feedback on my ideas for this article.

Photo by Google DeepMind
