When Algorithms Negotiate Treaties

By Azira Ahimsa

Azira Ahimsa reimagines a future of international agreements in which we reconcile human ambiguity with machine precision, as states automate more of their governance and diplomacy begins to collide with code. 

For most of modern history, diplomacy has depended on artful imprecision – carefully chosen ambiguities that let rivals sign the same text while interpreting different futures. Today, that diplomatic grey zone is colliding with a technocratic impulse to automate and optimize. As governments outsource more policy to code – allocating carbon credits by algorithm, routing grain corridors with machine learning, verifying compliance via satellites and smart contracts – treaties begin to look less like prose and more like executable programs. “Machine-readable” agreements promise fewer loopholes and faster enforcement. Yet, they risk hardening what diplomacy has long kept soft: the pauses, fictions, and cultural norms that make compromise possible. The question now is not whether algorithms will enter the negotiating room, but what kinds of diplomacy their hard logic can enable.

The Temptation of the “Executable” Treaty

Think of an executable treaty as a normal legal agreement paired with machine-readable rules, including actual code, that can run on computers. These rules check, calculate, or even trigger parts of the deal. The prose remains for lawyers and legislatures, but key clauses are also written as formulas, data schemas, or smart-contract logic that software can execute. Executable clauses promise speed, consistency, and transparency via dashboards and audit trails, but they also firm up choices that used to be flexible: what to measure, which model, which baseline, and so on. Policies are no longer just words on paper; they are built into data systems and algorithms that make real decisions – which in turn need oversight. If a clause can be unambiguously computed from agreed-upon data, it is a candidate for “execution.” If it requires context, humanity, or cultural nuance, humans should be kept firmly in the loop.

In such agreements, algorithms have four main roles: sensing and scoring for verification and compliance; evaluation and thresholds for auditability and predictability; automation and execution for enforcement and administration; and modelling for negotiation and review (i.e. decision support). For compliance, code mainly embeds “if-then” logic and triggers tied to agreed-upon data. For example, in a carbon enforcement treaty: if methane intensity stays below X units for Y months, then sanctions can be lifted, funds released, or tariffs changed. These are deterministic rules implemented in code or smart contracts, fed by mutually accepted “oracles” (satellite feeds, meters, customs data). Code also provides a verification layer that turns raw data into treaty-relevant metrics, such as detecting flaring in imagery, counting fishing effort, and flagging anomalies in trade flows. Code can also automate with brakes built in: some actions can execute automatically (for example, escrow release, quota updates), but good designs include cooling-off periods and human overrides so edge cases and anomalies do not become instant breaches. Finally, algorithms support decisions; they do not author treaties themselves. Generative or analytical models can help draft language, test scenarios, and spot inconsistencies, but they are advisory. The binding text is still negotiated by humans; the code only implements what is computable from that text. The political choices, such as what to measure, which thresholds, and which baselines, are still negotiated, and that is where oversight and appeal mechanisms matter.
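To make the “if X for Y months, then relief” mechanic concrete, here is a minimal sketch of such a clause as deterministic code, with the brakes described above: a breach resets the clock, relief enters a cooling-off window rather than firing instantly, and a human hold can pause execution. Every name, threshold, and structure here is a hypothetical illustration, not a real treaty system or smart-contract API.

```python
from dataclasses import dataclass

# Illustrative parameters only: the "X units for Y months" of a
# hypothetical methane clause, plus a cooling-off delay before execution.
THRESHOLD = 10.0        # agreed methane-intensity ceiling (X units)
REQUIRED_MONTHS = 6     # consecutive compliant months required (Y)
COOLING_OFF_DAYS = 30   # delay before relief executes, allowing review

@dataclass
class ClauseState:
    compliant_streak: int = 0
    relief_pending: bool = False   # armed, awaiting cooling-off
    relief_executed: bool = False
    human_hold: bool = False       # diplomats can pause execution

def ingest_reading(state: ClauseState, intensity: float) -> ClauseState:
    """Feed one monthly reading from the agreed oracle."""
    if intensity <= THRESHOLD:
        state.compliant_streak += 1
    else:
        state.compliant_streak = 0  # a breach resets the clock
        state.relief_pending = False
    if state.compliant_streak >= REQUIRED_MONTHS:
        state.relief_pending = True  # armed, but not instant relief
    return state

def execute_if_clear(state: ClauseState, days_elapsed: int) -> ClauseState:
    """After the cooling-off window, execute relief unless a human hold is set."""
    if (state.relief_pending and not state.human_hold
            and days_elapsed >= COOLING_OFF_DAYS):
        state.relief_executed = True
    return state
```

The design choice worth noticing is that compliance and execution are separate steps: the sensing layer can arm a trigger, but nothing irrevocable happens until the cooling-off period has passed and no human override is in place.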

Such diplomacy presents three broad advantages. First, code can reduce opportunism. If sanctions relief triggers only when satellite-detected flares fall below a verifiable threshold, or if tariff reductions phase in automatically when import parity is reached, compliance becomes telemetry rather than trust. Second, algorithms can widen participation. Smaller states that lack large negotiating teams can lean on open-source models to simulate scenarios, price risks, and audit others’ claims. Third, automation can build in necessary off-ramps and de-escalatory measures. Just as markets use circuit breakers to prevent panic selling, algorithmic “slowdowns” could throttle feedback loops that turn tariff skirmishes into trade wars.

There is also a democratic case. Diplomacy often happens in backchannels where accountability is fuzzy by design. Encoded clauses, published and auditable, could bring statecraft into the daylight. Imagine a public dashboard that shows, in near-real time, whether a methane pledge is being met or a fishing quota breached. Citizens could interrogate the evidence behind a minister’s boast or a rival’s accusation. In an era of contested facts, machine-verifiable claims have appeal.

Where Code Snaps

Naturally, anyone who works with complex systems knows the fragility behind clean metrics. Treaties endure because they tolerate ambiguity: a missed deadline forgiven after a phone call; a clause left artfully vague so both sides can save face at home. Executable obligations threaten to turn every deviation into a breach. The more rigid the code, the less room for the human alchemy – empathy, patience, ritual – that converts compliance into consent.

Measurement itself is often political. An emissions baseline might favor some economies over others; a conflict-monitoring model trained on one region’s imagery may misread another’s terrain; a “neutral” optimization for shipping routes can externalize risk onto coastal communities with less voice. Once translated into code, these judgments harden into default reality. Disputes then migrate from treaty text to model parameters – but now the argument is only legible to a small priesthood of engineers and econometricians.

Opacity compounds the problem. Private vendors might claim proprietary protections, and security agencies may classify the features that matter. In effect, crucial parts of the system come to sit outside the treaty’s oversight, rendered inaccessible rather than explicitly exempted. Politicians will also be able to hide behind “the algorithm” as they once hid behind “market forces.” When accountability is outsourced to math, responsibility diffuses. Who is answerable when a smart contract misfires and freezes aid flows, or when a satellite misclassifies harvests and triggers automatic tariffs? Governance by dashboard can produce a world that looks compliant and behaves otherwise.

There is also the adversarial frontier. Anything with incentives attracts manipulation. Goodhart’s law – when a measure becomes a target, it ceases to be a good measure – applies with special force to executable treaties. Models can be gamed; sensors spoofed; thresholds skirted. States will optimize the metric, not the norm, and clever firms will sell the playbooks.

The Cultural Cost

Diplomacy rests on culture: notions of honor, hierarchy, and timing that vary across societies. Codifying deals risks erasing that substrate. In some traditions, ritual ambiguity is not a bug to be debugged but a form of respect – an agreement to coexist without forcing premature precision. In others, explicitness signals distrust. A treaty-machine that insists on a single grammar of rules will quietly standardize one worldview at the expense of others. What we call “efficiency” may be cultural imposition by code.

Yet algorithms can preserve pluralism if we design for it. A model can carry multiple ontologies; an engine can output ranges rather than verdicts; a smart contract can require a human quorum to execute. We do not have to choose between society and software. The real choice is whether to build systems that recognize different ways of knowing – or to overfit the world to what we can easily compute.

From “Trustless” to Trustworthy

If algorithmic diplomacy is inevitable, we should make it serve peace rather than shred it. Three principles are essential.

  1. Disclose and contest: Every executable clause should come with a plain-language model card that includes data sources, performance across contexts, and plausible harms. Treaties should embed a right of algorithmic appeal – a formal channel for parties, and affected communities, to challenge the metrics that govern them. Independent audits must be routine and public.
  2. Keep a human in the loop – and a hand on the brake: “Automatic” must never mean irrevocable. Reversible triggers and cooling-off periods should be built into smart contracts. Mixed panels with diplomats, domain experts, and lay representatives should be established to interpret borderline cases. Code should be treated as a governor, not a governor-general.
  3. Pluralize the stack: Monocultures should be avoided. Parties should be allowed to choose among validated models whose differences reflect legitimate priorities, require ensemble methods that incorporate multiple perspectives, and maintain treaty sandboxes where new algorithms can be tested without geopolitical consequence. Redundancy is not waste in governance – it is insurance.

With these guardrails, algorithmic tools can do what they do best: furnish shared evidence, reveal trade-offs, and enforce the parts of an agreement that all sides genuinely want to stabilize – without pretending to replace judgment. The aim is not diplomacy without diplomats, but diplomats with better instruments and explicit norms about when to put them down.

A New Craft for a New Century

The great diplomats of the last century were poets of ambiguity and architects of institutions. Their successors need a third craft: systems literacy. They must read a model like they read a memo, hear what a dashboard is not telling them, and spot when a variable encodes a value that was never negotiated. Foreign ministries should cultivate “ambassadors to algorithms” who can translate between political aims and technical designs – and recognize when code is being used to smuggle policy in by other means.

We should not idealize the old order. Backchannels and “constructive vagueness” have enabled both peace and impunity. But neither should we rush into a world where every clause clicks like a ratchet. The future of statecraft will be woven from both fibers: a mesh of algorithms and ambiguities, dashboards and dinners. Code can carry commitments across borders at machine speed, but only humans should decide which commitments deserve that power – and when to let a silence, or a story, do what code cannot.

Azira Ahimsa is the 2025 Rising Expert on Technology at Young Professionals in Foreign Policy (YPFP). She holds an MSc in Defence, Development and Diplomacy from Durham University (2016), is ASEAN Co-Chair of the US ASEAN Young Professionals Association, and has served in digital trade roles at the Australian Trade and Investment Commission and the UK Department for Business and Trade in Jakarta.

Photo by Sabrina Gelbart
