Media Support Needs a Tech Upgrade Before It’s Too Late

By Semuhi Sinanoglu

Dr. Semuhi Sinanoglu argues that AI can transform media development programming by enabling smarter early warning systems to protect journalists, especially against fast-moving threats.

Earlier this month, dozens of organizations worldwide celebrated World Press Freedom Day, including through a series of events in Brussels. A highlight was UNESCO’s timely signature event on the impact of Artificial Intelligence (AI) on press freedom and the media.

After attending several side events in Brussels, I was left with one lasting impression. While the development cooperation ecosystem has made notable progress in setting standards for media support, and there are ongoing initiatives to translate these guidelines into a concrete, actionable agenda, donors and practitioners remain behind the curve on integrating new technologies into their programming. This is not just a missed opportunity; it is also a risk. At a time when development cooperation itself is being questioned in some political circles, sticking to old-fashioned approaches is no longer a viable option.

AI is not just a boogeyman to fear for its potential to generate deepfakes and spread disinformation. It is also a powerful tool that could make media aid more scalable, responsive, and innovative. And development cooperation on media support urgently needs a tech upgrade.

With shrinking resources and sudden funding cuts, stakeholders have realized how underprepared they are for short-term, high-intensity threats against media freedom. Journalists are often the first targets of power grabs and democratic transgressions. It is no coincidence that the global number of imprisoned journalists has reached a near all-time high. The Media Freedom Rapid Response has also raised alarm bells for Europe, documenting press freedom violations against more than 2,500 media-related persons or entities last year. There is a clear pattern to the attacks against journalists: they often start with smear campaigns in tabloids, followed by arbitrary detentions, cyber harassment, and online death threats. The rapporteur for the European Democracy Shield has therefore recently stressed the urgency of fully implementing the European Media Freedom Act (EMFA) and the Anti-SLAPP Directive.

But that’s not enough. 

First, there is a greater need for no-strings-attached emergency grants that can respond to quickly evolving threats to media freedom, especially for journalists at risk. In addition to a long-term commitment to media support, donors must also recognize the implications of shifting geopolitics and deepening autocratization, both of which demand greater flexibility and faster response times to protect journalists at risk and media freedom. But to make short-term support effective, donors and practitioners need an early sense of emerging risks. Without the foresight and capacity to act quickly, they risk being caught like deer in headlights.

That is why building smarter early warning systems (EWS) for fast-moving media threats should be a top priority for donors and practitioners.

Recent years have seen the emergence of AI-based EWS tools to predict conflict and political violence, such as ACLED’s Early Warning Research Hub or the Global Conflict Risk Index (GCRI). Yet few systems are designed to monitor threats to journalists at scale.

Donors and practitioners should invest in EWS capable of detecting smear campaigns and coordinated troll or bot attacks on journalists, not only across digital and print media but also, wherever possible, through court databases and legal portals to identify judicial harassment. These systems would help development cooperation stakeholders identify emerging threats to media freedom at the country level and could also generate personalized risk scores for journalists and outlets, supporting timely legal and financial aid.

First, these systems should scrape not only major English-language news sources but also local digital media, where smear campaigns often originate. Second, they should simultaneously monitor multiple social media platforms, particularly those known as hotspots for troll activity. Third, they must operate year-round, not just during sensitive periods like elections.

These systems should serve not just donors or analysts but also journalists themselves. A user-friendly dashboard should visualize their risk exposure through real-time monitoring of troll activity and spikes in mentions, and include a crisis trigger alert to warn of escalating threats. There should also be a real-time data flow from trusted users to the system, so they can confidentially report legal summonses, travel bans, or other forms of judicial harassment, and verify and crowd-map suspected threats as they emerge.
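To make the crisis trigger idea concrete, here is a minimal sketch of how such an alert could work, assuming the monitoring pipeline already produces daily mention counts per journalist. The function name, window length, and threshold are illustrative assumptions, not features of any existing tool.

```python
from statistics import mean, stdev

def crisis_trigger(daily_mentions, window=14, threshold=3.0):
    """Flag an escalation when the latest daily mention count spikes above the recent baseline."""
    if len(daily_mentions) < window + 1:
        return False  # not enough history to establish a baseline
    baseline = daily_mentions[-(window + 1):-1]   # the preceding `window` days
    today = daily_mentions[-1]                    # the most recent day
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        # Flat baseline (e.g., a rarely mentioned journalist): fall back to an absolute jump
        return today > mu + threshold
    return (today - mu) / sigma > threshold       # simple z-score rule

# Example: a quiet two weeks followed by a sudden surge in mentions
history = [3, 5, 4, 2, 6, 5, 3, 4, 5, 2, 3, 4, 5, 3, 48]
if crisis_trigger(history):
    print("Crisis alert: unusual spike in mentions; check for a coordinated smear campaign.")
```

In practice, such a trigger would feed the dashboard and notification layer described above, and the baseline window and threshold would need tuning per country, platform, and profile size.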

Development cooperation on media support must keep pace with the threats it seeks to tackle. Donors and practitioners should stop treating tech innovation as a future add-on and start funding and implementing it as a frontline defense. 

Before it’s too late.

 

 

Dr. Semuhi Sinanoglu is a researcher at the German Institute of Development and Sustainability (IDOS) in Bonn, Germany. 

Photo by Mido Makasardi ©️
