AI is Giving State-linked Hackers a New Edge

By Matt Ince

Matt Ince argues that without coordinated action, AI will continue to erode digital security, amplify systemic risks and empower a growing range of actors to exploit vulnerabilities.

Picture a scenario where a single line of AI-generated code can bring a national utility to a halt or enable the theft of a leading tech firm’s intellectual property. This is no longer the realm of speculative fiction. Artificial intelligence is transforming the international security landscape, giving state and non-state actors greater power to disrupt economies, influence societies and challenge the rules and norms that have traditionally underpinned global stability.

The central trend is stark: AI is accelerating the capabilities of hostile cyber actors faster than governments and companies can defend against them. Major states like China, Russia, Iran and North Korea already appear to be racing to embed these tools into their digital arsenals, with use cases ranging from malware development to influence campaigns.

Beijing appears to be particularly active in this area. In a report released in February 2026 detailing malicious uses of AI, OpenAI said it had recently banned a ChatGPT account linked to an individual associated with Chinese law enforcement who attempted to use the model to plan a covert influence operation targeting the Japanese prime minister. 

The same report also described earlier operations by this user aimed at suppressing dissent and silencing critics both online and offline, domestically and abroad. According to OpenAI, these campaigns appeared to be large-scale and resource-intensive, involving thousands of fake accounts across dozens of platforms, and the use of locally deployed Chinese AI models like DeepSeek and Qwen.

But the threat is not confined to sovereign states with vast public coffers. The knowledge and expertise gaps that once held back low-level cybercriminals and hacktivist groups are shrinking. AI is acting as a force multiplier for anyone with a motive and an internet connection – enabling a far wider range of threat actors to discover software vulnerabilities and scale influence operations with a sophistication that threatens to outpace current defences.

Evidence that AI is giving state-linked hackers and other cyber criminals a new edge is rapidly accumulating. In October 2025, OpenAI disclosed that a Russian-language operator had used multiple ChatGPT accounts to prototype and troubleshoot technical components designed to enable credential theft. At the same time, networks believed to originate in Cambodia, Myanmar and Nigeria were found abusing generative AI tools in apparent online fraud campaigns, illustrating how both state-linked and criminal actors are exploiting the technology.

The scale of this shift is striking. Google reported in early 2025 that threat groups from more than 20 countries were already using its AI model, Gemini, to map out target networks and develop tailored malware capable of slipping past traditional anti-virus filters. A study by the Massachusetts Institute of Technology found that AI played a role in 80 percent of nearly 3,000 ransomware attacks analysed between 2023 and 2024, significantly increasing their speed, efficiency and resilience against defensive measures.

More concerning still, AI is no longer merely amplifying existing tactics – it is beginning to enable new ones. In November 2025, Google’s threat intelligence group reported that some threat actors are no longer using AI simply to enhance their productivity. Instead, they are deploying novel AI-enabled malware that generates malicious scripts and functions on demand during active operations, signalling a move toward more adaptive and automated cyber weapons.

Russia-linked groups, in particular, have reportedly experimented with AI-assisted malware. This comes amid intensified Russian efforts to probe European government defences and inflict economic costs on European businesses. In early December 2025, NATO’s head of cyber operations warned that the Kremlin may have supported ransomware groups behind costly attacks on British companies, including a high-profile operation targeting retailer Marks & Spencer, whose profits for the first half of the year reportedly fell by 99 percent after the incident.

Beyond cyberspace, the risks extend further. In June 2025, OpenAI cautioned that future models could lower the barrier for non-experts to create pathogens by effectively providing step-by-step guidance on how to engineer biological hazards. Taken together, these developments point to a security environment in which the pace of technological change is steadily outstripping the world’s ability to contain its misuse.

Looking ahead, the convergence of AI and quantum computing – although still years from full deployment – threatens to further destabilise cybersecurity. Sensitive data held by governments and businesses that is currently considered secure because it is encrypted may increasingly be seen as worth stealing now, since advances in quantum computing could eventually enable that encryption to be broken – a so-called ‘harvest now, decrypt later’ strategy. Future AI advances seem likely to allow threat actors to conduct such operations more precisely and at greater scale.

The result of these developments is a rapidly expanding attack surface where traditional deterrence and defensive measures may prove inadequate. The risk of intellectual property theft and espionage targeting firms in the foundational industries of the next decade – from high-end chip manufacturing to the very labs developing quantum processors – is therefore likely to grow. 

Global governance frameworks have yet to catch up. Few international agreements regulate AI-enabled cyber operations or autonomous weapons, and progress towards adopting new approaches is often mired in budgetary hurdles and the slow churn of policy-making. 

Without coordinated action, AI will continue to erode digital security, amplify systemic risks and empower a growing range of actors to exploit vulnerabilities in governments, corporations and societies. Indeed, the UK’s National Cyber Security Centre – part of the UK intelligence community – has warned in recent months that by 2027 a significant digital divide is likely to emerge between systems resilient to AI-enabled threats and a large proportion that will remain vulnerable.

The path forward is therefore demanding but clear. Governments and businesses must strengthen international norms governing the use of AI in cyber operations and invest far more in intelligence and foresight designed to anticipate how technological shocks will reshape behaviour. Without this, they risk entering a digital world in which traditional deterrence is ineffective and escalation is difficult to control.

Matt Ince is an Associate Director at Dragonfly Intelligence from Dow Jones and managing editor of its Strategic Outlook 2026: A Multisphere World report. He leads the firm's intelligence coverage of geostrategic and global risks. Matt spent almost a decade working in the UK Government’s national security community, coordinating inter-agency efforts to produce all-source intelligence assessments on emerging geopolitical and security risks. He is also an Associate Fellow at the Royal United Services Institute (RUSI) – the UK’s leading defence and security think tank.

Photo by cottonbro studio
