The AI Gamble Could Trigger a 2008-Style Crash

By Yu Xiong

Yu Xiong argues that we’re seeing the same lethal combination of overvaluation, overdependence, and under-regulation that toppled the financial system in 2008, but there are alternatives.

It’s said that insanity is doing the same thing over and over again and expecting different results. The relentless cycle of AI innovation may not be insanity, but it reflects an accelerating logic that’s becoming increasingly hard to steer.

Doubts are reaching boiling point. One in five Gen Zers already fears losing their job to AI. Billionaire investor Peter Thiel just inexplicably dumped his entire $100 million stake in chipmaker Nvidia. And even as Alphabet's share price doubled, Google CEO Sundar Pichai spoke of “irrationality” in the boom, warning that “no company is going to be immune.”

It’s 2025… but it’s starting to look a lot like 2008. And there’s a growing risk that the world’s feverish economic dependence on AI could spell trouble ahead.

Venture capitalists have bet billions on start-ups with no clear path to profit. Federal subsidies have inflated corporate valuations. Dazzled global leaders talk as if AI will fix everything from traffic to climate change, with Sir Keir Starmer recently stating that soon, "every industry in the UK will be a tech industry" because of AI.

Washington has made it the backbone of growth, crediting it with more than 90% of economic expansion in the first half of this year. Now, the U.S. government is pushing the $500 billion Stargate project, hardwiring AI into national infrastructure and building nuclear mega-projects to power it.

Buoyed by the success of DeepSeek, China’s government is investing $56 billion in the sector this year alone, deploying AI into 90% of its economy over the next five years. And in Europe, a new €1 billion plan to ramp up AI in key industries follows February’s launch of the €200 billion InvestAI initiative – driving an “‘AI-first’ mindset,” according to Ursula von der Leyen.

This rush reflects a familiar mindset in policymakers and tech entrepreneurs alike: the belief that if technology can move faster, it should – even if society, regulation, or ethics can’t keep up. This is the idea of “right accelerationism”: that speed itself equals progress, and that disruption is always good. But history tells us that acceleration without direction often leads to collapse.

And there’s no excuse for complacency because the warning signs are evident. Investors are paying far more for tech companies than these firms are earning or producing – a symptom of an accelerating market dynamic where optimism outpaces oversight. The recent $200 billion loss in Meta’s market value exposed the fragility of the AI boom. Economists are already drawing comparisons to the optimistic years before the 2008 crash.

I study how emerging technologies interact with real-world systems – from carbon markets to financial networks – and I see the same vulnerabilities forming again. If governments keep funnelling subsidies and national projects into a handful of private companies, they risk creating a dangerously unstable AI sector that’s “too big to fail.”

The story we keep hearing is that AI represents the end of everything that has held humankind back. We’re starting to believe we can achieve limitless productivity, limitless profit, and limitless potential. That narrative is where the danger lies. Beneath the buzz and hyperbole, we’re seeing the same lethal combination of overvaluation, overdependence, and under-regulation that toppled the financial system.

Entire sectors – finance, energy, defence – are betting their futures on systems they barely understand. If one major provider stumbles or gets hacked, the damage could ripple through the economy, just as Lehman Brothers’ collapse did in 2008.

Cheap credit and government incentives pushed investors toward mortgage-backed securities in the 2000s. Now, tax credits and federal contracts are doing the same for AI infrastructure. The language is nearly identical: this is too important to slow down, too strategic to regulate, too innovative to fail. But fail it can.

There are, however, innovators trying to steer AI in a more holistic direction. Sachin Dev Duggal, a technologist known for his work on human-AI collaboration, says the goal isn’t faster answers, but deeper understanding. “We need systems that build bridges between disciplines,” he says, “and which capture the best of human reasoning while avoiding the dysfunction of human collaboration.”

The 2023 EY Entrepreneur of the Year, Duggal is developing systems that think with us, not for us. His focus on creating what he calls a “partner in thought” – AI that extends rather than replaces human reasoning – offers a glimpse of what responsible innovation can look like. 

In an industry obsessed with scale, tech entrepreneurs like Duggal, who put the emphasis on shared cognition and memory between humans and machines, point to a healthier path: technology that supports people and societies instead of subsuming them.

Every major economic revolution creates winners and losers. The 2008 crash destroyed millions of jobs and homes while leaving a few institutions even stronger. The same imbalance is emerging now.

The fact is that workers are being displaced faster than they can retrain. Coders, designers, teachers – people who once felt safe in their professions – are watching algorithms take over their tasks. Meanwhile, profits and power are concentrating in fewer hands.

If public trust collapses, the backlash will be fierce. The AI sector could soon face the same outrage Wall Street and The City did after the crash: accusations of greed and arrogance. And once public confidence breaks, it’s hard to rebuild. 

Sensible management is key. Global powers should moderate their AI spending, focus on projects that deliver measurable public benefit, and enforce transparency and accountability across both public and private sectors. They should also diversify their growth strategies by investing in clean energy, advanced manufacturing and education instead of relying on one unpredictable technology to carry their economies.

AI can drive prosperity, but only when paired with good governance. Without guardrails, it will amplify inequality, fragility, and mistrust. The last time optimism outpaced oversight, many developed countries almost lost their financial systems, but this time, they could lose their technological one.

Unless the West changes course, it won’t be leading the AI revolution – it will be underwriting its collapse.

Professor Yu Xiong integrates advanced technologies like AI and blockchain to manage carbon emissions. His paper on Bitcoin’s carbon footprint in Nature Communications has influenced global policy, including Tesla’s decision to halt Bitcoin transactions. He chaired the advisory board of the UK All-Party Parliamentary Group on Metaverse and Web 3.0.

