Is Human Well-Being Compatible with an AI Frenzy Driven by a Massive, Speculative Pursuit of Profit?

By C. J. Polychroniou

Artificial Intelligence (AI) is one of the hottest topics out there. And for a good reason. AI is transforming industries and everyday life. But how much do we understand about AI? How powerful is it? Is it compatible with capitalism? How will it affect the workforce? Is it becoming “too important to fail?” Is there a progressive alternative to AI?

C. P. Chandrasekhar, a world-renowned scholar of finance, financial policy and development and Senior Research Scholar at the Political Economy Research Institute (PERI) at the University of Massachusetts Amherst, addresses these questions in the interview that follows. He is emeritus professor at the Centre for Economic Studies and Planning, Jawaharlal Nehru University, New Delhi, where he taught for more than 30 years. In addition to many articles in academic journals and serving as a regular economic columnist for Frontline (Economic Perspectives), Business Line (Macroscan) and Economic and Political Weekly, he is the author of scores of books, including Karl Marx’s Capital and the Present. In 2009, Chandrasekhar received the Malcolm Adiseshaiah Award for contributions to economics and development studies.

C. J. Polychroniou: Artificial Intelligence (AI) has been integrated into business and our daily lives. Among other sectors of the economy, AI is said to be transforming the finance and banking industries. I’d like to start by asking you about capitalism and AI. Is capitalism compatible with AI?

C. P. Chandrasekhar: Innovation and technological change under capitalism are shaped by the needs of Capital in pursuit of profits. But that does not make technological change under capitalism all bad. Both under capitalism and beyond, these technologies can be shaped and deployed to serve the needs of a more people-centric, sustainable development agenda.

A matter of concern is how the observed evolution of Artificial Intelligence is being influenced by the needs of Capital. One obvious way is through the displacement of labor, with attendant implications for employment and the conditions of labor. In the hype over AI, however, the transformation of activity that AI induces in the rest of the economy is expected to give rise to new opportunities for employment that would neutralize AI’s expected substitution of humans. Whether labor substitution would only reduce the probability of human error, or go awry, driven by sycophantic or hallucinating bots or software robots, is a moot question. According to the hype, Artificial Intelligence, a generic, general-purpose technology, would ensure the revolutionary transformation of almost every area of human activity, combining productivity increases, employment expansion and improved human well-being through its effects on the delivery of health and educational services, for example. None of that is as yet validated by experience.

The other impact of concern is an intensification of the atomization of society, in which relations with bots increasingly substitute for a wide variety of human relations, resulting in new forms of alienation. The effects of this are already being widely observed and reported.

An overarching problem is that, since the evolution of AI and its deployment in multiple activities is largely controlled and driven by private capital, there is little effort to assess and counter what could be socially and economically disruptive consequences of that development. Oligopolistic competition in AI development only makes its evolution more “autonomous” and “spontaneous.” Moreover, the speed of that evolution and (as some have correctly argued) the ambiguity as to why, given the way large language models are developed and trained, they tend to deliver the “capabilities” they do make regulation difficult and slow to respond.

So, the question is not whether capitalism is compatible with AI, but whether social cohesion and human well-being are compatible with the AI evolution delivered by Capital in unbridled pursuit of profit.

C. J. Polychroniou: How exactly is AI transforming finance and what should we expect its impact to be on financial markets?

C. P. Chandrasekhar: There are two issues to be discussed here. The first is how AI is transforming finance. The other is how yield-hungry finance is subordinating and driving AI development.

The transformation of finance and financial markets by AI follows the ongoing automation of code-writing tasks in the finance space and the introduction of algorithms. Algorithmic trading speeds up investment responses to market movements and even determines the size and sequencing of components of a large transaction based on stored instructions, to ensure better returns without the need for continuous, error-prone human intervention. There is considerable evidence suggesting that this leads to increased market volatility and even “flash crashes.” With AI agents trained to do such tasks more “independently” and faster, these tendencies have only intensified, with fears that a range of human-run interventions would now be performed by digital proxies.

But the more concerning trend is the subordination of AI by finance in search of high returns based on exploding valuations driven by speculation. The surge in the Nasdaq and S&P 500 indices reflects this speculation-driven boom. A few firms have driven the rising trend in these indices, exaggerating their weight in determining market performance. Leading them have been the “Magnificent Seven” (Alphabet, Amazon, Apple, Meta, Microsoft, NVIDIA and Tesla), which have been the best-performing stocks. Six of those seven firms are not merely so-called “tech firms,” but have a presence of one kind or another in the Artificial Intelligence (AI) space. They account for close to 30 per cent of the weight by market capitalization in the S&P 500, driving the movement in that index. Ten leading firms accounted for almost 80 per cent of the S&P 500’s net income growth in the year to November 2025.

The spike in the share prices of these firms has meant that the price-to-earnings ratios of many of them are well above the average of around 19-20 for S&P 500 firms. NVIDIA, which crossed the $5 trillion market capitalization mark, recorded a price-to-earnings (P/E) ratio (calculated by dividing the company’s stock price by its earnings per share over the previous 12 months) of 58 in August 2025. Oracle, which is diversifying into the AI space from being a provider of database software, has also recorded high figures. And other, smaller companies are breaking records. Palantir, an AI-powered data mining company notorious in some circles for allegedly facilitating state surveillance, is being contracted by a range of commercial firms seeking to deploy artificial intelligence. Its stock price has more than doubled three years running.

A high P/E ratio indicates that investors are betting on the revenues of these firms rising significantly in the future, relative to their performance in the recent past, warranting higher stock price and capitalization values today. More diversified firms may be able to protect and raise their revenues irrespective of the future of AI. But AI-specific firms would soar, survive or crash depending on how successful AI is.

That explains why AI firms have been hyping up the potential of a future AI-powered world. And wherever those activities are commercial, AI is presented as promising rapid and large increases in profits following deployment of the technology.

There is, however, much cause for scepticism. According to an MIT study, just 5 per cent of AI projects are extracting value, while the majority record no measurable profit impact. In that assessment: “Most GenAI systems do not retain feedback, adapt to context, or improve over time.”

That would mean that, unless there is a dramatic transformation of AI performance in use, deployment and willingness to pay would taper off (or even decline) until an uncertain turnaround actually materializes. There is evidence that business uptake of AI tools is stalling. The problem is that heavily AI-dependent firms may not be able to wait long for revenues and returns, given the huge investments being made in AI development.

C. J. Polychroniou: Financial instability is inherent to capitalism. Doesn’t AI bring extra risks to the financial system? Indeed, is there a conceptual framework for assessing the systemic implications of AI for the financial system?

C. P. Chandrasekhar: The biggest threat arises because the speculative spike in share values has fueled huge expenditure outlays on the acquisition of chips, investments in data centers, employee remuneration amplified by the competitive poaching of talent, and investments in the power needed to support the development boom. Firms like OpenAI have been outlaying huge sums on their own operations, on contracts with chip makers like NVIDIA, on the cloud computing arms of the tech giants (Amazon Web Services, Microsoft Azure and Google Cloud Platform), and on purchases of the services of independent data center and cloud computing firms like Oracle. Those hardware and service providers in turn are investing large sums to expand operations and production to meet this rapid growth in demand. S&P Global estimates that, as a result of the persistence of the “global construction frenzy that shows no signs of slowing,” investments in the land, buildings, hardware and energy to establish data centers totaled $60.8 billion in 2024 and $61 billion by November 2025. But that is just what is happening in one corner of the AI space. According to Goldman Sachs, “AI hyperscalers” spent $106 billion in capital expenditure in just the third quarter of 2025, a 75 per cent year-on-year growth rate. It estimates that spending in 2026 as a whole would exceed $525 billion.

These firms, and those investing in and lending to them, are predicting rapid growth based on projected demand hinging on an AI boom. But it appears that if and when AI models find their feet, they may not be able to extract as much revenue as expected because of competition. Competition between OpenAI’s ChatGPT, Google’s Gemini and surprise entrants like DeepSeek from China (promising to offer comparable features with much lower investment) could belie the expectations of firms outlaying huge sums of capital in pursuit of promised high yields.

That prospect is particularly daunting for two reasons. The first is the role of debt in financing the investments in the AI space. AI-related companies in the US alone have issued bonds valued in excess of $200 billion in 2025, with bond sales to the tune of $180 billion by a few firms like Meta, Alphabet and Oracle accounting for a quarter of corporate borrowing in 2025. The unbridled spending by these firms has been encouraged not just by hugely optimistic estimates of the future of AI, but by the large volumes of still-cheap liquidity available in the system because of the easy money policies of central banks and a liberalized financial system in which non-bank financial players, such as private equity and credit firms, mobilize that liquidity and deploy it for profit. It is now becoming clear that a disproportionate share of such funds is being directed to AI and AI-related firms.

Those are liabilities that need to be serviced independent of the revenues earned. But the risk involved is being discounted because of the large volume of cheap and yield-hungry liquidity in circulation.

The second is the practice of “circular financing,” which makes the fragility deriving from such trends substantially greater than is visible. NVIDIA, for example, intends to invest $100 billion in OpenAI, which in turn has promised to buy $100 billion of NVIDIA chips for its ChatGPT development. That kind of entangled and concentrated exposure of firms riding on the promise of a huge AI profit boom increases fragility and the adverse impact that a collapse of that boom may entail.

Thus, the euphoric rise in substantially leveraged investments rides on the expected performance of a few entangled firms in a single tech space. This concentrated exposure of financial firms and investors, based on mere expectations of dazzling future earnings, has raised concerns that once again the US is the centre of a bubble that could unravel, as occurred in 2008. Yet the government and regulators are not stepping in to temper, if not end, the euphoria, because the investments that the boom is giving rise to, and the luxury consumption that the beneficiaries of the financial boom are indulging in, are partly responsible for much of the growth the US economy records.

C. J. Polychroniou: There are concerns about an AI bubble and that it may burst because of vast AI investments, but David Sacks, who is President Trump’s artificial intelligence and crypto czar, has gone on record stating that there will be no government bailout of the AI industry. Yet Sarah Myers West and Amba Kak argued in a recent Op-Ed in the Wall Street Journal that the government is already bailing out AI. What are your own thoughts on this matter? 

C. P. Chandrasekhar: The massive AI spend, financed with debt and both fueled by and fueling spikes in share valuations, has exposed many sectors of the US economy to the AI bubble. This implies that if and when the AI bubble bursts, the fallout would be wide and severe. This makes the AI sector “too important to fail.” So the state would have no option but to attempt a bailout. The uncertainty relates to the government’s ability to implement a bailout, and therefore to the success of that effort. The Federal Reserve’s balance sheet is so bloated that it would be hard-pressed to inject as much cheap liquidity into the system to save financial and non-financial firms as it did last time. And the elasticity of the US Treasury’s spending power is also likely to be limited by the political standoff over deficits and spending that has led to repeated, prolonged shutdowns of the US government. With the capacity to bail out firms and the economy thus restricted, stalling the downturn would be difficult. But then, those governing capitalism never learn enough from history to prevent periodic collapses, even when that failure could precipitate a crisis as bad as that of the 1930s.

C. J. Polychroniou: One last question regarding AI and capitalism. There are many experts, including Geoffrey Hinton, the so-called “Godfather of AI,” who predict that AI will produce mass unemployment because that’s how capitalism works. Is there a leftist alternative to AI?

C. P. Chandrasekhar: Since so little is really known about where this technology is going, framing an “alternative” is not easy as of now. The progressive perspective on intervention today must focus on the evolution of the technology. It is imperative that the development of the technology is released from domination and subordination by finance and the big “tech” firms that have grown in size and control by riding on the boom. It also requires regulating both the evolution of the technology and its deployment, given the possibility that it can significantly change the way humans interact with each other and the world they inhabit. Leaving the development and deployment of a technology that can have far-reaching consequences to a spontaneous and uncontrolled “learning” process necessitated by the desire to extract huge profits in the short run is a recipe for disaster.

Photo by Google DeepMind
