Artificial Intelligence: decoupling growth from planetary limits?

Markus H.-P. Müller on how AI might allow economic growth to be combined with lower global natural resource use.
Economic growth has historically made ever-increasing demands on our natural resources. But could artificial intelligence (AI) help us break this link? AI promises not only to make our current use of resources more productive, but also to fundamentally shift the nature of economic value creation. In this article I discuss possible AI policy frameworks to encourage such decoupling.
One immediate and already visible impact of AI has been the building of very large data centres for its deployment, together with the power generation facilities and networks necessary to supply them. As a result, AI is currently seen as bad for global sustainability. But perhaps we shouldn’t be so negative: over the longer term, AI could still become a decisive lever for decoupling economic growth from resource use and thus help us avoid breaching the physical limits of our planet.
Economic literature distinguishes between two forms of such decoupling: so-called relative decoupling (where environmental pressures grow more slowly than economic output) and absolute decoupling (in which environmental pressures decline in absolute terms while economic output continues to rise).
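The distinction boils down to a simple comparison of growth rates. A minimal sketch in Python (the classification logic is the standard one from the decoupling literature; the function name and example figures are mine, for illustration only):

```python
def decoupling_status(gdp_growth: float, resource_growth: float) -> str:
    """Classify decoupling from annual growth rates (e.g. 0.03 = 3% per year)."""
    if gdp_growth > 0 and resource_growth < 0:
        return "absolute decoupling"   # output rises while resource use falls
    if resource_growth < gdp_growth:
        return "relative decoupling"   # resource use grows, but more slowly than output
    return "no decoupling"             # resource use keeps pace with (or outpaces) output

# Illustrative figures only:
print(decoupling_status(0.03, -0.01))  # absolute decoupling
print(decoupling_status(0.03, 0.01))   # relative decoupling
print(decoupling_status(0.01, 0.02))   # no decoupling
```

Relative decoupling is therefore compatible with ever-rising environmental pressure; only the first case actually reduces it.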
The problem is that, so far, while many advanced economies have achieved relative decoupling in certain sectors – most visibly in energy intensity per unit of GDP – sustained absolute decoupling at the global level remains elusive. And, with resource pressures already high, we need absolute decoupling if economic growth is to continue, let alone accelerate.
In theory, AI might let us achieve this by improving resource productivity via the optimisation of industrial, agricultural and supply chain processes. The worry remains, however, that the positive environmental effects of improved resource productivity will be quickly swamped by the resulting higher levels of economic growth.
In a previous GPJ post I talked about the Jevons paradox, named after the 19th-century British economist William Stanley Jevons, who noted that more energy-efficient steam engines were increasing overall consumption of coal, not reducing it. AI itself currently provides a super-charged example of the Jevons paradox in action: falling costs of accessing AI are resulting in much greater use of it (and of the physical resources that support it). Apply the Jevons paradox over the whole economy and the risk is that AI-assisted growth could intensify sustainability concerns, not reduce them.
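The rebound mechanism can be sketched with a stylised constant-elasticity demand model (an illustrative toy, not an empirical claim; the function name and elasticity values are assumptions of mine):

```python
def resource_use(efficiency_gain: float, price_elasticity: float) -> float:
    """Relative change in total resource use after an efficiency gain.

    The cost of the service falls by the efficiency factor; demand responds
    with constant price elasticity; physical input = demand / efficiency.
    A return value above 1.0 means total resource use rises (Jevons paradox).
    """
    cost_ratio = 1.0 / efficiency_gain               # unit cost of the service falls
    demand_ratio = cost_ratio ** price_elasticity    # constant-elasticity demand response
    return demand_ratio / efficiency_gain            # physical resource input required

# Doubling efficiency with inelastic demand (-0.5): use falls to ~71% of before.
print(resource_use(2.0, -0.5))  # ≈ 0.71
# Doubling efficiency with elastic demand (-1.5): use RISES to ~141% – Jevons in action.
print(resource_use(2.0, -1.5))  # ≈ 1.41
```

The sign of the effect turns entirely on how elastically demand responds to falling cost – which is why efficiency gains alone cannot guarantee lower resource use.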
AI can probably only decouple economic growth from pressure on planetary boundaries if it results in radically new business models capable of dematerializing value creation. Such models could include virtual services, telepresence, sharing platforms and other services that reduce the need for new material goods.
Can we, as societies, steer use of AI towards such dematerialization? At present, this is clearly not a priority. Instead, as with previous new technologies, the focus of firms and national governments is on securing or restricting access to the inputs necessary to supply AI. Note in this context the evolving struggle over US restrictions on its integrated circuit exports and Chinese controls over its rare earth resources. This reminds us of a second paradox: that those AI technologies which promise to help free us from physical constraints on productivity and output are themselves still heavily dependent on extracted, generated or manufactured inputs.
The debate over AI is constantly evolving. Calls for the limitation of the functional capabilities of AI (quite common 12 months ago) are currently less frequent. However, given the desire to secure AI inputs, we already face an opportunistic situation where AI and/or its inputs are being used by governments as a lever for other economic and foreign policy aims. Such opportunism means that multilateral or legal rules appear to count for little: commentators are already using the term “AI colonialism” to describe the involuntary nature of some of the relationships involved.
Such opportunistic management and leverage of AI will not be sustainable over the longer term. I have already talked about the need for better management of AI governance, seen from a social and cultural perspective. But I think that we will also need closer steering of AI from a narrow economic perspective, and soon.
But can policymakers ever catch up with the galloping AI horse? The complexity and speed of AI development will continue to make it difficult for governments to put a bridle on this horse, and few appear willing to risk losing the AI race by pulling on the reins with over-regulation.
But, as I have argued before, governments can and should use policy in a way that anticipates change, providing guidance and direction while continuously identifying likely future developments. This is already happening: governments are already experimenting with sandboxes, dynamic standards and algorithmic audits, and some are proposing AI regulatory agencies that would be similar in some ways to financial supervisors.
In the near term, we should expect a patchwork of regulation which could prove reasonably effective: transparency and liability rules, for example, could shift incentives and thus AI suppliers’ behaviour even before hard law is fully established. Over the longer term, efficient regulation will depend on whether a global baseline emerges in an industry which does not respect borders: if not, regulatory arbitrage is likely with firms looking for lower-regulated regimes. In summary, the speed of AI development means that regulation will never be perfectly aligned with the present – but this does not mean that we should simply accept that no regulation is possible.
AI policy will need to happen at multiple levels. We can already see an emerging patchwork of agreements between AI tech firms and local governments to provide sites and energy supplies for AI deployment. But managing the effects of AI in future will also involve increasingly acute and expensive distributive policy decisions (e.g. on employment support) that will need to be made by national governments. Populations will expect their governments to deliver social and economic policy solutions to AI-related welfare issues quickly.
Will absolute decoupling of economic growth from resource use be a priority in this situation? On the face of it, no: policy solutions will be needed at a time when many economies’ fiscal positions are already under strain and individual governments will be under immediate pressure to reduce spending and increase revenues, with sustainability issues moved towards the back of the queue.
However, I am guardedly optimistic that decoupling will not be pushed off the policy agenda. National governments will have to gain greater control over AI and its suppliers to resolve these distributive policy issues and, ultimately, should find some support from other countries and global financial markets to do this (particularly if the alternative is widespread national debt defaults). Reaching an agreement on AI taxation (e.g. via a digital services tax) could be part of creating a much broader network of international policy frameworks, regulatory tools and market incentives that can then be used to help align AI deployment with social and environmental goals. By that stage, sadly, further environmental degradation is likely to have underlined the importance of making absolute decoupling of economic growth and resource use a top policy priority.
Markus is Global Head of the Chief Investment Office and Chief Investment Officer Sustainability at Deutsche Bank Private Bank.
Photo by Google DeepMind