AI and ESG: two sides of the same policy coin?

Markus H.-P. Müller argues that, when it comes to AI and ESG, we need to get used to living in a state of policy flux on both issues.

A question I am often asked, because I work in financial services, goes as follows: why is artificial intelligence (AI) seen so positively by investors at a time when many are increasingly skeptical about ESG?

A variant of this question is also of relevance for individuals and policymakers. Why are we still relatively relaxed about the imminent economic disruption from AI, but fret about the comparatively minor impact of ESG on our daily lives?

The “narratives” accompanying AI and ESG are certainly very different. AI is associated, for many investors and individuals, with new and dynamic forms of activity and economic growth. By contrast, ESG (rightly or wrongly) is seen as imposing restrictions on some economic activities and growth.

In the case of AI, the positive narrative at the moment is very much investor-led. Investors have been attracted by the prospect of rapid gains, driven by well-reported growth in relatively few easily identifiable and publicly listed companies. The impact of technological advances can be demonstrated in various ways, including via the distribution of “free” AI software (e.g. ChatGPT) to individuals’ smartphones and computers. Longer-term social concerns are obscured. Investors will be particularly important in determining popular attitudes to ESG in countries such as the U.S., where much of the population has transparent direct or indirect (e.g. via pensions) exposure to stock markets.

By contrast, the investor narrative around ESG is complex and mixed. We know that there is considerable innovation, but it is spread across multiple sectors and may be less obviously visible. The firms involved may also be less in the public eye, due either to their size or private ownership, or to their geographic location. (The dominance of China in many forms of renewable energy technology does not aid transparency.) Firm size and market structures can make firms vulnerable to macroeconomic pressures, and ESG investors in many sectors have not had an easy ride in recent years.

Will popular perceptions around AI stay generally positive? Perhaps for some time. Attitudes towards AI benefit from the fact that its future social and economic impacts are still hard to define. Potential problems with AI implementation are also very much kept behind closed corporate doors, with the notable exception of the failures of self-driving cars. Increased use of AI can also be presented as part of the unstoppable march of technological progress – unavoidable, and no one’s responsibility.

ESG faces more of a struggle. While many are in favour of environmental measures in principle, specific environmental initiatives can face considerable opposition. Recent examples of controversial policies in Europe include higher pay-per-use charges on more polluting cars and the forced upgrading of domestic heating systems to less carbon-emitting alternatives. These can result in immediately higher costs for individuals, with the burden often falling disproportionately on economically disadvantaged members of society. By contrast, the benefits of ESG tend to be longer-term, with economic impacts that are more difficult to see, such as the implications of pollution for health.

I wonder how long this distinction between AI and ESG will hold. As noted above, ESG can suffer from already being under the spotlight. The debate over what does or does not make sense in environmental terms is also very open and can become a vehicle for other sorts of political dissatisfaction. Accusations of “paternalism” are easy to make, with ESG seen as interfering in how we live, travel and eat. In these disputes, it is too easy to forget that ESG, done correctly, can find new ways of doing things in a limited world, and can thus deliver growth as well.

AI will, sooner rather than later, come under the spotlight too. The questions it poses to policymakers will be just as difficult as those posed by ESG. There will be a growing realization that AI, too, has costs – particularly if it results in temporary or sustained unemployment, with governments picking up the tab. At some point, of course, investor enthusiasm for AI will also be challenged, forcing a more nuanced interpretation of its pluses and minuses.

Both ESG and AI will force policymakers to reassess many old truths, at both a macro and micro level.  The debates around ESG and AI are in some ways just two sides of the same coin: how do we deal, socially and politically, with rapid structural change?

There is a sense here of opening a policy Pandora’s Box, with uncertain results that will be both positive and negative. Let me give one example: the concept of “Baumol’s cost disease”, which stems from a 1960s study by U.S. economists William Baumol and William Bowen. The argument is that in an economy where some sectors are increasing productivity and thus wages, less-productive sectors must hike wages too as they compete for workers. This has led to a long-standing belief that the costs of providing government services (such as education, policing and healthcare) will rise remorselessly. But does this belief make sense anymore if AI can radically improve productivity in public services? Other historical policy “truths” around intellectual property rights, taxation and many other areas may also need to be re-examined.
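To see the arithmetic behind the argument, consider a stylized sketch (the notation and numbers here are illustrative, not drawn from Baumol and Bowen’s study). Suppose wages $w(t)$ across the economy grow at rate $g$, tracking productivity in the dynamic sectors, while output per worker $q_S$ in a stagnant service sector stays flat. The unit cost of that service is then

$$c_S(t) = \frac{w(t)}{q_S} = \frac{w_0 e^{gt}}{q_S},$$

rising at the same rate $g$ as wages. At $g$ of 2% a year, the real cost of delivering an unchanged lesson or consultation doubles roughly every 35 years ($\ln 2 / 0.02 \approx 35$), even though nothing about the service itself has changed. AI’s promise, in this framing, is to make $q_S$ grow as well.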

Some historical lessons are likely to hold, however. One lesson from the 19th and 20th centuries is that, while legislation to protect workers’ rights can create pushback in the short term, it does not stop long-term economic growth. A related lesson, again looking at the long term, is that legislation tends to be cumulative and, even if it can be reversed, its effects often can’t be. This is already evident in ESG, where the impact of regulation has spread into more and more areas – unsurprisingly, given the interconnectedness of the global economy. Firms cannot now afford to ignore the impact of nature-related or other risks (Figure 1), not least because investors must consider their possible implications for a company’s share price.

Figure 1: Framework for identifying nature-related financial risks

Source: University of Cambridge Institute for Sustainability Leadership, Handbook for Nature-related Financial Risks

Will this happen with AI? Investors will also need to learn how to factor risks to AI – from policy or social disruption, rather than nature – into their calculations. This process is already getting under way: the European Parliament gave final approval to its Artificial Intelligence Act in March this year, with the European Council expected to formally endorse it in May. This is the first law of its type in the world, and could become a benchmark for regulation globally. The legislation’s focus is on getting AI models and systems to meet transparency and copyright requirements, for example around generative AI. But many questions remain, not only around privacy, and this should be seen as just the start of AI legislation, not the final statement. Expect much more debate on AI ahead, usually running on separate tracks from ESG regulation, but with some points of cross-over (e.g. around the energy use of server farms).

In a period when policymakers face multiple concurrent challenges, and not just from AI and ESG, solid institutional structures will of course be particularly important. But I think that it may be more important to open the debate up than to try to contain it. Operating within established free-market structures, we will still need to make multiple choices about where we go from here. We need to reassess assumed policy relationships and also reject false binaries which posit that we can only make simple choices between one thing and another, or expect one trend to have only one outcome.

What this means, for both AI and ESG, is that we may need to get used to policy decisions made on the basis of limited knowledge. For ESG, this may be due to changing scientific understanding of what is going on in the environment and what we should do about it. For AI, policy adjustment may be driven, at least initially, by a changing understanding of what AI can do and how it will disrupt the way we live. As noted above, ESG and AI are both essentially about how we direct and manage major change. Eventually, we will find better systems to guide how we do this, but they are likely to remain complex and disputed: I don’t see any approach replicating the dominant role held, for example, by cost-benefit analysis in public spending in the second half of the twentieth century.

In conclusion, I think the current discrepancy between investor attitudes to AI and ESG will narrow – but only because the challenges posed by AI will become more apparent. Questioning of both AI and ESG is likely to get more intense – and this is necessary and desirable, because both pose major questions about how we will live in future. Policymakers need to see such questioning as constructive, rather than as a problem to be shut down: we need to get used to living in a state of policy flux on both issues.

Markus H.-P. Müller is Global Head of the Chief Investment Office, International Private Bank, Deutsche Bank AG. Markus has held teaching posts in corporate finance and economics, and has been a visiting scholar at the Frankfurt School of Finance and the University of Bayreuth, as well as at the Banking and Finance Academy of the Republic of Uzbekistan in Tashkent. His main research interests lie in the structural transformation of economies and societies, as well as in the area of sustainability. Markus has authored several books and articles on the transformation of society and economies.

Photo by Merlin Lightpainting
