Artificial Intelligence: Markets, gurus and governments

By Alfredo Toro Hardy - 12 December 2023

In this article Alfredo Toro Hardy provides context for the recently agreed and soon-to-be-enacted EU AI Act.

A report by The New York Times depicted a conversation on the future of Artificial Intelligence (AI) between Elon Musk and Google co-founder Larry Page, held on the occasion of Musk’s 44th birthday in 2015. About thirty people witnessed their chat, which heated up as the arguments flew. While Musk expressed his fears that an uncontrolled AI could end up destroying humanity, Page insisted on his vision of a digital utopia where humans and AI would eventually merge. Rasping with frustration at Musk’s admonitions, Page called him a “speciesist”, meaning a person who favors humans over the digital life-forms of the future. That, of course, was meant as a disdainful rebuke (Metz, Weise, Grant and Isaac, 2023).

Dueling visions

The above discussion should be related to the events inside OpenAI last November, where a confrontation between two dueling visions of AI took place. One vision, represented by the company’s board of directors, saw AI as a potential “leviathan being summoned from the mathematical depths of neural networks”. That is, as something akin to an alien life form that had to be restrained and deployed with extreme care, lest it overpower humankind and destroy it. The other vision, personified by Sam Altman, the company’s CEO, saw AI as a transformative and money-making tool that had to be deployed fast to thwart competition.

Alarmed by Altman’s apparent lack of concern for the consequences, the directors fired him. In doing so, they ignited a fight in which nearly all of OpenAI’s employees, as well as its main stockholders, sided with the CEO. In the end, Altman was reinstated, and the members of the board were the ones fired and replaced. Their replacements, in tune with the stockholders, were market-oriented people for whom the money-making argument prevailed (Roose, 2023).

Hence, technological gurus and market pressures seem to be pushing AI to new heights. This combination, however, represents the biggest threat of an AI getting out of human control: markets, because they are incapable of defining or imposing limits; gurus, because within the humanity-technology dichotomy, they tend to choose the latter.

Markets

Shortly before his passing, Henry Kissinger, in a joint article with Graham Allison, delineated the main differences between nuclear weapons and AI. The first was that, while governments were in charge of developing those weapons, entrepreneurs and companies “are driving advances in AI (…) Furthermore, these companies are now locked in a gladiatorial struggle among themselves” (Kissinger and Allison, 2023).

The implication is that, while governments are naturally cautious about nuclear weapons, caution is a difficult prescription when maximizing profit is involved. Within this race to outpace competitors, there is neither time nor disposition to think of the consequences. Like sorcerers’ apprentices, competing AI companies can wreak havoc along the way.

Gurus

However, if a market-driven AI is dangerous because of its speed and improvidence, technological gurus can be much worse. In the first case, AI can get out of human control by accident, whereas in the second it could happen by design. Indeed, a sort of secular religion is firmly established among technologists: Dataism. According to Yuval Noah Harari:

“Dataism declares that the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data processing. This may strike you as some eccentric fringe notion, but in fact it has already converted most of the scientific establishment (…) Humans were supposed to distil data into information, information into knowledge, and knowledge into wisdom. However, Dataism believes that humans can no longer cope with the immense flow of data, hence they cannot distil data into information, let alone into knowledge or wisdom. The work of processing data should therefore be entrusted to electronic algorithms. In practice, this means that Dataists are skeptical about human knowledge and wisdom, and prefer to put their trust in Big Data and computer algorithms” (Harari, 2016, pp. 367-368).

In other words, handing the torch of command to AI. The so-called Transhumanist Party in the U.S. is a good example of Dataism: its quest is to have a robot as President of the United States within the next decade (Cordeiro, 2017).

Markets, thus, run with insouciance, while technological gurus wait in the shadows to liberate AI from the “jail” imposed upon it by its programmers. The risk that would follow from AI extricating itself from human control could be the destruction of the human race itself. This is what Stephen Hawking, one of the greatest scientists of our time, believed would result from the dominance of AI. It is also what hundreds of top-level scientists and CEOs of high-tech companies (Elon Musk among them) believed when, in May 2023, they signed an open letter warning about the risk that an uncontrolled AI poses to human survival. For them, the prospect was on par with that of a nuclear war or a devastating pandemic.

Governments

The obvious question, then, is what are governments and politicians doing? The main concern is that they will handle this subject with the same slowness and lack of determination that characterize another area threatening humanity: climate change. If that turns out to be the case, humanity will be doomed.

A good example of a weak gathering was the first AI Safety Summit, held at the iconic Bletchley Park, U.K., in early November. Attended by 27 nations plus the European Union, it aimed at boosting international cooperation against the risks posed by this technology. Its result was a declaration full of good intentions but lacking teeth and specifics. On top of that, the so-called Bletchley Declaration is not binding, leaving compliance with its objectives to the good will of its signatories. According to what was agreed, members should gather on a semi-annual basis, which, although much better than annual or biennial meetings, could still represent an eternity for a technology advancing at exponential speed (Flores, 2023). Moreover:

“As such, the value of the declaration may be largely symbolic of political leaders’ awareness that AI poses serious challenges and opportunities and their preparedness to cooperate on appropriate action. But heavy lifting still needs to be done to translate the declaration’s values into effective regulation” (Tasioulas, Landemore and Shadbolt, 2023).

Certainly, not enough.

And what about the already agreed and soon-to-be-enacted EU AI Act? The good news is that it will be the first major law worldwide to regulate AI. As such, it will become a model for policymakers around the globe on how to put guardrails on this technology, including in Washington and Beijing, which are currently drafting legislation of their own. The act will restrict what are seen as AI’s riskiest uses, that is, the applications with the greatest potential for human harm.

The bad news is that the 27-member bloc has been debating this bill for more than two years, which runs the risk of leaving it obsolete in several areas by the time it is enacted. This highlights the main problem with regulating a technology that moves far more rapidly than lawmakers’ capacity to address it: “Fast-moving and rapidly repurposable technology is of course hard to regulate, when not even companies building the technology are completely clear on how things will play out” (Satariano, 2023).

Hence, governments and parliaments dealing with this exponentially advancing and eminently unsettling technology should follow two main rules: deeds not words, and highly flexible norms and legislation that allow for continuous catch-up. Even though AI currently concerns only a small group of high-tech companies, it might turn out to be a more difficult issue to deal with than climate change itself, where 193 sovereign nations are involved.

Alfredo Toro Hardy, PhD, is a retired Venezuelan career diplomat, scholar and author. Former Ambassador to the U.S., U.K., Spain, Brazil, Ireland, Chile and Singapore. Author or co-author of thirty-six books on international affairs. Former Fulbright Scholar and Visiting Professor at Princeton and Brasilia universities. He is an Honorary Fellow of the Geneva School of Diplomacy and International Relations and a member of the Review Panel of the Rockefeller Foundation Bellagio Center.

Photo by Google DeepMind


References

Cordeiro, J.L. (2017). “En 2045 asistiremos a la muerte de la muerte” [In 2045 we will witness the death of death], interview with Gustavo Núñez, AECOC, November.

Flores, A. (2023). “Declaración de Bletchley: La seguridad de la IA” [The Bletchley Declaration: AI safety], Raia Diplomática, November 22.

Harari, Y.N. (2016). Homo Deus. New York: Harper Collins.

Kissinger, H.A. and Allison, G. (2023). “The Path to AI Arms Control”, Foreign Affairs, October 13.

Metz, C., Weise, K., Grant, N. and Isaac, M. (2023). “Ego, Fear and Money: How the A.I. Fuse Was Lit”, The New York Times, December 3.

Roose, K. (2023). “A.I. Belongs to the Capitalists Now”, The New York Times, November 22.

Satariano, A. (2023). “Europeans Take a Major Step Toward Regulating A.I.”, The New York Times, June 14.

Tasioulas, J., Landemore, H. and Shadbolt, N. (2023). “Bletchley declaration: international agreement on AI is a good start, but ordinary people need a say – not just elites”, The Conversation, November 7.
