Achieving Global AI Governance: Trustworthy AI and Lessons from the 3GPP Model
Bruno Galmar discusses global AI governance, drawing insights from the 3rd Generation Partnership Project (3GPP) in mobile telephony. As a young telecommunications engineer two decades ago, he witnessed how 3GPP specifications enabled the seamless worldwide deployment of wireless technology, though the unforeseen societal issue of mobile phone addiction emerged later. Now an academic, he warns that the deployment of generative AI, while driving economic growth, could introduce similar societal risks, such as addiction, and he cautions that contemporary Trustworthy AI summits address these risks inadequately.
One of the five main topics of the forthcoming Paris AI Action summit in February 2025 is advancing global governance of AI. On October 22, 2024, I participated in Seoul in an event titled Transformation Challenges of Education in the Age of AI, whose aim was to gather ideas in preparation for the Paris summit. I presented some of France’s ideas regarding global governance of AI in education and discussed ethical issues of AI in education.
Before delving into the specific topic of governance of AI for education, I introduced President Macron’s high-level vision for global governance of AI by quoting his speech of May 21, 2024, to a large gathering of French AI talents: “La seule bonne gouvernance pour moi est mondiale” which translates as: “The only good governance for me is global”. This statement echoes Recommendation 22 “Structure a coherent and concrete diplomatic initiative aimed at founding a global governance of AI” of the French AI commission report published in March 2024.
This report, which includes 25 recommendations, outlines the actions France must take to fulfill its ambition of playing a prominent leadership role in AI. France aims to be proactive in establishing a worldwide organization in charge of regulating AI. This organization would bring together representatives of governments, companies and civil society. It would focus on the "3S": science, solutions and standards. It would host scientific discussions on AI research and related issues, propose open solutions for operating and evaluating AI models, and define AI standards.
In pursuing this goal, France has allies: the 27 other member countries of the Global Partnership on Artificial Intelligence (GPAI) and also China, as demonstrated by France and China's joint statement on AI governance issued on May 6, 2024. Interestingly, Arthur Mensch, co-founder and CEO of the prominent French AI start-up Mistral AI, sounded measured in his statements regarding the feasibility of achieving global governance of AI (see video at 28:11): "Historiquement, c'est pas tout à fait facile d'instaurer une gouvernance mondiale," which translates as: "Historically, it is not that easy to establish global governance." However, he then cited open-source governance in software as a successful example of global governance in technology.
I would like to offer another example of global governance of technology, inspired by my prior professional experience. From 2003 to 2005, I worked as a junior telecommunications engineer on 3G phones in the mobile telephony industry. Part of my job involved reading the 3GPP (3rd Generation Partnership Project) specifications for 3G technology. Created in 1998, the 3GPP remains the main global entity responsible for specifying the standards of mobile telephony. It is composed of two main groups: the Organizational Partners, a gathering of seven standardization bodies (ARIB and TTC for Japan, ATIS for the USA, CCSA for China, ETSI for Europe, TSDSI for India and TTA for Korea), and the Market Representation Partners, representatives of the businesses involved in the mobile telephony industry, such as mobile operators, handset manufacturers and network infrastructure companies. Given the ubiquitous deployment of mobile phones in our society, the 3GPP can be seen as a model example of a partnership project that has successfully established and regulated a new technology at the worldwide level.
If the goal of AI governance is to achieve a similar worldwide technological deployment of AI technology, why not follow the 3GPP example?
Given the close connection between the AI sector and the mobile telephony industry, 3GPP's Organizational Partners could take the lead in specific technical standardization – although some AI standards already exist, defined by the ISO/IEC JTC 1/SC 42 subcommittee. The main obstacle to a similar deployment lies in the widespread concerns among some AI scientists, governments and civil society regarding the risks associated with generative AI.
During the deployment of 2G, 2.5G and 3G mobile phone technology, the primary concerns were about potential health hazards from the electromagnetic waves emitted by phones and the network base stations. There were no discussions about any potential harmful societal or ethical risks posed by 3G technology. As we know, safety guidelines and standards were established to address health concerns, and the technology was deployed with success.
Nowadays, mobile phone addiction – known as Problematic Smartphone Use (PSU) in the scientific literature – is recognized as a serious societal problem. As a teacher in higher education in Taiwan for more than a decade, I have witnessed addictive behaviors in students related to their use of smartphones that did not exist 15 years ago. As a young engineer in this industry more than 20 years ago, I never anticipated such widespread addiction to this technology.
At that time, I was reading technical documents about the seamless integration of wireless services to improve the user experience – for example, being able to watch a video smoothly while switching from the cellular network to indoor networks. I did not foresee that improving the user experience could inadvertently contribute to increasing addiction, and I do not recall reading documents assessing potential societal risks. Hence, we could say that mobile phone technology was standardized globally through the technical frameworks provided by 3GPP without real attention to the potential societal problems that its ubiquity and widespread adoption might cause. However, it is crucial to acknowledge that the responsibility for addressing addiction does not rest solely with the mobile telephony industry; it should be shared by governments, civil society, service providers and the industry itself.
Following 3GPP's model of governance for AI, with minimal input from civil society, could certainly result in additional societal problems that some AI scientists have already envisioned. Fortunately, the emerging path to global governance of AI seems radically different, with risks and ethical issues at the forefront of considerations. Aiming at global governance of Trustworthy AI would help minimize the harmful societal effects of AI and avoid repeating past mistakes, as seen in the mobile telephony sector. If successful, it could even have a positive side effect: setting a role model for the global governance of other technologies. At the core of the Trustworthy AI agenda are crucial issues for which governance should safeguard the rights of citizens: privacy, transparency, safety, security and nondiscrimination. However, minimizing the risk of introducing new addictive behaviors is not really addressed, though it is crucial for educators. Hopefully, this issue will be considered in future AI governance summits.
The speed and reach of AI deployment risk being curtailed if risks and ethical concerns must be fully addressed. This could lead some countries to hesitate to fully support a global governance framework centered on Trustworthy AI. Given that AI is seen as a key driver of economic growth, and that competition in this sector will be fierce, regulation of AI applications in industrial and other key economic sectors is likely to be lenient. Finding a middle way between AI-driven national economic growth and global governance of Trustworthy AI is the challenge every AI summit should address.
Bruno Galmar is an associate professor in the French Department at National Central University in Taiwan. He teaches introductory AI to non-science students, and his research topics include AI education, computational thinking, and science literacy. He previously worked as a telecommunications engineer in France.
Conflict of interest – I was invited by the French Embassy in Korea to give two presentations related to AI governance in education at the event Challenges of Education in the Age of AI in Seoul on October 22, 2024. This letter of opinion remains neutral regarding France's vision of AI governance, and no financial or material support was received for its writing.
Photo by Google DeepMind