Contextualizing China's AI Governance
Jing Cheng argues that the effective application of AI for governance needs to take local and national contexts into consideration. This is the seventh post in an EGG commentary series exploring how AI’s development is affecting economic, social and political decision-making around the world.
Artificial intelligence (AI), perceived as a revolutionizing force, has drawn mounting attention worldwide. China is noticeably striving for AI leadership, transitioning from “a norm-taker towards a norm-shaper, if not maker”. China’s approach to AI governance is often regarded as starkly different from the European and American approaches, with many regarding it as state-led and “the vanguard of digital authoritarianism” and highlighting the AI race between China and the US. Here, I suggest a more nuanced approach to discussing China’s AI governance policies and practices, contextualizing these within the Chinese sociocultural context, and identifying some challenges ahead. I argue that, regardless of these differences, a contextualized, inclusive framework – one that considers domestic dynamics and brings stakeholders and countries together – is beneficial for building a common digital future.
AI governance policies and practices
China's aspirations to become a great power in science and technology are closely tied to its historical experience of technological backwardness, often invoked in the discourse as “the backward will be beaten.” China consequently aspires to develop science and technology to transform the country into a sci-tech powerhouse. In the digital realm, China has invested substantially in digitalization and modernization. AI as an emerging technology offers China an opportunity to potentially lead the world in AI theories, technologies and applications.
Given that AI is ubiquitous in its applications, ranging from recommendation and navigation to general-purpose chatbots, there is a broad need to monitor and regulate it. In China, a national AI plan was issued in 2017, setting goals for achieving an AI governance system by 2030. Since then, various documents have been released, including China’s Civil Code (2020), Personal Information Protection Law (2021) and Ethical Norms for New Generation AI (2021). China has recently stepped up to propose draft measures to regulate the use of generative AI. These regulations – often issued by different AI-related bodies – reflect China’s strong impulse to regulate the privacy, data security and ethics aspects of AI.
China has also published guidelines and rules for AI standard setting. These include Guidelines for the Construction of a National New Generation Artificial Intelligence Standards System (2020) and Artificial Intelligence Standardization White Paper (2021). There are different levels of standardization at the national, industrial and enterprise levels. This is in part a reflection of the Chinese AI landscape, in which multiple stakeholders at different levels get involved in the agenda-setting and decision-making process to regulate AI.
Another notable aspect is governance by AI, an issue of importance to the Chinese government. Good governance (shanzhi) is increasingly associated with “smart government” (zhihui zhengfu), highlighting the use of technology in providing better public services and supporting better planning and decision making. Studies show that responsive e-government and providing convenient and efficient services are positively received by the public. In China, AI applications and facilities are promoted in public service systems, such as traffic and court systems, as a means of optimizing public services for citizens. One example involves a series of smart city development and governance projects carried out under partnerships between the government and tech giants such as Baidu, Alibaba and Tencent. The Hangzhou government and Alibaba have jointly built the City Brain system of urban traffic management. It helps city administrators analyze live streams of traffic and improve the incident identification accuracy rate, and it has been rolled out in at least 23 cities inside China and internationally, including in Kuala Lumpur in Malaysia.
The process of AI governance in China is not only top-down and based on a monolithic government masterplan, as is often assumed, but also involves interactions between the state and digital companies as well as other stakeholders, such as universities, research institutions and non-governmental organizations. Private digital companies make up national AI teams that promote AI innovation and shape the societal ecosystem. Provincial and municipal governments also try to connect multiple stakeholders for AI governance promotion and implementation. Peking University's Institute for Artificial Intelligence and the Optics Valley in Wuhan, for instance, have pledged to jointly build a smart social governance trial base, working with eight companies to integrate AI industries, education, research and applications.
It should also be noted that some AI governance principles being promoted in China stem from academia and industry, not necessarily from the government. An example is the Beijing AI Principles, which were released in May 2019 under the joint collaboration of the Beijing Academy of Artificial Intelligence, leading Chinese universities such as Tsinghua University, Peking University, the Chinese Academy of Sciences, and the Artificial Intelligence Industry Alliance (AIIA). In this evolving digital landscape, China’s AI governance is and will continue to be shaped by a variety of stakeholders, including central and local governments, digital companies and academia, and their interactions.
Sociocultural context matters
Although technology itself is neutral and objective, at least theoretically, effective governance of AI and effective application of AI for governance needs to take local and national contexts into consideration, such as social norms and cultural traits. This is especially so when it comes to AI ethics.
In China, the often vague, abstract slogans in AI principles reflect Chinese philosophical and cultural practices applied to AI ethics. For example, the “Beijing AI Principles” call for healthy development of AI, highlighting the importance of harmony and cooperation “so as to avoid malicious AI race, to share AI governance experience, and to jointly cope with the impact of AI with the philosophy of ‘Optimizing Symbiosis’”. Optimizing symbiosis stems from the Confucian philosophy of harmony, underlining the harmonious existence among people and the symbiotic relationship between humanity and the environment, in this case the machine. Similarly, in Tencent’s AI Principles, the concepts of “Tech for Good” and “digital well-being” (shuzi fuzhi) also highlight such a human-machine symbiosis, exploring “the balance between AI, individuals and society.” In a global landscape of human-centred AI ethics, such philosophical understandings should be noted as an important context in which AI initiatives and principles are embedded locally and nationally.
Examples of how Chinese culture is reflected in China’s approach to AI governance can be found in many AI-related documents, issued by industry or government organizations. One example is the draft Joint Pledge on Artificial Intelligence Industry Self-Discipline, released by AIIA. This document pledges to implement self-discipline and industry supervision mechanisms for AI ethics. In the AI Industry Responsibility Declaration, leading digital companies such as Baidu, Huawei and Ant Group jointly commit to pay great attention to the issue of social responsibility and to implement “self-discipline and self-governance” in AI. The Joint Pledge on Internet Information Service Algorithmic Application Self-Discipline, which is widely supported by 105 entities including industrial alliances, top digital companies and media, also attaches great importance to responsibility and self-discipline.
“Self-discipline” and “self-governance” have been highlighted as means of integrating ethical principles into all aspects of AI practices. The call for self-discipline in AI governance, also stated in the subsequent national Ethical Norms for the New Generation Artificial Intelligence, is associated with the Chinese cultural tradition of self-cultivation and humanism. It draws on the Chinese philosophical theme of the prominent role played by the individual in social development and the inner transformation of oneself for better morality and governance in an ideal social system. In these Chinese approaches to AI governance, the concepts of self-discipline and self-governance assign significant responsibilities for governing AI to governmental bodies, individuals and especially corporate actors.
As is evident, multiple stakeholders across China are shaping the emerging AI governance regime in China. China’s AI governance nonetheless faces several challenges at the national level. One major challenge is the fragmentation of governance, with different layers of regulation and different bodies for AI governance. China has five levels of government administration. Different governmental bodies and digital actors, including tech giants and startups, have their own preferences and interests in promoting AI regulations, leading to internal struggles for resources, publicity, and influence. The landscape of China’s domestic governance of AI is liable to be as fragmented and decentralised as that of China’s BRI projects.
Another challenge is the centrality of abstract philosophical concepts such as harmony and self-discipline as applied to AI regulation in China. This vagueness complicates the implementation of AI ethics and makes supervisory oversight challenging, whether self-governed at the corporate or individual level or directed by the government. As pledges and guidelines proliferate, abstract notions of self-discipline and self-governance risk proving ineffective for the regulation of AI. The ambiguity means that follow-up measures and monitoring are needed to ensure proper implementation; otherwise, the documents are more likely to produce empty talk than real effects.
An additional challenge arises from the interdisciplinary nature of AI itself. Although it is often stated clearly that an interdisciplinary perspective and joint efforts are essential, AI governance efforts in China often face difficulties when trying to implement interdisciplinarity and diversity in practice. Science and technology are highly valued in China and play a crucial consultative role in AI governance initiatives. Yet perspectives from the humanities and social sciences are also urgently needed in the formation of an effective and ethical AI governance framework for China, and such expertise remains relatively scarce in AI governance discussions. Bringing the state, industry and all relevant branches of academia together to foster open and constructive discussion would help further improve the effectiveness of the AI governance regime in China.
The conditions for AI governance, globally, are still evolving and far from clear. One area where China could make a positive contribution is in addressing existing inequalities at the global level, which have been further amplified in the age of AI. The Chinese government states that one goal of the digital transformation in China is to help tackle the worldwide digital divide between the haves and have-nots. China could promote and support efforts to address under-representation in global discussions by focusing, at home, on the most vulnerable groups, such as small and medium-sized enterprises and technologically disadvantaged populations including women and the elderly.
Another area where China could contribute more is global cooperation. Despite global awareness of the emergent governance issues, a major global challenge is that geopolitical competition is growing over the values attached to AI, digitalization, and the use and governance of advanced technology. While technology is supposed to be neutral in an ideal scenario, humans nonetheless determine the functioning of the machine. Under the current international scenario, technology is increasingly treated as value-laden, and made into a geopolitical battlefield of value competition between technologically advantaged countries. Such competition further widens divisions and creates more problems. Some countries and groups – the technologically disadvantaged – are confronted with the dilemma of taking sides. This situation hinders the development of global AI governance and could result in a dangerous global situation in which, for lack of consensus and coordination in regulating AI, the most pressing ethical considerations – those that bear on the sustainability of human life – are set aside.
More global cooperation on AI governance is needed, as are more inclusive frameworks and joint efforts involving different segments of society and between nations. To this end, it is not helpful simply to bring “like-minded nations” together; it is more constructive for nations and groups to engage and interact, seeking common ground for AI governance while reserving differences in values.
Dr. Jing Cheng is a lecturer in the School of Foreign Studies and Deputy Director of Belt and Road Communication Research Centre at Xidian University. Her research interests lie in the field of national identity, Internet politics and international communication, and she is currently focusing on artificial intelligence and digital governance. She has published articles in journals such as Journal of Contemporary China and Journal of Asian and African Studies and also written op-ed articles for China Daily and The Conversation, among others.
Photo by Saunak Shah