The global governance of artificial intelligence (AI) depends on coordination among national governments, international organizations, and non-state actors. While existing research has mapped the institutional complexity of the emerging AI regime, public trust in the stakeholders involved remains underexplored. This study addresses that gap using parallel surveys in the United States and China, two leading AI powers locked in strategic rivalry. Results show that respondents in both countries express the highest levels of trust in their own government and the lowest in their geopolitical rival, with other actors such as the European Union, tech firms, and research institutes falling in between. These patterns reflect how geopolitical competition and intergroup dynamics shape public trust, posing challenges for inclusive and cooperative governance in contested global domains such as AI. At the same time, individuals who view AI as socially beneficial and who support international cooperation report higher trust across a broad set of actors, including rivals. These findings illuminate systematic patterns in public opinion that condition the political viability of global AI governance and suggest that narratives emphasizing shared benefits and collaboration may help bridge trust gaps.
Policy implications
- Acknowledge and address trust asymmetries: Policymakers should recognize that public trust in global AI governance is deeply shaped by national interests and geopolitical tensions. Inclusive governance frameworks must consider these asymmetries and proactively address public concerns over rival state participation.
- Emphasize AI's shared benefits: Efforts to build international consensus should foreground narratives that highlight the collective societal gains of responsible AI governance. Framing AI cooperation as mutually beneficial can help reduce perceptions of zero-sum competition.
- Promote civic education on AI and governance: Surface-level familiarity with AI does not consistently promote trust. Public education campaigns should go beyond technical literacy to explain the governance challenges and the role of international cooperation in managing AI risks.
- Support international organizations in convening leadership: Multilateral institutions like the United Nations should continue to play a central role in shaping and coordinating global AI governance. Public support for a UN-led AI agency suggests a willingness to entrust governance to international bodies perceived as impartial.
- Engage the public in deliberation: Public attitudes matter for the legitimacy of global governance frameworks. Governments and institutions should create forums for inclusive dialogue, allowing citizens to voice concerns and learn about the complexities of global AI cooperation.
- Tailor engagement strategies to national contexts: Trust patterns differ across countries. Policymakers should avoid one-size-fits-all approaches and adapt messaging and outreach strategies based on domestic attitudes toward AI and global collaboration.
 