Governing AI: How We Become Custodians of the Code

Markus H.-P. Müller on why social literacy, democratic values, and ethical reasoning will become future-proof skills.
Current discussion of the future impact of artificial intelligence (AI) focuses on its potential effects on employment and on AI providers’ role in controlling the technology. But we also need to consider its implications for our management of knowledge itself.
Twenty years ago, when the internet started to become a dominant factor in our lives, its impact was sometimes compared to that of the invention of the printing press in the 15th century – an event which led to a revolution in access to knowledge, and thus provided a lynchpin of the original Renaissance.
AI, of course, promises to go much further and deliver a systemic transformation not only in how knowledge is distributed, but also in how it is generated, interpreted, and legitimized. In this context, I would introduce the term “meta Renaissance” as a reminder that we are not merely inventing new tools—we are entering a new epistemic architecture (i.e. the network of systems we use to judge knowledge and its verification). One aspect of AI is how the source of meaning apparently shifts from human (individual and group) judgment to algorithmic synthesis.
Does the original Renaissance offer any guide as to society’s likely response to AI? The invention of the printing press started the process of dismantling the Church’s monopoly over scripture, interpretation, and education. But it also seeded new social imaginaries – the values, institutions and laws which individuals use to visualise, define and create their society. These include our perception of the value of individual reason, scientific institutions and other supporting frameworks such as constitutional law.
AI will both seed new social imaginaries and redefine existing ones, often without our active consent. AI’s ability to generate new knowledge and meaning, not just disseminate information, means that the focus will move from who controls information to who governs the systems that generate, structure, and circulate epistemic authority.
This is not merely a technical evolution – it is a philosophical one. The philosopher Charles Taylor, amongst others, has reminded us that every society depends on a shared moral horizon or background (what he refers to as a “moral ontology”). This framework allows us to assess the moral judgements which we rely on for running a modern society (Taylor, 1989; 1995). That framework, however, is now being filtered, reframed, and increasingly rewritten by machine intelligence. Who (or what?) decides what is credible? Whose reasoning is amplified? How can we assess what counts as coherence when AI has already shaped our sense of plausibility?
In one sense, AI might be seen as a counter-Renaissance. Individuals are being conditioned to act differently by the actors who own the AI system’s architecture: in the pre-Renaissance world, that actor was God. But instead of a return to organised religion, we are now confronted with something more akin to a dystopian novel or film. Consider, for example, the Architect in the film The Matrix Reloaded: a calm, all-knowing designer who doesn’t command people but defines the rules of the world they inhabit. Today, the Architect may be the model trainer, the API designer, the platform owner. The “black box” nature of much AI – in that only the AI provider will know exactly how it works – would appear to make the threat to what we consider scientific method and objectivity even more unsettling.
As a society we therefore need to take the debate to the next meta level: starting to think, in an abstract way, about the governance of the meaning systems themselves. Consider three aspects of this governance problem. What becomes of science when hypothesis generation, data analysis, and even peer review are partially automated? What becomes of education when facts are abundant but frameworks are rare or disputed? And what becomes of work when repetitive cognitive tasks are displaced by AI and sophisticated expertise is increasingly available on demand?
In this early stage of the AI transformation, we cannot know the answers to these three difficult questions. But, as regards employment, I think we should remember that AI does not replace humans. It replaces tasks. And the demand for human strengths – creativity, empathy, ethical judgment, and narrative framing – will only grow as we try to find answers to difficult issues around AI governance. New roles will emerge: interpreters of AI output, facilitators of human–machine collaboration, ethicists, auditors, institutional designers. As regards science, governance will have multiple functions if we are concerned about the risk of “black box” approaches in effect privatizing one of humanity’s greatest public goods. Who, for example, validates findings? Who ensures reproducibility? Who guards against the silent bias of data-driven discovery? In education, we will not be concerned just with how AI can help us to establish and transmit facts – we will want it to help us cultivate frameworks: how to question, how to connect, how to decide. Across all three areas, social literacy, democratic values, and ethical reasoning will become future-proof skills.
One aspect of the original 15th century Renaissance was that it rediscovered, and then drew on, the work of authors from the classical world – for example, the Pandects of Roman law. These helped provide an intellectually coherent background for the limited number of geographies involved in the Renaissance. Establishing such a common background when the whole world is simultaneously engaged in the AI transition will be more difficult. Existing methods for measuring the desirability of social and governance issues (as in existing ESG investment frameworks) may serve as a guide, but are unlikely to be sufficient.
We are no longer just users of tools. We are becoming co-architects of systems that shape behaviour, belief, and belonging. That responsibility cannot be outsourced. As I’ve argued in earlier work (Müller, 2021a; 2021b), this means that governance is no longer the exclusive domain of lawyers, ethicists, or engineers. It has become a civic skill. Our role (as individuals and as governments) is to define direction—not just what is technically possible, but what is socially desirable. We must build institutions that can audit, correct, and realign these systems to human purposes. In short, we must become custodians of the code—not its subjects.
Markus is Global Head of the Chief Investment Office of Deutsche Bank Private Bank and in June 2022 he also took on the role of Chief Investment Officer Sustainability.
Photo by Google DeepMind
References
Conijn, R., Kahr, P., & Snijders, C. (2023). The effects of explanations in automated essay scoring systems on student trust and motivation. Journal of Learning Analytics, 10(1), 37–53.
Müller, M. H.-P. (2021a, July 8). Platform thinking – justice, competition and the time dimension. Global Policy. https://www.globalpolicyjournal.com/blog/08/07/2021/platform-thinking-justice-competition-and-time-dimension
Müller, M. H.-P. (2021b, November 30). Why we must debate the future now. Global Policy. https://www.globalpolicyjournal.com/blog/30/11/2021/why-we-must-debate-future-now
Müller, M. H.-P. (2023, May 22). Renaissance as a “cultural invariant”? Global Policy. https://www.globalpolicyjournal.com/blog/22/05/2023/renaissance-cultural-invariant
Taylor, C. (1989). Sources of the Self: The Making of the Modern Identity. Harvard University Press.
Taylor, C. (1995). Philosophical Arguments. Harvard University Press.