This article traces the evolution of the Internet from the 1990s to the 2020s and compares it with the development of Artificial Intelligence (AI), particularly following the public launch of ChatGPT in late 2022. It identifies both parallels and divergences between these two overlapping technological domains, focusing on the growing integration of AI into online applications. The central argument is that, whereas users once played a decisive role in shaping the Internet's trajectory and politics, the contemporary AI landscape is dominated by two principal stakeholders: governments and private companies. User groups, by contrast, have become largely marginalized in debates on AI policy and regulation. While users in cyberspace historically acted as co-creators, in relation to AI they have thus far been confined to the role of consumers. This shift toward elite-driven governance raises important concerns regarding the emergence of authoritarian tendencies in the development and implementation of AI and digital technologies in general.
Policy implications
For governments:
- Transparency and regulation. Strengthen rules on data protection, algorithmic transparency, and consumer privacy, for example, building on GDPR and the EU's AI Act (cf. Bjørlo et al. 2021). Regulators should require companies to explain how models are trained, what data is used, and for what purposes.
- Education and capacity-building. Integrate AI literacy into school curricula and teacher training, ensuring that younger generations develop the skills to use and critically assess AI systems. This should be treated as a matter of national education policy.
- Support for open-source ecosystems. Provide funding and infrastructure for open-source AI initiatives to reduce dependence on a few dominant players.
For the tech industry:
- Responsible innovation. Commit to ethical guidelines on fairness, explainability, and accountability in AI design. Companies must actively audit their systems for bias and take responsibility for errors.
- Collaboration with regulators. Engage constructively with governments and international bodies to co-develop standards, rather than lobbying to block regulation.
- Open access initiatives. Contribute to open-source models (such as Meta's Llama) to broaden access to AI tools for research, education, and civil society.
For user groups and civil society organizations:
- Advocacy and awareness. Mobilize to ensure that citizens have a voice in AI governance debates, whether through NGOs, online forums, or international platforms such as UNESCO's AI Observatory.
- Trust-building. Share experiences and outcomes of AI use and feed this knowledge back into governance debates to keep them grounded in public concerns.
- Empowerment through literacy. Promote community-level training and public awareness campaigns to help individuals build confidence in engaging with AI technologies.
- Data rights activism. Press for clearer rules on data ownership, ensuring individuals can know and, where possible, control how their data is used.