Emerging Consensus on ‘Ethical AI’: Human Rights Critique of Stakeholder Guidelines

Voluntary guidelines on 'ethical practices' have been stakeholders' principal response to growing concern over the harmful social consequences of artificial intelligence (AI) and digital technologies. Issued by dozens of actors from industry, government and professional associations, these guidelines are creating a consensus on core standards and principles for the ethical design, development and deployment of AI. Using human rights principles (equality, participation and accountability) and attention to the right to privacy, this paper reviews 15 guidelines preselected as being strongest on human rights and on global health. We find that about half ground their standards in international human rights law and incorporate the key principles; even these could go further, especially in suggesting ways to operationalize them. Those that adopt an ethics framework are particularly weak in laying out standards for accountability, often focusing on 'transparency' while remaining silent on enforceability and on the participation that would effectively protect the social good. Such guidelines tend to invoke human rights as a rhetorical device that obscures the absence of enforceable standards and accountability measures, and confine their attention to the single right to privacy. These 'ethics' guidelines, issued disproportionately by corporations and other interest groups, are also weak on addressing inequalities and discrimination. We argue that voluntary guidelines are creating a set of de facto norms, and a reinterpretation of the term 'human rights', for what counts as 'ethical' practice in the field. This exposes an urgent need for action by governments and civil society to develop more rigorous standards and regulatory measures, grounded in international human rights frameworks, capable of holding Big Tech and other powerful actors to account.

Policy Implications

  • The emerging consensus on 'ethical AI' is problematic for its lack of grounding in international human rights law and its weak emphasis on accountability and participation. Guidelines need to be strengthened on these points so that they can be used to defend the public interest and to hold accountable the powerful private and public bodies involved in the design, development and deployment of AI.
  • AI guidelines need to address AI's potential to widen socio-economic inequality, not just discrimination. Capacity and resource constraints in the use of AI-enabled technologies are a neglected issue in AI guidelines and debates; these constraints are likely to widen inequalities within and between countries.
  • Ethics guidelines that claim to respect human rights should be scrutinized for how well they incorporate the essential principles and standards: anchoring in international human rights legal instruments, accountability, participation, privacy and equality. Those that do not are merely 'ethics branding' themselves as committed to human rights.
  • Governance of AI design, development and deployment requires a robust human rights framework, not one based on 'ethics' as an open-ended concept, in order to protect the public interest from the threats of harmful applications.
