Artificial Intelligence and Corporate Human Rights Self-Regulation Initiatives: The Dangers of Letting Business go on as Usual

Research output: Working paper › Research

Abstract

Human rights concerns are increasingly foregrounded in conversations around powerful emerging technologies such as artificial intelligence (‘AI’). There is an acknowledgement that corporations designing and deploying such technologies have the ability to adversely affect the exercise and protection of the human rights not only of their users but also of society at large. In response, corporations increasingly adopt ethical principles to navigate their way around human rights impacts and are invited to consider various permutations of ‘human-rights-by-design.’ Industry and multi-stakeholder initiatives are similarly mushrooming, calling for the compliance of AI systems with human rights and ethical standards.

While these corporate and industry-led initiatives are laudable, this article identifies the issues that arise when we leave businesses to define and implement the parameters of rights protection. Corporations seeking to self-regulate demonstrate incompatible interests, which manifest themselves in the subjective interpretations corporations take of values and rights, lacking consistency and coherency. Maintaining public trust in their commercial AI systems serves merely an instrumental purpose in furthering their raison d’être of maximizing shareholder value, through obfuscation of underlying business models. In turn, corporate actors exercise almost exclusive control in circumscribing rights through the architectural framing of their platforms and technologies, which defines and limits the rights space by default. It follows that, despite the adoption of ethical principles extolling transparency and fairness, trade secrets remain a defining feature of these new technologies, as corporations also seek to outpace each other in the AI race. At a macro level, new technologies can introduce potential new threats that are not easily captured by the existing human rights vernacular and that may deny meaningful protection to the groups most vulnerable to these new harms. When such private technologies are deployed in the public sector, further questions arise as to the legitimacy of private actors in determining matters of public good. Realized harms, in turn, lack an effective remedy owing to the absence of enforcement mechanisms for rights watered down into ethical principles.

The article argues that trusting the corporate sector to shape the parameters of, and navigate, the rights space in the age of AI through ethics is misplaced, as it undermines the foundational underpinnings and protection mechanisms of human rights. In other words, it cannot be business as usual.
Original language: English
Publication status: In preparation - 2020

Keywords

  • Faculty of Law
  • artificial intelligence
  • corporate self-governance
