AI must not become a driver of human rights abuses
It is the responsibility of AI companies to ensure their products do not facilitate violations of human rights.
On May 30, the Center for AI Safety released a public warning of the risk artificial intelligence poses to humanity. The one-sentence statement, signed by more than 350 scientists, business executives and public figures, asserts: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
It is hard not to sense the brutal double irony in this declaration.
First, some of the signatories warning about the end of civilisation – including the CEOs of Google DeepMind and OpenAI – represent the very companies responsible for creating this technology in the first place. Second, it is exactly these same companies that have the power to ensure that AI actually benefits humanity, or at the very least does not do harm.
They should heed the advice of the human rights community and immediately adopt a due diligence framework that helps them identify, prevent, and mitigate the potential negative impacts of their products.
While scientists have long warned of the dangers AI poses, it was not until the recent release of new Generative AI tools that a larger part of the general public realised the negative consequences they can have.
Generative AI is a broad term describing “creative” algorithms that can themselves generate new content, including images, text, audio, video and even computer code. These algorithms are trained on massive datasets and then use that training to create outputs that are often indistinguishable from “real” data – rendering it difficult, if not impossible, to tell whether a piece of content was generated by a person or by an algorithm.
To date, Generative AI products have taken three main forms: tools like ChatGPT, which generate text; tools like DALL-E, Midjourney and Stable Diffusion, which generate images; and tools like Codex and Copilot, which generate computer code.
The sudden rise of new Generative AI tools has been unprecedented. The ChatGPT chatbot developed by OpenAI took less than two months to reach 100 million users. This far outpaces the initial growth of popular platforms like TikTok, which took nine months to reach as many people.
Throughout history, technology has helped advance human rights but has also created harm, often in unpredictable ways. When internet search tools, social media, and mobile technology were first released, and as their adoption and accessibility grew, it was nearly impossible to predict many of the distressing ways in which these transformative technologies would become drivers and multipliers of human rights abuses around the world.
Meta’s role in the 2017 ethnic cleansing of the Rohingya in Myanmar, for example, and the use of nearly undetectable spyware to turn the mobile phones of journalists and human rights defenders into 24-hour surveillance machines are both consequences of the introduction of disruptive technologies whose social and political implications had not been given serious consideration.
Learning from these developments, the human rights community is calling on companies developing Generative AI products to act immediately to stave off any negative consequences their products may have for human rights.
So what might a human rights-based approach to Generative AI look like? Based on evidence and examples from the recent past, we suggest three steps.
First, in order to fulfil their responsibility to respect human rights, these companies must immediately implement a rigorous human rights due diligence framework, as laid out in the UN Guiding Principles on Business and Human Rights. This includes proactive and ongoing due diligence to identify actual and potential harms, transparency regarding those harms, and mitigation and remediation where appropriate.
Second, companies developing these technologies must proactively engage with academics, civil society actors, and community organisations, especially those representing traditionally marginalised communities.
Although we cannot predict all the ways in which this new technology may cause or contribute to harm, we have extensive evidence that marginalised communities are the most likely to suffer the consequences. Early versions of ChatGPT exhibited racial and gender bias, suggesting, for instance, that Indigenous women are “worth” less than people of other races and genders.
Active engagement with marginalised communities must be part of the product design and policy development processes, so that companies can better understand the potential impact of these new tools. It cannot come only after companies have already caused or contributed to harm.
Third, the human rights community itself needs to step up. In the absence of regulation to prevent and mitigate the potentially dangerous effects of Generative AI, human rights organisations should take the lead in identifying actual and potential harm. This means that human rights organisations should themselves help to build a body of deep understanding around these tools and develop research, advocacy, and engagement that anticipate the transformative power of Generative AI.
Complacency in the face of this revolutionary moment is not an option – but neither, for that matter, is cynicism. We all have a stake in ensuring that this powerful new technology is used to benefit humanity. Implementing a human rights-based approach to identifying and responding to harm is a critical first step in this process.
The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial stance.