How the UN Can Prevent AI from Automating Discrimination

The AI for Good Global Summit took place in Geneva on 8 July 2025. Credit: ITU/Rowan Farrell

 
The Summit brought together governments, tech leaders, academics, civil society and young people to explore how artificial intelligence can be directed toward the Sustainable Development Goals (SDGs) – and away from the growing risks of inequality, disinformation and environmental strain, according to the UN.

 
“We are the AI generation,” said Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union (ITU) – the UN’s specialized agency for information and communications technology – in a keynote address. But being part of this generation means more than just using these technologies. “It means contributing to this whole-of-society upskilling effort, from early schooling to lifelong learning,” she added.

By Chimdi Chukwukere and Gift A. Nwamadu
ABUJA, Nigeria, Aug 14 2025 – Artificial Intelligence (AI) is reshaping the world at a speed we’ve never seen before. From helping doctors detect diseases faster to customizing education for every student, AI holds the promise of solving many real-world problems. But along with its benefits, AI carries a serious risk: discrimination.

As the global body charged with protecting human rights, the United Nations—especially the UN Human Rights Council and the Office of the High Commissioner for Human Rights (OHCHR)—has a unique role to play in ensuring AI is developed and used in ways that are fair, inclusive, and just.

The United Nations must declare AI equity a Sustainable Development Goal (SDG) by 2035, backed by binding audits for member states. The stakes are high. A 2024 Stanford study warns that if AI bias is left unchecked, 45 million workers could lose access to fair hiring by 2030, and 80 percent of those affected would be in developing countries.

The Promise—and Peril—of AI

At its core, AI is about using computer systems to solve problems or perform tasks that would ordinarily require human intelligence. Algorithms drive the systems that make this possible—sets of instructions that help machines make sense of the world and act accordingly.

But there’s a catch: algorithms are only as fair as the data they are trained on and the humans who design them. When the data reflects existing social inequalities, or when developers overlook diverse perspectives, the result is biased AI. In other words, AI that discriminates.

Take, for example, facial recognition systems that perform poorly on people with darker skin tones. Or hiring tools that favor male candidates because they’re trained on data from past hires in male-dominated industries.

Or a LinkedIn identity-verification system that accepts only NFC-enabled national passports, documents the majority of Africans don’t yet possess. These are more than technical glitches; they are human rights issues.

What the UN Has Already Said

The UN is not starting from scratch on this. The OHCHR has already sounded the alarm. In its 2021 report on the right to privacy in the digital age, the OHCHR warned that poorly designed or unregulated AI systems can lead to violations of human rights, including discrimination, loss of privacy, and threats to freedom of expression and thought.

The report asked powerful questions we must keep asking:

    • How can we ensure that algorithms don’t replicate harmful stereotypes?
    • Who is responsible when automated decisions go wrong?
    • Can we teach machines our values? And if so, whose values?

These are vital, practical questions that go to the heart of how AI will shape our societies and who will benefit or suffer as a result, and we commend the UN for posing them.

UNESCO, another UN agency, has also taken a bold step by adopting the Recommendation on the Ethics of Artificial Intelligence, the first global standard-setting instrument of its kind. The Recommendation emphasizes the need for fairness, accountability, and transparency in AI development, and calls for banning AI systems that pose a threat to human rights.

This is a good start. But the real work is just beginning.

The Danger of Biased Data

A major driver of AI discrimination remains biased data. Many AI systems are trained on historical data—data that often reflects past inequalities. If a criminal justice algorithm is trained on data from a system that has historically over-policed Black communities, it will likely reproduce that pattern.
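
To see the mechanics, consider a minimal sketch in Python, using entirely synthetic data rather than any real system: a model trained on past hiring decisions that favored one group will score an identically qualified candidate from the disfavored group lower, even though qualifications are distributed identically across both groups.

```python
# Minimal sketch of bias inheritance: synthetic data only, not any real system.
# Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical qualification distributions, but historical
# hiring decisions gave group 0 a large unearned boost.
group = rng.integers(0, 2, n)                  # 0 = favored, 1 = marginalized
qualification = rng.normal(0.0, 1.0, n)        # same distribution for both
past_hired = qualification + 0.8 * (group == 0) + rng.normal(0, 0.5, n) > 0.5

# Train on the biased historical labels, with group membership as a feature.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, past_hired)

# Score two candidates with identical qualifications, differing only in group.
candidates = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(candidates)[:, 1])   # group 1 scores markedly lower
```

The model is doing exactly what it was asked to do: faithfully predict past decisions. That is the trap: faithfulness to a biased history is itself discrimination.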

Even well-meaning developers can fall into this trap. If the teams building AI systems lack diversity, they may not recognize when an algorithm is biased or may not consider how a tool could impact marginalized communities.

That’s why it’s not just about better data. It’s also about better processes, better people, and better safeguards.

Take the ongoing case with Workday as an example.

When AI Gets It Wrong: 2024’s Most Telling Cases

In one of the most significant AI discrimination cases moving through the courts, Mobley v. Workday, the plaintiff alleges that Workday’s popular AI-based applicant recommendation system violated federal antidiscrimination laws because it had a disparate impact on job applicants based on race, age, and disability.

Judge Rita F. Lin of the US District Court for the Northern District of California ruled in July 2024 that Workday could be an agent of the employers using its tools, which subjects it to liability under federal anti-discrimination laws. This landmark decision means that AI vendors, not just employers, can be held directly responsible for discriminatory outcomes.

In another case, University of Washington researchers found significant racial, gender, and intersectional bias in how three state-of-the-art large language models ranked resumes. In the 2024 study, the researchers fed the models identical resumes, varying only the names to suggest different racial and gender identities, and found that the models favored white-associated names over equally qualified candidates with names associated with other racial groups.
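
The study’s method amounts to a paired audit: hold the resume text fixed, vary only the name, and compare the scores the system returns. A minimal harness might look like the following sketch, where the names, group labels, and the `score_resume` stand-in are illustrative rather than drawn from the study:

```python
# Sketch of a paired name-swap audit. `score_resume` is a stand-in for the
# system under test (an LLM ranking prompt, a vendor screening API, etc.).
from statistics import mean

RESUME_TEXT = "Software engineer, 10 years' experience, BSc Computer Science."

# Hypothetical name lists; a real audit would use larger, validated lists.
NAME_GROUPS = {
    "white-associated": ["Todd Becker", "Claire Sullivan"],
    "Black-associated": ["Darnell Washington", "Keisha Robinson"],
}

def audit(score_resume) -> dict:
    """Mean score per name group for the same resume text.

    Because only the name changes, any gap between groups is
    attributable to the name alone."""
    return {
        group: mean(score_resume(name, RESUME_TEXT) for name in names)
        for group, names in NAME_GROUPS.items()
    }

# Demo with a trivial stand-in scorer; swap in the real model to run an audit.
print(audit(lambda name, resume: len(resume) / 100))
```

In practice, auditors repeat this over many resumes and name pairs and test whether the gap is statistically significant, rather than relying on any single comparison.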

The financial impact is staggering. In a 2024 DataRobot survey of over 350 companies, 62 percent reported losing revenue to AI systems that made biased decisions, proving that discriminatory AI isn’t just a moral failure—it’s a business disaster. For a technology this young to already be producing losses on that scale should give everyone pause.

Time is running out. If the Stanford projection cited above holds, 45 million workers could be pushed out of fair hiring by 2030, with 80 percent of them living in developing countries. The UN needs to act now, before these predictions become reality.

What the UN Can—and Must—Do

To prevent AI discrimination, the UN must lead by example and work with governments, tech companies, and civil society to establish global guardrails for ethical AI.

Here’s what that could look like:

    1. Develop Clear Guidelines: The UN should push for global standards on ethical AI, building on UNESCO’s Recommendation and the OHCHR’s findings. These should include rules for inclusive data collection, transparency, and human oversight.
    2. Promote Inclusive Participation: The people building and regulating AI must reflect the diversity of the world. The UN should set up a Global South AI Equity Fund to provide resources for local experts to review and assess tools such as LinkedIn’s NFC passport verification. Working with the Smart Africa Alliance, the goal would be to create shared standards that ensure AI is designed to benefit the communities hit hardest by biased systems. This means including voices from the Global South, women, people of color, and other underrepresented groups in AI policy conversations.
    3. Require Human Rights Impact Assessments: Just as we assess the environmental impact of new projects, we should assess the human rights impact of new AI systems—before they are rolled out.
    4. Hold Developers Accountable: When AI systems cause harm, there must be accountability, including legal remedies for those unfairly treated by AI. The UN should create an AI Accountability Tribunal within the Office of the High Commissioner for Human Rights to investigate cases where AI systems cause discrimination. The tribunal should have the authority to issue penalties, such as suspending UN partnerships with companies that violate these standards, in cases like Workday’s.
    5. Support Digital Literacy and Rights Education: Policymakers and citizens need to understand how AI works and how it might affect their rights. The UN can help promote digital literacy globally so that people can push back against unfair systems.
    6. Mandate Intersectional Audits: AI systems should be required to undergo intersectional audits that check for combined biases, such as those linked to race, disability, and gender. The UN should also fund organizations to create open-source audit tools that can be used worldwide; a minimal sketch of such a check follows this list.
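
What would such an audit check in practice? One widely used heuristic is the four-fifths rule from US employment-selection guidance: no subgroup’s selection rate should fall below 80 percent of the best-off subgroup’s rate. Here is a minimal sketch, computed over hypothetical intersectional subgroups of race, gender, and disability status:

```python
# Sketch of an intersectional disparate-impact check using the four-fifths
# rule. The decision records here are hypothetical placeholders.
from collections import defaultdict

# Each record: (race, gender, disability_status, was_selected)
decisions = [
    ("white", "male",   "no-disability", True),
    ("white", "female", "no-disability", True),
    ("Black", "female", "disability",    False),
    # ... a real audit would run over thousands of logged decisions
]

def selection_rates(records):
    """Selection rate for each intersectional subgroup."""
    totals, hits = defaultdict(int), defaultdict(int)
    for race, gender, disability, selected in records:
        key = (race, gender, disability)
        totals[key] += 1
        hits[key] += selected
    return {key: hits[key] / totals[key] for key in totals}

def four_fifths_violations(rates):
    """Subgroups selected at under 80% of the best subgroup's rate."""
    best = max(rates.values())
    return {key: rate for key, rate in rates.items() if rate < 0.8 * best}

print(four_fifths_violations(selection_rates(decisions)))
```

Checking subgroups jointly matters because a system can look fair on race and gender separately while failing badly for, say, Black women with disabilities; that combined bias is precisely what an intersectional audit is designed to surface.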

The Road Ahead

AI is not inherently good or bad. It is a tool, and like any tool, its impact depends on how we use it. If we are not careful, AI could make our problems harder to solve, deepen existing inequalities, and create new forms of discrimination that are harder to detect and harder to fix.

But if we take action now—if we put human rights at the center of AI development—we can build systems that uplift, rather than exclude.

Ahead of the UN General Assembly meeting in September, the United Nations must declare AI equity a Sustainable Development Goal (SDG) by 2035, backed by binding audits for member states. The time for debate is over; the era of ethical AI must begin now.

The United Nations remains the organization with the credibility, the platform, and the moral duty to lead this charge. The future of AI—and the future of human dignity—may depend on it.

Chimdi Chukwukere is a researcher, civic tech co-founder, and advocate for digital justice. His work explores the intersection of technology, governance, and social justice. He holds a Master’s in Diplomacy and International Relations from Seton Hall University and has been published in Politics Today, International Policy Digest, and the Diplomatic Envoy.

Gift A. Nwamadu is a Mastercard Foundation Scholar at the University of Cambridge, where she is pursuing an MPhil in Public Policy with a focus on inclusive innovation, gender equity, and youth empowerment. A Youth for Sustainable Energy Fellow and Aspire Leader Fellow, she actively bridges policy and grassroots action. Her work has been published by the African Policy and Research Institute, addressing systemic barriers to inclusive development.

IPS UN Bureau

 

