
Insights from the HeForShe Summit: Disrupting Bias in Artificial Intelligence

Panel 2 at HeForShe Summit

At the recent HeForShe Summit, leaders from the public and private sectors came together to address growing concerns surrounding artificial intelligence (AI) and the imperative to develop the technology with safety and inclusivity as priorities. The panel discussion "Disrupting Bias in Artificial Intelligence" explored both AI's ability to create innovative opportunities and its propensity to exacerbate existing gender inequalities and stereotypes.

AI language models are trained on previously published material, so any biases or patterns present in that data are not merely replicated but amplified. As Sasha Luccioni, Research Scientist and Climate Lead at the machine learning company Hugging Face, highlighted, "AI bias doesn't come from thin air - it comes from the patterns we perpetuate in our societies."

Given how widespread these biases are, it should come as no surprise that AI, in attempting to mimic human expression, reproduces stereotypes. Leonardo Nicoletti, a data visualization journalist at Bloomberg, pointed out that generative artificial intelligence not only replicates stereotypes but magnifies them. A telling example: when AI image software was asked to generate images of "judges," only 3% of the pictures it produced depicted women.

"Generative artificial intelligence doesn't just replicate stereotypes or disparities that you see in the real world, it actually exacerbates these and makes them appear much worse than they really are." - Leonardo Nicoletti, data visualization journalist, Bloomberg.

Maya Nicole Dummet, a student at Harvard University, echoed this sentiment, emphasizing that algorithmic biases, including gender bias and its intersections with race and socioeconomic status, have far-reaching negative impacts that extend well beyond the workplace. These issues demand attention and deliberate disruption, especially as the use of AI continues to grow.

One response to this challenge, voiced by Joakim Reiter, Vodafone's Chief External and Corporate Affairs Officer, is to set clear rules and a clear ethical compass for AI projects.

"Frankly, we need guardrails," he said. "You cannot allow companies or individuals to have a free-for-all and experiment with something that has an impact on society." He added: "Companies are not in isolation. When you launch products and services, you have a responsibility to understand how those products and services interact with society, including societal norms and discrimination and biases in society."

Dr. Joy Buolamwini, Founder of the Algorithmic Justice League and author of "Unmasking AI," called upon leaders to comprehensively understand legacy systems. This knowledge is vital as we adopt new solutions and discuss the future, ensuring we actively scrutinize and rectify the current systems. 

The HeForShe Summit highlighted the critical need to address AI biases and promote ethical AI technology. Biases are deeply ingrained in the data from which AI learns, and as such they must be tackled with vigilance and a commitment to respect and inclusivity. By establishing clear rules and ethical guidelines and actively scrutinizing existing systems, we can harness the potential of AI while ensuring it contributes to a more equitable and fair future for all.

Watch the full recording here:
