Is artificial intelligence facing a diversity crisis?

AI has come a long way since its inception in the 1950s, transforming our lives and work. Yet, amidst its rapid progress, the crucial need for ethical standards is often overlooked. In our latest blog, FINBOURNE explore what truly makes a machine intelligent and the pivotal role of data quality in AI model development. With input from Diversity, Equity and Inclusion (DEI) data provider Denominator, FINBOURNE shine a spotlight on AI’s intersection with ethics and society, and its direct impact on DEI efforts in 2023.

Author: FINBOURNE – www.finbourne.com


Artificial Intelligence (AI) as a concept has been around since the 1950s. Since then, it has gained huge momentum, revolutionising the way we live and work. However, despite its widespread use, little work has been done to set ethical standards. This has prompted calls for AI systems that handle data responsibly and power outcomes that are inclusive of all of society.

Whilst many view the advancement of AI as inevitable, recent months have seen experts call for an immediate pause in development. An open letter, signed by over 1,000 experts, warned that AI labs are locked in an ‘out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control’.1

In this blog, we explore what makes a machine truly ‘intelligent’, and why data quality is crucial in the development and training of AI models. We also highlight the relationship between AI and societal and ethical considerations, and its direct impact on Diversity, Equity and Inclusion (DEI) efforts in 2023.


At the heart of AI is data

New machine learning models that handle data are being developed and trained constantly. However, the data that sits behind them often isn’t refreshed with the same rigour. AI can only deliver value based on the data on which it has been trained, so high-quality data that is representative of the real world is crucial to its success; without it, the model’s intelligence is flawed.
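To make this concrete, a training set can be audited for representativeness before any model is trained. The sketch below is a minimal illustration in Python with pandas; the ‘gender’ column, the groups and the 50/50 baseline are all hypothetical, not drawn from any real dataset.

```python
import pandas as pd

def audit_representation(df: pd.DataFrame, column: str, baseline: dict) -> pd.DataFrame:
    """Compare each group's share of the data against a reference baseline."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in baseline.items():
        actual = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "expected_share": expected,
            "actual_share": round(actual, 3),
            "gap": round(actual - expected, 3),
        })
    return pd.DataFrame(rows)

# Toy example: a resume dataset that is 80% male shows a 30-point gap
# against an assumed 50/50 population baseline.
resumes = pd.DataFrame({"gender": ["male"] * 80 + ["female"] * 20})
print(audit_representation(resumes, "gender", {"male": 0.5, "female": 0.5}))
```

A gap like this, left unaddressed, propagates into every downstream decision the model makes.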

While AI might allow humans to make faster decisions, if the systems are not trained on representative, real-world data, they simply inherit the biases of the humans who taught them. The decisions being made therefore aren’t necessarily better; arguably, they are more damaging to society.


Let’s take some examples:

It was revealed that Amazon’s AI-powered résumé screening tool showed bias against women. The system downgraded résumés containing terms such as “women’s”, or naming all-women’s colleges, which resulted in qualified female candidates being overlooked.

A major U.S. technology company also claimed a 97% accuracy rate for its facial recognition system, but its training set was found to be more than 77% male and more than 83% white. This could lead to biased assessments in AI-powered video interviews, where facial and speech recognition algorithms may struggle to analyse the facial expressions or dialects of candidates from different racial or ethnic backgrounds.

Unlike humans, AI lacks the ability to make decisions based on social and ethical factors; without human intervention, it simply inherits the same historic bias. Some hiring algorithms may also treat certain variables as proxies for race (e.g. zip codes or the school attended), indirectly influencing the AI system’s decision-making process and potentially resulting in racial bias. It’s clear that without high-quality, inclusive data, AI’s effectiveness is severely limited, and potentially damaging in real-life scenarios.
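To illustrate the proxy problem, the sketch below (with entirely invented records) shows how a simple cross-tabulation can expose a seemingly neutral feature that encodes a protected attribute, before any model is trained. The column names and values are assumptions for illustration.

```python
import pandas as pd

# Invented applicant records: ethnicity is *not* a model feature here,
# but residential segregation can make zip code highly informative of it.
applicants = pd.DataFrame({
    "zip_code":  ["60601", "60601", "60601", "60629", "60629", "60629"],
    "ethnicity": ["white", "white", "asian", "black", "black", "hispanic"],
})

# Share of each ethnicity within each zip code: rows that are close to
# deterministic mean a model trained on zip code can act on ethnicity
# indirectly, even though ethnicity was never supplied as an input.
proxy_check = pd.crosstab(applicants["zip_code"], applicants["ethnicity"], normalize="index")
print(proxy_check)
```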


Bevon Joseph, Head of Innovation at Skillful.ly, a platform committed to diversity and inclusion in hiring practices, and founder of the Greenwood Project, a program which empowers marginalised youth to access opportunities in finance, says:

“We must hold AI to a standard that fosters equal access and opportunity for all candidates. This includes scrutiny of training data to identify and rectify biased patterns, continuous monitoring of algorithmic outputs, and active involvement of human evaluators. Fostering collaboration between AI experts and diversity advocates is paramount to designing algorithms that align with ethical and equitable principles.”

Bevon Joseph, Head of Innovation at Skillful.ly

To meet these goals, how can we prevent unconscious bias from entering the realm of AI in the first place? After all, it arises from the humans who shape the data sets.


Unconscious bias in AI stems from people, not AI

AI, like many STEM fields (science, technology, engineering and mathematics), has traditionally been white and male dominated. It is estimated that, globally, only 22% of jobs in the field of AI are held by women. Whilst there is a major focus on increasing diversity in the field, as AI has developed it has been skewed by underrepresentation and bias.

Data sets are not neutral: they encompass the world view of those who collect and scrape the data, and of those who label it. Whilst we may intend for AI to set us free from human bias, we are instead transferring human bias into AI, one data set at a time.

As time goes on, AI continues to work on the historical data it has collected and the patterns it has built. Whilst it may be unintentional, unconscious bias is perpetuated, putting those who already face discrimination at an even greater disadvantage. This heightens injustice and works against any DEI efforts in place.

The bias that exists in AI is only a reflection of those who taught it. The responsibility therefore lies with people to ensure the data behind the technology is diverse and inclusive.


Leveraging AI for positive change

While AI holds immense potential, it’s crucial to address its limitations. It’s clear that the higher the quality of well-labelled data, the more valuable the AI output will be. AI models should be trained on diverse and representative datasets and undergo thorough testing to identify and address potential limitations. By actively monitoring and refining these systems, we can ensure AI models reinforce equality rather than counteract it.
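As one hedged example of what ‘actively monitoring’ could look like in practice, the sketch below compares selection rates across groups in a model’s logged decisions. The log format, field names and alert threshold are all assumptions for illustration, not a prescribed implementation.

```python
from collections import defaultdict

def demographic_parity_gap(decisions: list) -> float:
    """Largest difference in positive-outcome rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        positives[d["group"]] += d["selected"]
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical decision log: each entry records the candidate's group
# and whether the model selected them (1) or not (0).
log = [
    {"group": "men", "selected": 1}, {"group": "men", "selected": 1},
    {"group": "men", "selected": 0}, {"group": "women", "selected": 1},
    {"group": "women", "selected": 0}, {"group": "women", "selected": 0},
]

gap = demographic_parity_gap(log)
if gap > 0.2:  # the alert threshold is an arbitrary example value
    print(f"Selection-rate gap of {gap:.0%} exceeds threshold; review the model")
```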


The Denominator view

“At Denominator we use AI to drive insights, optimise processes and further expand DEI data availability for customers and partners. AI delivers value, but only if it is deployed correctly. At Denominator we spent more than two years creating the world’s largest DEI decision training dataset before we started to train our AI models in this area. We did this to ensure we didn’t train the AI on biased data, which would then create more biased data and a self-fulfilling negative spiral. Our systems still require human intervention, and their decisions funnel back into the models for continuous training and improvement. We think of it more as a symbiosis between tech and people that enables us to maximise the value of AI.”

Anders Rodenberg, CEO of Denominator


The FINBOURNE view

“Access to good-quality data is essential for building AI models. This is true for positive DEI outcomes too. Firms struggle with fragmented legacy system estates, where data is locked in silos behind walls of technical and operational restrictions. A modern data stack allows firms to change this, yielding actionable insights with clear data lineage, to ensure a good feedback loop to improve learning and models.”

Thomas McHugh, CEO and Co-founder of FINBOURNE Technology
