Tackling AI's diversity problem
Published 24/05/2019

A recent paper published by the AI Now Institute of New York University reports that we are already facing an AI "diversity disaster". AI is rapidly becoming an autonomous process operating without human oversight, which means unforeseen outcomes can quickly arise: if the initial algorithm is flawed, AI can perpetuate existing inequities, or exaggerate and reinforce negative patterns.


Removing inherent accidental bias

A machine learning algorithm is only as good as the data used to train it. If the algorithm encounters inputs or scenarios that are absent from its training data, it will be unable to respond appropriately.
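
This point can be illustrated with a deliberately tiny sketch (the feature values and labels below are invented, and a 1-nearest-neighbour rule stands in for a real model): a classifier trained only on data from one region of feature space will still force unfamiliar inputs into the labels it knows.

```python
# Toy illustration with invented data: a 1-nearest-neighbour classifier
# trained only on samples drawn from one region of feature space.

def nearest_neighbour_predict(train, query):
    """Return the label of the training point closest to `query`."""
    return min(train, key=lambda point: abs(point[0] - query))[1]

# Training set covers only feature values 0-10 (one "group").
train = [(1, "low"), (3, "low"), (8, "high"), (9, "high")]

# Inputs from the range the model has seen are handled sensibly...
print(nearest_neighbour_predict(train, 2))    # "low"

# ...but an input far outside the training distribution (a group the
# dataset never represented) is still confidently forced into one of
# the labels learned from the unrepresentative data.
print(nearest_neighbour_predict(train, 100))  # "high"
```

The model never signals that the second input lies outside anything it has seen; it simply answers, which is exactly how gaps in a training set turn into silent failures in production.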

Examples of unintended consequences are becoming more common as AI spreads into day-to-day business and transactional processes. From Amazon's sexist recruitment AI, scrapped in 2018, through facial recognition systems misidentifying or failing to recognise black faces, to the more alarming case of US criminal justice algorithms rating black defendants as posing a higher risk than white defendants, flaws in AI systems can have real-world consequences.


Improving the transparency and diversity of AI development

Google, Facebook and Microsoft have been named by the Future Today Institute as three of the nine companies driving the future of AI – but less than 4% of their US staff are black (compared with almost 13% of the US population), and their workforce gender split is still far from equal. Diversity isn't just about hiring, though; organisations seeking to drive change need to pursue inclusive policies across the board and to champion those values.

It is crucial for organisations building AI systems to ensure that a diverse range of individuals directly contribute to the process – from the inception of the system, all the way through to auditing. Drawing on different perspectives is the only way that the important questions will be asked, yet the current tech landscape does a poor job of representing the overall population.

Improving the transparency of how AI operates is also key. If a user does not understand how the algorithm has reached a decision (or the manufacturer will not share this information), then relying on it is an act of faith. There needs to be an onus on responsible operators to run audits, and to monitor what their algorithms have learned and how they reach their goals.
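
One concrete form such an audit can take is breaking a model's error rate down by demographic group, so that disparities are visible rather than hidden inside an overall average. A minimal sketch, using invented records of (group, predicted, actual) outcomes:

```python
# Hypothetical audit sketch: compare a model's accuracy across groups.
# The records below are invented; in practice they would come from a
# logged set of predictions with known real-world outcomes.

from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

print(accuracy_by_group(records))  # {'group_a': 0.75, 'group_b': 0.5}
```

An overall accuracy of 62.5% would look unremarkable here; only the per-group breakdown reveals that one group is served noticeably worse, which is the kind of finding a regular audit is meant to surface.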


AI: a reflection of diversity

The real answer to tackling AI's ethical problems is to ensure that a diverse range of people – spanning race, class and gender – are involved in building algorithms, while organisations actively strive to consider the consequences these systems may have for society.

“The industry has to acknowledge the gravity of the situation and admit that its existing methods have failed to address these problems,” Kate Crawford, a co-author of the AI Now Institute’s report, told The Guardian. “The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation.”

The proposed Algorithmic Accountability Act in the US, for example, calls for companies to be required to screen their algorithms for bias if they have more than 1 million users or $50m in annual revenue. While this may be difficult to enforce in practice, it points to the growing prominence of the issue – and calls for regulation are likely to only grow louder.

But if businesses take steps to act more responsibly, by making their AI more transparent to users and ensuring that a diverse talent pool has been involved in the creation and testing of algorithms, they may avoid the threat of regulation.


About Empiric

Empiric is a multi-award-winning business and one of the fastest growing technology and transformation recruitment agencies, specialising in data, digital, cloud and security. We supply technology and change recruitment services to businesses looking for both contract and permanent professionals.



Empiric is committed to changing the gender and diversity imbalance within the technology sector. In addition to Next Tech Girls, we proactively target skilled professionals from minority groups, which in turn can help you meet your own diversity commitments. Our active investment in the tech community allows us to engage with specific talent pools and deliver a shortlist of relevant and diverse candidates.

For more information contact 02036757777.

To view our latest job opportunities click here.
