Christina Blacklaws considers whether we are sleepwalking into a nightmare world of technological bias. Christina is the former President of the Law Society of England and Wales and a business adviser to a range of boards and businesses on transformational change, technology development, and diversity and inclusion.

The future is biased?

There’s fascinating research which shows that, from stab vests to the size of our mobile phones to crash test dummies, the world is basically designed for men. 1 The recent debacle over PPE in hospitals being designed for the typically larger male frame, even though a very high proportion of hospital staff are female, shows this problem hasn’t gone away, despite it being the subject of a detailed TUC report in 2017. 2

Women account for half of the global population but are by no means the dominant force. And this rings true in the UK and in our world, the law, where although we women make up more than half of practising lawyers, only 31% are partners and an even smaller number are business owners.

I wanted to get to grips with this when I became President of the Law Society of England and Wales. So we conducted the largest-ever global survey on the issue and held 250 roundtables in 19 countries to discover why female lawyers were not reaching positions of seniority commensurate with our desires, our knowledge, our experience and our sheer numbers. 3

And what did we find? Systemic bias.

I’d assumed that the lives of women lawyers across the globe would differ wildly, but our research found that the challenges and experiences of female lawyers around the world are very similar. We face social and cultural expectations about how we should behave and what we can and should do because we are female.

In every country, this gender stereotyping prejudiced working women, particularly those who had children, and prevented capable, excellent, committed women from realising their ambitions.

How wrong is it that, today, anyone is unable to progress because of their gender, race or any other protected characteristic? Surely we must all agree that talent is equally spread and that everyone should have the same chances and opportunities as everyone else, yet in an esteemed profession like the law, we found that not to be the case for women.

This is all really concerning but what I want to focus on is how it could get an awful lot worse!

Most of us live with the hope and expectation that things will improve with the greater use of technology: that computers are objective, unbiased and fair, and that their decisions can be relied on. However, I’m worried that as we move to an increasingly automated, data-driven world where the computer decides almost everything, all the bias that we know exists could be hard-wired into future decision-making.

Bias, frankly, blights our world and makes it an unfair and unkind place for people who aren’t from the dominant culture – and this includes women, people of colour and the disabled, to name but a few.

So, the stakes are high, and the not-too-distant future could be really scary.

First off, let’s understand how computers make decisions

The computer is programmed with sets of instructions called algorithms. The algorithms are fed information so that they can learn. They can then apply what they have learned to any other information and come up with answers. Well, what’s wrong with that, you may ask? It all seems perfectly fair, reasonable and objective.
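
To make that concrete, here is a deliberately tiny sketch in Python. Everything in it is invented for illustration and stands in for no real system, but it shows the cycle just described: the program is fed past examples, derives a rule from them, and then applies that rule to a case it has never seen.

# A toy illustration of the learn-then-apply cycle described above.
# All data and the 'rule' are invented purely for illustration.

# Step 1: feed the algorithm past information (loan applications and their outcomes).
training_data = [
    ({"income": 60_000, "missed_payments": 0}, "approve"),
    ({"income": 45_000, "missed_payments": 1}, "approve"),
    ({"income": 30_000, "missed_payments": 3}, "reject"),
    ({"income": 25_000, "missed_payments": 4}, "reject"),
]

# Step 2: 'learning' here is simply finding a boundary that separates past outcomes.
approved_max = max(app["missed_payments"] for app, outcome in training_data
                   if outcome == "approve")
rejected_min = min(app["missed_payments"] for app, outcome in training_data
                   if outcome == "reject")
threshold = (approved_max + rejected_min) / 2   # the learned decision boundary: 2.0

def decide(application: dict) -> str:
    """Step 3: apply the learned rule to a case the algorithm has never seen."""
    return "reject" if application["missed_payments"] >= threshold else "approve"

print(decide({"income": 40_000, "missed_payments": 2}))   # the computer 'decides'

Nothing in this little program is malicious; it simply reproduces whatever patterns sit in the examples it was given – which is exactly where the two problems below come in.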

But there are two big problems here:

■ how the algorithms are programmed and by whom,

■ the information – the data – they are fed on.

Picture a computer programmer – I bet you’re not thinking of someone who looks like me, and you’d be right. According to the latest Office for National Statistics data, of the nearly 1 million people employed in IT in the UK, only 18% are female. In the US, in Silicon Valley, white men represent over 80% of executives.

So, most of the people writing the programmes, and their bosses, come from one particular group, and this can lead to serious blind spots and potential bias.

And just like stab vests and cars designed for men’s bodies, this can be really dangerous for people who don’t come from that dominant group.

The second problem relates to the information / data that is used to train the algorithms. One example is now quite famous. Amazon, as a tech company, quite understandably wanted to use technology to ensure they got the best candidates for the top jobs, so they started to use algorithms to sort through CVs. 4

But the algorithms were trained on the CVs received from ultimately successful candidates over the previous 10 years. Because the tech industry is dominated by men, most of those CVs were from men, and the algorithms learned that being male equated with success and started to downgrade women!

So, if you had ‘captain of the hockey team’ on your CV, that might be looked on favourably, but if it said ‘captain of the women’s hockey team’ then it was probably destined for the bin. Women didn’t represent successful candidates, as there were so few women in top positions, and the system started to reinforce this – it’s known as algorithmic bias. Thankfully, Amazon stopped using the system when they understood the consequences, but damage must have been done.
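
For readers who like to see the mechanics, here is a deliberately simplified sketch in Python of how this happens. The CVs and the scoring method below are invented (Amazon’s real system was far more sophisticated), but the dynamic is the same: when almost all of the ‘successful’ training examples come from men, words associated with women end up with negative weight.

# An illustrative toy CV screener (not Amazon's actual system) trained on
# historical hiring outcomes. The data is invented to show how a skewed
# history teaches the model that female-associated words predict rejection.
from collections import Counter
import math

# Hypothetical training data: (CV text, was the candidate ultimately hired?)
# Because most past hires were men, "women's" appears mainly in rejected CVs.
historical_cvs = [
    ("captain of the hockey team python experience", True),
    ("rugby club captain java developer", True),
    ("chess society president finance experience", True),
    ("captain of the women's hockey team python experience", False),
    ("women's chess society president java developer", False),
    ("netball team captain finance experience", False),
]

hired_words, rejected_words = Counter(), Counter()
for text, hired in historical_cvs:
    (hired_words if hired else rejected_words).update(text.split())

def word_weight(word: str) -> float:
    """Log-odds of a word appearing in hired rather than rejected CVs (smoothed)."""
    return math.log((hired_words[word] + 1) / (rejected_words[word] + 1))

def score_cv(text: str) -> float:
    """Higher score = 'looks more like the candidates who succeeded in the past'."""
    return sum(word_weight(w) for w in text.split())

print(score_cv("captain of the hockey team python experience"))          # scores higher
print(score_cv("captain of the women's hockey team python experience"))  # scores lower

Note that gender is never mentioned explicitly anywhere in the program; the bias arrives entirely through the historical data.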

I’ll give you another example from my own experience. I chaired a commission which investigated the use of algorithms in the criminal justice system.

We found that they are being used widely by police forces all over the country for things like facial recognition (where faces in a crowd – and this could be your face, without so much as a by-your-leave – are cross-referenced against faces on ‘watch lists’) and predictive crime mapping. Remember that Tom Cruise film, ‘Minority Report’, where the role of the police was to prevent crimes before they happened? Well, this is very similar: an algorithm is used to work out where police officers should be deployed to detect more crimes.

But this can create real risks to fairness and promote discrimination. It doesn’t take a genius to work out that the more police officers you put in one place, the more crimes they are going to detect there. The use of the algorithm becomes self-reinforcing.
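
A deliberately simple sketch in Python (with numbers invented purely for illustration, standing in for no real police system) shows how this self-reinforcement works. Both areas below have exactly the same amount of underlying crime, but the crime the algorithm ‘sees’ is only what gets recorded – and that depends on where the officers already are.

# An illustrative sketch only (invented numbers, not a real policing model).
# Both areas have the same true level of crime, but Area B has historically
# been patrolled more heavily, so more of its crime makes it into the records.
TRUE_CRIMES = {"Area A": 100, "Area B": 100}   # identical underlying crime
officers = {"Area A": 5, "Area B": 15}         # a historical imbalance in patrols

for year in range(5):
    # Recorded crime is only a proxy: roughly proportional to patrol presence.
    recorded = {area: TRUE_CRIMES[area] * min(1.0, 0.04 * officers[area])
                for area in TRUE_CRIMES}
    # The 'predictive' step sends officers where recorded crime is highest.
    total = sum(recorded.values())
    officers = {area: round(20 * recorded[area] / total) for area in recorded}
    print(year, recorded, officers)

# The imbalance never corrects itself: the recorded data keeps 'confirming'
# that Area B is the high-crime area, because that is where the officers are.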

Also, there is no way to truly measure crime in our society, only proxies such as convictions or, even more problematically, arrests. As it is commonly accepted that the justice system under-serves certain populations and over-polices others, these biases are reflected in the information that the algorithm uses. This creates a real threat to our basic human rights.

The perfect storm

So, if you add the possible bias of the programmers (and every one of us holds our own set of biases but that’s another article) together with the bias in the information that is used to train algorithms (which reflect the past and current bias of our societies) and finally the growing body of research that confirms that human beings may lack the confidence and knowledge to question or override algorithmic recommendations, then you could end up with a perfect storm.

A world where increasing numbers of decisions are being made by unchecked computers based on information that discriminates against sections of society in a way which none of us would want but where no one is able or willing to say the computer’s wrong!

Can we solve bias in AI?

My call to action is to make sure that we do not sleepwalk into this dystopian nightmare. There’s another future we could have where, if we can get this right, we could reduce or even remove the bias that blights our current decision-making, where computers can do the heavy lifting for us but in a way that is fair and right and good.

At least in part, the solutions could lie in a bias-removal approach to how algorithms are built and checked. Some systems are now checked by switching data over (male to female and female to male, switching ethnicities, and so on) to see whether the algorithm produces differing results. If it does, then it’s back to the drawing board. 5 This is challenging, painstaking stuff, but if we want a fair and unbiased world, it is surely worth the effort.
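
As a concrete illustration of that kind of check, here is a minimal sketch in Python. The function names, the toy ‘screener’ and the candidate records are all hypothetical; the point is simply that if swapping only the protected attribute changes the output, the system fails the test.

# A minimal sketch of the 'switch the data over' check described above.
# The names, the toy screener and the candidates are all hypothetical.
from typing import Callable, Dict, List

def counterfactual_flip_test(model: Callable[[Dict], float],
                             records: List[Dict],
                             attribute: str,
                             swap: Dict[str, str],
                             tolerance: float = 1e-6) -> List[Dict]:
    """Return every record whose score changes when only `attribute` is swapped."""
    flagged = []
    for record in records:
        flipped = dict(record, **{attribute: swap[record[attribute]]})
        if abs(model(record) - model(flipped)) > tolerance:
            flagged.append(record)
    return flagged

def toy_screener(candidate: Dict) -> float:
    """A stand-in model that has (wrongly) learned to penalise female candidates."""
    score = float(candidate["years_experience"])
    if candidate["gender"] == "female":
        score -= 2.0          # the kind of learned bias the check is meant to catch
    return score

candidates = [
    {"gender": "female", "years_experience": 8},
    {"gender": "male", "years_experience": 3},
]
print(counterfactual_flip_test(toy_screener, candidates, "gender",
                               {"male": "female", "female": "male"}))
# Both candidates are flagged: changing gender alone changes the score,
# so this system would be sent back to the drawing board.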

As lawyers and concerned citizens, we can make a difference. We all need to play our part to make sure that our increasingly computerised and automated world can eradicate systemic bias, not hardwire it into our futures. ■

Christina Blacklaws

Former President of the Law Society of England & Wales

1. Caroline Criado-Perez, Invisible Women: Exposing Data Bias in a World Designed for Men

2. PPE: ‘one size fits all’ design is a fallacy that’s putting female health staff at risk (RCNi); PPE and women guidance (tuc.org.uk)

3. Women in law roundtable discussions across the regions (The Law Society); Women in Leadership in Law (The Law Society)

4. https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine

5. 4 Ways to Address Gender Bias in AI (hbr.org)
