Why addressing AI-driven discrimination is so important
Artificial Intelligence (AI) has the potential to deliver enormous value. Yet machine learning bias, also known as algorithmic bias or AI bias, can cause algorithms to reproduce human bias.
Let's hear from Sophia Ignatidou who works as Group Manager for AI and Data Science at the UK's Information Commissioner’s Office (ICO).
"Growing up, the concepts of equity and inclusion were always close to my heart. As a woman who also became an immigrant, I don’t think I would have got as far in my life if I didn’t believe in these values so strongly. I began my career as a journalist, working for newspapers in both Greece and the UK. I wanted to have a more meaningful impact on the world and, in the hope that a career change would enable this, I decided to study international relations and diplomacy," explains Sophia.
After immersing herself in the world of Artificial Intelligence (AI) as part of a leadership fellowship at Chatham House, Sophia secured her position within the ICO. In an industry where women are often underrepresented, she is both excited and proud to be leading a team consisting primarily of women working in tech and innovation. "It is so important for organisations, especially those involved in shaping policy, to champion diversity and inclusion in the workplace," comments Sophia.
From chatbots in classrooms to artwork by algorithm, AI has quickly established itself in people's everyday lives. As the usage of these technologies grows, it has never been more important to turn attention to tackling unfairness in these systems and the impact they have on the world. While the academic community has been flagging the discriminatory effects of AI for some time now, these concerns are now a regular feature both in the news headlines and on the political agenda.
There are numerous examples of algorithms reinforcing the gender stereotypes that we fight so hard to dispel – whether that’s censoring or objectifying images of female bodies, or disproportionately rejecting female applicants from a particular job. But it’s important to understand both the scale and the nuances of this issue.
So how can bias appear in AI?
"AI is trained and tested on existing data, so what it learns will often reflect bias that is already present. We cannot expect these systems to create an automated space free of human error – after all, humans are the ones building them," says Sophia.
"Discrimination, whether that’s towards gender, ethnicity or other personal data, can often be traced back to the original data used to train a model. If there is a lack of accurate data highlighting the needs, interests and experience of women, this will be reflected in how the AI performs."
1. Unbalanced training data
An AI model is trained to produce outcomes that best fit what it has been taught. However, the data used to train the AI might not fairly represent the demographics of society. If men and women aren’t equally represented in a dataset, for example, the system will fit the over-represented group better, increasing the risk of unfair outcomes for the under-represented one.
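To make this mechanism concrete, here is a minimal sketch with entirely made-up numbers (not from the article): a one-parameter classifier is trained on a pooled dataset in which group A outnumbers group B nine to one, and the two groups' labels switch at different feature values. Because the threshold is chosen to minimise overall error, it fits the majority group, and the under-represented group bears almost all the mistakes.

```python
# Hypothetical toy data: (feature, label, group). The label switches at
# feature value 9 for group A but at 14 for group B, and group A
# outnumbers group B nine to one in the training set.
data = ([(x, 0, "A") for x in range(0, 9)] +
        [(x, 1, "A") for x in range(9, 18)]) * 9 + \
       ([(x, 0, "B") for x in range(5, 14)] +
        [(x, 1, "B") for x in range(14, 23)])

def errors(threshold):
    """Count training errors for the rule: predict 1 iff feature >= threshold."""
    return sum((x >= threshold) != bool(y) for x, y, _ in data)

# "Training": pick the single threshold with the fewest errors overall.
best = min(range(24), key=errors)

def accuracy(group):
    pts = [(x, y) for x, y, g in data if g == group]
    return sum((x >= best) == bool(y) for x, y in pts) / len(pts)

print(best)           # 9 – the threshold that suits the majority group
print(accuracy("A"))  # 1.0 for the majority group
print(accuracy("B"))  # 13/18 for the minority group
```

The model is "accurate" overall, which is exactly how the problem hides: aggregate metrics look good while the under-represented group quietly receives worse results.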
2. Historical bias in the training data
If the training datasets and the way these have been labelled reflect an existing stereotype, the AI system will reproduce the same patterns of discrimination – particularly in fields and occupations where this has historically been a problem, such as when the data comes from a traditionally male-dominated industry. Interestingly, an AI system can also be used to identify unfair discrimination – it can be a powerful tool for uncovering hidden patterns in data, both positive and negative. AI can be an opportunity to hold a mirror up to society and force us to confront inequalities.
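One simple way to hold that mirror up, sketched below with invented records, is to audit historical decision data for disparate selection rates before it is ever used to label a model. The 0.8 cut-off used here is the well-known "four-fifths" rule of thumb for flagging possible disparate impact; the numbers and group codes are purely illustrative.

```python
# Hypothetical historical hiring records: (group, was_hired).
# 2 of 10 "F" applicants were hired versus 5 of 10 "M" applicants.
records = ([("F", True)] * 2 + [("F", False)] * 8 +
           [("M", True)] * 5 + [("M", False)] * 5)

def selection_rate(group):
    """Fraction of applicants in this group with a positive outcome."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# Compare the disadvantaged group's rate against the favoured group's.
ratio = selection_rate("F") / selection_rate("M")

# Four-fifths rule of thumb: a ratio below 0.8 suggests the historical
# decisions (and any training labels derived from them) warrant review.
flagged = ratio < 0.8

print(selection_rate("F"), selection_rate("M"))  # 0.2 vs 0.5
print(ratio, flagged)                            # 0.4, True
```

An audit like this does not explain *why* the disparity exists, but it turns a hidden pattern in the data into a concrete number that can be questioned before a model inherits it.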
Working towards equity in AI development
"If AI-driven discrimination is left unaddressed, we could end up shutting out the very people who are best placed to challenge it," suggests Sophia. In order to create technology that recognises and addresses discrimination, there needs to be a focused effort across the AI industry. From researchers to developers, engineers to data scientists, we must make it the responsibility of anybody working with AI to tackle this problem. Those working with AI should take care to determine and document their approach to addressing unfair discrimination from the start – that way, safeguards can be put in place to try to prevent unwanted bias from rearing its head further down the line. "By now, the problem of biased training data has been widely documented, but it is important to note that unfair outcomes are not limited to the training stage of AI. We need to champion equity in every aspect of the AI lifecycle to ensure that future models are as fair as possible and comply with regulations," explains Sophia.
Where does the ICO come in?
The ICO is the UK’s independent data protection regulator, working to empower people through information. "We’ve been working on AI and its surrounding issues for a while now, trying to empower and inform individuals about their rights, but also to help organisations use personal data responsibly and confidently," says Sophia. Her team is aware that gender-based discrimination is just one of various potential risks that AI poses. Using data protection legislation and its principles, the ICO is working hard to raise awareness among AI developers about the tools at their disposal to mitigate unfair discrimination.
Tackling AI-driven discrimination is one of the key priorities that the ICO sets out in its three-year strategic plan, ICO25. As well as investigating concerns about the potential risks posed by the technology, the ICO has issued guidance and practical toolkits to educate AI developers on ensuring their algorithms treat people and their information fairly. Naturally, the ICO’s work in this space is expanding. With the huge amounts of personal data involved in training AI, it is important to encourage transparency and best practice. In 2022, the ICO hosted workshops with the Alan Turing Institute, and it will soon be publishing an update on fairness in AI as part of its ongoing improvement of its existing guidance on AI and Data Protection. Everything from that guidance to its research programme can be found on the ICO website.