How to break algorithmic bias in Artificial Intelligence (AI)

As we know, unconscious and conscious bias is something we are working to break in humans, but what about technology? 

Bias in Artificial Intelligence 

Sarika Hussain is a Quality Improvement Manager, speaker and Professional Scrum Master who shares information about bias in Artificial Intelligence (AI) systems as part of her sArIka speaks YouTube series. 

In a video celebrating International Women’s Day’s #BreakTheBias theme, Sarika explains the different types of algorithmic bias and how they are introduced into society. 

Different types of algorithmic bias 

There are five types, according to Sarika. The first is Training Bias, which arises when the way the data is captured is itself flawed. 

“For example, if the device through which the data is supposed to be captured is in itself faulty, that then means garbage in, garbage out,” she explains. 
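Sarika’s “garbage in, garbage out” point can be sketched with a toy example. The sensor, its fault, and all the numbers below are hypothetical; the point is simply that a systematic capture error is inherited by anything trained on the data:

```python
# Hypothetical faulty sensor: it under-reads by 10%, so every
# training example inherits the error ("garbage in, garbage out").
true_temps = [20.0, 25.0, 30.0]
captured = [t * 0.9 for t in true_temps]  # systematic device fault

# A model "trained" on these readings learns the faulty scale.
learned_mean = sum(captured) / len(captured)
true_mean = sum(true_temps) / len(true_temps)
print(learned_mean)  # 22.5 -- off by the same 10%
print(true_mean)     # 25.0
```

No amount of clever modelling downstream recovers the 10% the device never recorded, which is why Sarika starts with the capture step.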

Under-represented data 

The second is Underrepresented Data: “This is a big one because in the real world scenario we have an underrepresented population in terms of gender, age and race,” explains Sarika. 

This means the data misrepresents those groups, so any prediction or outcome produced by an AI system trained on it tends to be biased. 
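A minimal sketch of how under-representation skews outcomes; the groups, sample sizes, and labels here are invented for illustration:

```python
# Hypothetical dataset: 90 samples from group A, 10 from group B.
# A naive model fitted to skewed data can look accurate overall
# while failing the underrepresented group entirely.
data = [("A", 1)] * 90 + [("B", 0)] * 10  # (group, true label)

def predict(group):
    # The model only ever saw label 1 dominate, so it always says 1.
    return 1

correct_a = sum(1 for g, y in data if g == "A" and predict(g) == y)
correct_b = sum(1 for g, y in data if g == "B" and predict(g) == y)

print(correct_a / 90)  # accuracy for group A: 1.0
print(correct_b / 10)  # accuracy for group B: 0.0
```

Overall accuracy is 90%, which can hide the fact that the minority group gets every prediction wrong.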

The third is that we tend to settle for Shortcuts.

“Because these technologies require information in terms of numbers, then how do we convert the softer elements? For example, how do you convert the relationship with your siblings into a number?” questions Sarika. 

“It’s quite difficult, so after multiple trials and errors we tend to settle on a score and a shortcut.”
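Sarika’s sibling example can be made concrete with a toy encoding. The categories and scores below are an arbitrary, hypothetical shortcut; the sketch shows how two quite different situations collapse into the same number:

```python
# Hypothetical shortcut: squeezing a soft concept ("relationship
# quality") into one ordinal score. The ordering is a design
# choice, not a fact about the world.
SCORE = {"close": 3, "cordial": 2, "distant": 1}

person_a = ["close", "distant"]    # one close, one distant sibling
person_b = ["cordial", "cordial"]  # two cordial siblings

def avg_score(relationships):
    return sum(SCORE[r] for r in relationships) / len(relationships)

print(avg_score(person_a))  # 2.0
print(avg_score(person_b))  # 2.0 -- distinct realities, same score
```

Any system consuming that score downstream can no longer tell the two people apart, which is the information loss such shortcuts bake in.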


Amplification of data 

Looking at the fourth type of algorithmic bias, Sarika focuses on Amplification of Data.

“For any e-commerce site, if they receive positive feedback and use only that to build an AI system, then it will show that those products are very good,” she says. 

Sarika highlights that this kind of prediction then tends to amplify those products, even though the feedback they were scored on is not a fair representation. 
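The e-commerce example can be sketched as follows; the products and ratings are made up, and the filtering step stands in for a system built only from positive feedback:

```python
# Toy sketch: a score built only from positive reviews amplifies
# a product, ignoring the negative feedback that was filtered out.
reviews = [5, 5, 1, 1, 1]  # hypothetical mixed real feedback

def biased_score(ratings):
    # Uses only the positive ratings (>= 4), as in Sarika's example.
    positives = [r for r in ratings if r >= 4]
    return sum(positives) / len(positives)

def fair_score(ratings):
    return sum(ratings) / len(ratings)

print(biased_score(reviews))  # 5.0 -- looks excellent
print(fair_score(reviews))    # 2.6 -- the full picture
```

The biased score then feeds recommendations, which attract more purchases and more reviews, amplifying the distortion over time.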

Malicious attack 

And finally, the fifth type is Malicious Attack of Data: “For example, there was a situation where a bot was introduced on Twitter and it took information from a live conversation. So when the data is being pulled from a real environment, and if your real environment is biased, you tend to get those kinds of biased outcomes.”
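The Twitter-bot scenario is a form of data poisoning. The counts below are invented, but the sketch shows how a system that learns continuously from a live feed absorbs whatever the feed contains:

```python
# Toy data-poisoning sketch: an online learner updates its
# behaviour from a live stream. If attackers flood the stream,
# the learned distribution shifts toward their content.
from collections import Counter

learned = Counter(["friendly"] * 50)  # behaviour before the attack
live_feed = ["hostile"] * 200         # hypothetical coordinated flood
learned.update(live_feed)             # online learning step

dominant = learned.most_common(1)[0][0]
print(dominant)  # "hostile" now dominates what the bot echoes
```

Because the attack happens through ordinary-looking input rather than a code exploit, it is hard to detect without monitoring what the system is learning.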

Breaking the data bias 

Looking at how to break the biases listed, Sarika says that, firstly, any recommendation from an AI system should be critically evaluated. “Challenge it and ask a number of questions before accepting it,” she advises. 

Secondly, Sarika says to use the right data, data that isn’t biased. “For those building AI systems, I would request that you take proper care and build it ethically.” 

“The other thing that comes to my mind is, why don’t we test the AI systems just like a COVID vaccine before deploying it to society? It needs to be tested and checked and only then allowed,” says Sarika. 

Highlighting that this could be done under the ethics of AI, with stricter laws and regulations in place, Sarika concludes: “Tomorrow we would have a much safer society and less discrimination.”

What do you think about bias in AI data? Share your thoughts with us and let’s help break data bias together! 
