Covering Disruptive Technology Powering Business in The Digital Age

If AI is Intelligent, Why Does It Still Have Biases?
November 30, 2020 | News
There is a famous saying that “you are what you eat”: eat healthily and you will be healthy, and vice versa. But you cannot monitor everything you eat all the time; something may look good yet have underlying effects on your body, unbeknownst to you.

Artificial Intelligence works the same way: it will only produce good outputs if you feed it good inputs, yet the data points you provide to the algorithm may be skewed in the first place. Intentional or not, this mistake introduces biases into AI and its subset, machine learning, that can do more harm than good.

These biases, of course, come from humans themselves. Society has its prejudices, and this unfairness is still prevalent today. It is also apparent that these biases will creep into AI as the technology advances, since humans are the ones who create it and supply its data.

For example, a few months ago Twitter users noticed that the platform’s photo-cropping algorithm favoured white faces, leaving people of colour out of the picture. This happened not only with real-life photos of humans but also with fictional characters.

This is only one instance of AI showing bias against certain demographics, such as race, gender and age. Even a company as big as Amazon struggles with this problem: its AI recruiting tool was revealed to be biased against women, because the algorithm favoured words that appeared mostly in resumes submitted by men.

Such occurrences can and will affect the lives of unsuspecting individuals, with potentially serious consequences. What happens if biases creep into a life-or-death situation, say an AI tool in a hospital? It is imperative for organisations and entities to discuss what they can do to prevent or mitigate existing biases in AI. But first, let us look at some reasons why these biases come into existence in the first place.

  • Performance over everything else: The goal of AI is to automate and ease complex processes so that humans have assistance in everyday work. As such, AI developers tend to prioritise the performance of their machine-learning algorithms to aim for faster implementation. A huge amount of data can improve AI, but in the process the content of that data is often overlooked for the sake of performance.
  • Incomplete data: AI is only as good as the data it consumes, so if AI is fed incomplete data, something will always be missing from its implementation, even if it is not noticeable. Say you are comparing two groups in your AI, one with a thousand data points and the other with only half as many. The AI will favour the group with more data, since it can only rely on what it sees. Likewise, if you search Google Images for a profession dominated by a certain demographic, the results will tend to show that demographic, even though diversity exists within the profession.
  • Lack of context: What applies to one case does not necessarily apply to another, or in general. AI may also ignore a factor entirely because there is no data about it, which in turn affects the people concerned. Say you are applying for a loan to pay for your pet’s medication, but the AI tool only recognises hospitalisation costs for people.
  • Prejudiced data with underlying causes: More often than not, the data points entering the AI have underlying causes the developer may not be aware of. Is this the fault of the person who fed the data to the AI, or of the data source itself? Frequently it is the latter, since data is retrieved from a society that is inherently biased. Solving that comes back to addressing prejudice in the real world, but that is another discussion.
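The “incomplete data” problem above can be made concrete in a few lines of code. The sketch below uses a made-up dataset of group labels (the counts are illustrative, not taken from any real system) to show how a naive majority-class predictor looks deceptively accurate when one group simply has more data points:

```python
from collections import Counter

# Hypothetical labelled dataset: 1,000 examples from group "A",
# but only 500 from group "B" -- half as many, as in the text.
samples = ["A"] * 1000 + ["B"] * 500

counts = Counter(samples)
majority = counts.most_common(1)[0][0]

# A "model" that always predicts the majority group is right two
# thirds of the time -- the imbalance alone makes it look competent,
# while every member of group "B" is misclassified.
accuracy = sum(1 for s in samples if s == majority) / len(samples)
print(majority, round(accuracy, 2))  # A 0.67
```

The point is that an accuracy number computed over skewed data hides the fact that the smaller group gets everything wrong.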

There are still ways to prevent and even mitigate biases, and technologies such as AI should keep up with the changing times as we journey into a more progressive future. To do this, the following should be considered:

  • Diversity in the community: The root of biases in AI is humans themselves. There should therefore be a diverse set of people, representing different sectors and minorities, feeding data points into the system. With this variety, there is more understanding of the data, and context is treated as important.
  • Maintain balance as much as possible: The data points of one group should be equal to those of another to ensure fairness. That way, your AI cannot learn a biased pattern or a correlation with a specific demographic. Everyone can be a police officer, the same way everyone can be a criminal.
  • Education, research and peer reviews: It is important to educate yourself about the various applications of AI, to see how others have solved certain issues and to observe the shortcomings of some algorithms. Peer reviews should also be conducted so that you have perspectives other than your own.
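One simple reading of the “maintain balance” advice is to down-sample every group to the size of the smallest one before training. The sketch below is a minimal illustration in plain Python, assuming a hypothetical dataset with far more records for one group than another:

```python
import random

random.seed(0)  # reproducible sketch

# Hypothetical dataset: many more records for group "A" than group "B".
records = [("A", i) for i in range(1000)] + [("B", i) for i in range(500)]

by_group = {}
for group, row in records:
    by_group.setdefault(group, []).append(row)

# Down-sample every group to the size of the smallest one, so a model
# trained on the result cannot learn a shortcut from sheer volume alone.
target = min(len(rows) for rows in by_group.values())
balanced = {g: random.sample(rows, target) for g, rows in by_group.items()}

print({g: len(rows) for g, rows in balanced.items()})  # {'A': 500, 'B': 500}
```

Down-sampling discards data, so in practice teams may instead up-sample the smaller group or weight examples; this sketch only shows the balancing idea itself.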

AI is only as good as the humans creating it. As Harvard Business Review put it in an article, “only a multi-dimensional and multi-stakeholder approach can truly address AI bias by defining a values-driven approach, where values such as fairness, transparency and trust are the centre of creation and decision-making around AI”. Because truly, humans will be the ones to benefit from AI, so humans might as well start to better themselves from within.
