The Future of AI Should Be Fair and Humane – EmTech Asia 2020
August 10, 2020 News

Day by day, artificial intelligence and its subfields are being applied to more aspects of our daily lives. Machine learning algorithms embedded in our phones make browsing more efficient, smart robots aid manufacturing and production in many industries, and language models can now generate remarkably fluent text.

Many people worry that these capabilities could displace human workers and introduce unfairness into the systems they touch, and those fears are not unfounded. This begs the question: is AI more beneficial or detrimental to humanity? The ethical dilemmas that come with AI became a hot topic of discussion at EmTech Asia 2020, a conference organised by MIT Technology Review and held in a virtual-only format for the first time. Featuring speakers from various technology institutions, the conference discussed the current state of AI and its impact across industries, how it could affect human morals and ethics, and how its future development can be steered to deliver value to society.

Peter Norvig of Google opened the conference with a talk on the importance of “Teaching Computers to be Fair”. Norvig noted that AI is creating new opportunities for many people around the world, but it is also raising questions about fairness and bias. In a machine learning system, data is collected, fed into a model with a chosen objective, and predictions are made; unfairness can be introduced at any step along the way.
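
To make that concrete, here is a minimal sketch of the data-to-predictions pipeline Norvig describes, with comments marking where bias can enter at each stage. The synthetic data, feature names and model choice are our own illustrative assumptions, not anything shown at the conference.

```python
# Illustrative sketch of a data -> model -> predictions pipeline,
# using synthetic data; all names here are assumptions, not from the talk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# 1. Data collection: if one group is under-represented or mislabelled
#    here, bias enters the system before any modelling happens.
income = rng.normal(50, 15, n)                 # predictive feature
group = rng.integers(0, 2, n)                  # sensitive attribute
repaid = (income + rng.normal(0, 10, n) > 50).astype(int)  # label

# 2. Model and objective choice: optimising raw accuracy does nothing
#    to mitigate whatever imbalance the data already carries.
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, repaid)

# 3. Prediction: the decisions inherit any bias from steps 1 and 2.
approved = model.predict(X)
print("overall approval rate:", approved.mean())
```

The point is not that any single step is wrong on its own, but that each stage is a place where fairness has to be checked rather than assumed.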

“The data might be biased, it might have the wrong type of model that doesn’t mitigate that bias, and it might have chosen an unfair objective,” explained Norvig. Such a system, he noted, may help certain people while leaving others unaided. A typical business example is bank lending, where loans are granted to some individuals but not others. With AI in the loop, it becomes hard to tell whether the approved applicants genuinely deserved their approvals or whether the model is simply confirming biases already present in its data.
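
One simple way to surface that kind of disparity is a demographic-parity check: compare approval rates across groups. The snippet below is our own illustrative example with made-up decisions, not an audit method presented at the conference.

```python
# Demographic-parity check on loan decisions (made-up data for illustration).
import numpy as np

group = np.array([0, 0, 0, 0, 1, 1, 1, 1])     # sensitive attribute per applicant
approved = np.array([1, 1, 1, 0, 1, 0, 0, 0])  # model's approve/deny decisions

for g in (0, 1):
    rate = approved[group == g].mean()
    print(f"group {g}: approval rate = {rate:.0%}")

# A large gap suggests the data, model or objective treats groups unequally.
gap = abs(approved[group == 0].mean() - approved[group == 1].mean())
print(f"approval-rate gap: {gap:.0%}")
```

Equal approval rates alone do not prove a system is fair, but a large unexplained gap is exactly the kind of signal the checklist Norvig goes on to describe is meant to catch.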

To avoid this, Norvig offered a checklist for companies that want their algorithms to be fair: collect data fairly from the start, with minorities properly represented; research the choice of model and objective carefully; test every element of the system; deploy in conditions that match the testing environment; and monitor and maintain the system so that it adapts as the world changes.

Mary Gray, Senior Principal Researcher at Microsoft Research and a faculty member at Indiana University, shed light on a different issue affecting the workforce, one that has not caught as much public attention as the others. In her presentation, Gray spoke about “ghost work”, referring to “the dismantlement of full-time employment itself and labour conditions that devalue or hide workers”.

Gray described such work as part of the “Dark Side of AI”, citing that “as technology automates some work, it creates new types of work that depend on human expertise”. These new types of work, Gray explained, make up today’s last-mile jobs: structuring and cleaning training data for AI, such as data labelling, content review and telehealth, and human-in-the-loop information services, which include content moderation, translation, captioning and contact tracing.

The climax of the conference came when Toby Walsh, Scientia Professor of Artificial Intelligence at UNSW Sydney, asked a deceptively simple question: why? Why is there all this fuss about AI and ethics, and why should people be fussing about it?

With the emergence of AI, ethical frameworks have been adopted by governments and by enterprises such as Google, Amazon and Microsoft, while bodies including the IEEE, ISO and OECD have published standards and principles for AI.

According to Walsh, we should not be surprised that so many ethical frameworks are being formulated for AI: the technology is now used extensively, and frameworks developed for earlier technologies can guide how we use it. He outlined the core principles such frameworks should comprise: beneficence, non-maleficence, autonomy and justice.

“All of society should be involved, and for a debate to happen we need education. Finally, we need regulations, we need policies. The government should step in and be informed by independent experts,” Walsh added on the question of fairness in AI.

To conclude, Walsh passed the discussion to Roland Chin, President and Vice-Chancellor of Hong Kong Baptist University, who talked about the ethical questions raised by the various applications of AI that are already being implemented on a wide scale.

Chin mentioned a handful of AI applications that have grown in prominence since the 2010s, such as IBM Watson, neural networks and self-driving cars, before presenting the ethical issues they force us to address as we transition into an AI-enabled future: responsibility, the self, a fair society, creativity, super babies, deepfakes, kids and youth, and health. Without a doubt, more issues will arise, and hopefully ethical solutions to them will follow.

Such frameworks and demands will keep AI’s implementation in check, ensuring that human values and benefits are upheld and prioritised, now and into the future.
