
Is It Possible for AI to Be Ethical?

 

Jan 23, 2019


Given the complexities of human morality, it might be impossible to design ethical AI; but developers should nonetheless always be alert to potential biases.

The rise of artificial intelligence (AI) has made the classic trolley problem in ethics a favourite at tech conferences—but in an updated form. In the context of driverless vehicles, the issue arises when the vehicle’s AI algorithm is faced with a dilemma of whether to hit a young child or an elderly person. What does it mean to code such a decision into a computer algorithm? What is the ‘best’ choice? 

“In such a lose-lose situation, I personally don’t know how even a human can make a so-called ‘best choice’,” said SGInnovate Founding CEO Mr Steve Leonard, who moderated the panel discussion.

The panel comprised experienced industry and regulatory professionals: Mr Richard Koh, Chief Technology Officer of Microsoft Singapore; Dr David Hardoon, Chief Data Officer and Head of the Data Analytics Group at the Monetary Authority of Singapore; and Mr Yeong Zee Kin, Assistant Chief Executive (Data Innovation and Protection Group) at the Infocomm Media Development Authority of Singapore and Deputy Commissioner of the Personal Data Protection Commission. 

Mr Yeong, whose work includes developing forward-thinking governance on AI and data, concurred with Mr Leonard, saying, “When we obtained our driver’s license, we were never asked to answer such a question. So why should we expect an AI to be able to answer it ‘correctly’?” 

However, he also pointed out that the companies producing driverless cars bear a certain level of responsibility. “At the end of the day, the system is designed by humans. If there are known high-risk scenarios involved, we shouldn’t be leaving it to AI models to make decisions based on data sets. The solution? Design it such that you can be in control—and manage the risk by narrowing the window for it to occur.”

For example, the autonomous car could be designed to slow down when entering a school zone, Mr Yeong suggested. “This way, you wouldn’t have the question about whether it can stop in time, or which individual to hit,” he said.
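A minimal sketch of this “narrow the risk window” idea: rather than asking the driving system to choose between bad outcomes, a simple rule-based layer caps speed whenever the vehicle is inside a high-risk zone such as a school zone, making the dilemma far less likely to arise in the first place. All names and thresholds below are illustrative assumptions, not taken from any real autonomous-vehicle stack.

```python
# Hypothetical illustration: cap planned speed inside designated high-risk zones.
from dataclasses import dataclass


@dataclass
class Zone:
    name: str
    speed_limit_kmh: float  # hard cap applied while inside this zone


SCHOOL_ZONE = Zone(name="school_zone", speed_limit_kmh=20.0)


def constrained_speed(planned_speed_kmh: float, active_zones: list) -> float:
    """Return the planned speed, capped by the strictest active zone limit."""
    caps = [zone.speed_limit_kmh for zone in active_zones]
    return min([planned_speed_kmh] + caps)


# Example: the planner requests 50 km/h, but the car is inside a school zone.
print(constrained_speed(50.0, [SCHOOL_ZONE]))  # -> 20.0
```

The design choice here mirrors Mr Yeong’s point: the constraint is explicit and human-authored, so responsibility for the high-risk scenario stays with the system’s designers rather than being delegated to a learned model.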

Ethics is Contextual

Although the humans who develop AI are partially responsible, decisions are far from clear-cut as different humans may reach different conclusions about what is ethically right, Dr Hardoon said, citing an experiment dubbed the Moral Machine, which was launched by researchers at the Massachusetts Institute of Technology in 2016.

Instead of asking an AI system to choose between running over a young person or an old person, the researchers put the question to people around the world, gathering some 40 million decisions. “What they found was that [people from] Asian cultures would choose to hit the younger person. I am not saying that it is right or wrong, but given the position of a choice, that was what they chose. However, for the Americans, the results were the exact opposite.”

This adds another dimension to the problem, because it shows that ethics is not universal, but culturally situated and contextual.
Dr Hardoon

Taking the discussion a step further, Dr Hardoon made the point that when humans consider a decision to be ‘ethical’, it is actually a two-step evaluation. “First you make a decision, then you decide whether it is ethical or not, within the context of the culture you are in. But an AI doesn’t work this way. It makes a decision based on the statistics and data that it has, so it is not in a position to say whether a decision is ethical or not. We cannot hold an AI system to a moral standard.”

 


(L to R): Steve Leonard, SGInnovate Founding CEO; Richard Koh, Chief Technology Officer, Microsoft Singapore; Yeong Zee Kin, Assistant Chief Executive (Data Innovation and Protection Group) of IMDA and Deputy Commissioner of the Personal Data Protection Commission; and Dr David Hardoon, Chief Data Officer, Data Analytics Group, Monetary Authority of Singapore

 

AI Ethics versus Ethical AI

While the panellists agreed that it is impossible to have an ‘ethical AI’, they believe that ‘AI ethics’ is a discussion that we should constantly engage in. 

These two are entirely different things, Mr Koh stressed. “AI ethics is about making sure there are no biases when building the algorithms, and this is something that we need to stringently make sure that we achieve. On the other hand, to speak of an ‘ethical AI’ means that we expect it to be able to make moral decisions, which I don’t think an algorithm is capable of,” he explained.

In Microsoft’s case, Mr Koh said, everyone involved with the technology, not just the programmers, is educated on AI ethics. When forming teams to create technological products, the company also takes great care to ensure diversity among team members, which brings in multiple points of view and ensures difficult questions are asked.

RELATED ARTICLE: Helping AI Embrace Empathy

There are lots of negative connotations around the word ‘bias’, but it is not inherently a bad thing. For humans, the ability to have a bit of bias—or preferential treatment of one thing over another—is what enables us to make a decision.
Dr Hardoon

Lastly, in view of these difficulties, Mr Yeong advised developers and companies to first identify whether biases exist in their data, and whether there are hidden patterns embedded in it. By understanding the data that they are using, they can then proactively address these biases and prevent the AI from making unintentional discriminatory decisions, he said.

“If a company makes an intentional decision to discriminate, then they need to be prepared to defend it if the regulators are unhappy with them. But it is the unintentional ones that we are talking about here—how do we minimise them, and how can we address them?” Mr Yeong said.  
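As a small, hypothetical illustration of the kind of check Mr Yeong describes, a team could compare outcome rates across groups in its training data before building a model, to surface hidden patterns that might otherwise be learned silently. The records and field names below are invented for illustration; a real audit would cover many more attributes and use proper fairness tooling.

```python
# Hypothetical pre-training bias check: compare positive-outcome rates by group.
from collections import defaultdict

records = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 1},
]


def approval_rate_by_group(rows):
    """Return the fraction of positive outcomes for each group in the data."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        positives[row["group"]] += row["approved"]
    return {group: positives[group] / totals[group] for group in totals}


rates = approval_rate_by_group(records)
print(rates)  # -> {'A': 0.67, 'B': 0.33} (approx.) -- a gap worth investigating
```

A gap like this does not prove unfair treatment on its own, but it flags exactly the kind of unintentional pattern that, in Mr Yeong’s framing, companies should find and address before the model ever makes a decision.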

With its vast potential, AI is set to transform our lives for the better. With this technology comes a new frontier for ethics and risk assessment. We are always keen to explore and debate such considerations for our society, particularly in Singapore. Read more of our AI stories here.

Technology:
AI