
How Do We Design Ethical Frameworks to Ensure We Build Cross-Border Human-Centric AI?

 

16 December 2019


With each new advance in machine learning and data science, the fear grows that artificial intelligence (AI) will leave a huge number of citizens behind. It is now critical to design human-centric, ethical AI that generates unbiased outcomes and preserves the foundations of democracy. Of course, this will involve accounting for our cultural differences in order to build cross-border, human-centric AI that still respects local ethical perceptions.

The 2019 Live with AI report aims to dispel the popular myths surrounding AI and offer clear, actionable advice for governments, brands and organisations as the AI-powered future draws closer. Part of this year's work was led by two teams of researchers, from Europe and Asia, who examined and debated the critical issues to consider when designing ethical AI. At a time when the European GDPR has demonstrated its value across multiple countries, governments need to explore how they define ethics and to consider cultural differences when building regulatory frameworks that serve their interests without neglecting human rights and the need to protect human autonomy, privacy, and freedom.

At an event ahead of the Live with AI report’s official publication last month, Professor Luke Kk of Nanyang Technological University, Singapore and Arno Pons, General Manager of the French think tank The Digital New Deal, who both collaborated on this research, raised the importance of protecting state sovereignty and respecting the rights of users.

“AI is only the current visible face of scientific and technological disruption. Big tech companies will encounter the opportunity and temptation to take over numerous fields of progress, whether that be genetics, astrophysics, energy, or all of the above,” said Pons, emphasising that AI is thus a new societal, economic, cultural and political issue we must address without taboo and, perhaps most importantly, without pessimism.

Following this introduction, researchers and board members of the Singapore-based think tank pointed out multiple perspectives that will need to be considered.

First, Graham Matthews, Associate Professor at NTU’s School of Humanities, spoke about the importance of assessing the literary and cultural representation of AI, which helps to shape public perception and is important for garnering acceptance. To many authors, AI looks like the intrusion of numbers into the human sphere. We think of numbers as neutral or objective, but in fact they attract fantasy; think of 'lucky number 7' or the Chinese superstition surrounding the number 4 (四), which is near-homophonous with the Chinese verb 'to die' (死). Matthews argued that it is precisely when we think we are free of ideological bias that we are most susceptible to fantastical thinking. He concluded that while humanities scholars could tell us very little about AI itself, they are very well equipped to analyse and critique cultural perceptions and portrayals of AI.

Second, Hallam Stevens, Associate Director of the NTU Institute of Science and Technology for Humanity, outlined how one of the keys to making AI responsive to, and responsible for, social needs is creating systems and algorithms that are transparent. That is, sufficient information about how an algorithm or system works needs to be publicly available and understandable. If AI systems are allowed to make increasingly important decisions for or about us, then we need to know something about the basis on which those decisions are made. This is critical for fairness, accountability, and democracy. At the moment, some AI systems lack transparency either because they are proprietary or because they are too complex for most people to understand. Stevens explained that some of these problems can be solved by increased education and public knowledge about AI, but ultimately nations may also need to create a body to oversee and regulate algorithmic behaviour on behalf of the public (a Securities and Exchange Commission for AI). In an extreme case, we may decide as a society that the non-transparency of machine learning algorithms is incompatible with democracy, and therefore that they should not be used for certain types of decision-making.
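To make the idea of transparency concrete, consider a minimal sketch in Python (our own illustration, not drawn from the report; the loan-approval features and data are hypothetical). An interpretable model's learned weights can be published and inspected, showing exactly how each input pushes a decision, in contrast with an opaque proprietary system.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical loan-approval features (income in thousands of dollars).
features = ["income_k", "debt_ratio", "years_employed"]

# Toy training data: one row per applicant; label 1 means approved.
X = np.array([
    [60, 0.2, 5],
    [25, 0.7, 1],
    [90, 0.1, 10],
    [30, 0.6, 2],
])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Publishing the learned weights reveals how each feature influences the
# decision -- the kind of inspectability a proprietary black box withholds.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")

A deep neural network trained on the same task would not yield such a readable summary of its decision basis, which is precisely the gap that transparency regulation would have to address.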

Third, Associate Professor Andrés Luco of the NTU School of Humanities discussed the relationship between AI, surveillance, and the human right to privacy. His key message was that the human right to privacy is in danger of being violated by government surveillance regimes that use AI technologies. Luco cited a recent report from the Carnegie Endowment for International Peace, which found that at least 75 of the 176 countries it surveyed are actively using AI technologies for surveillance purposes. In some cases, these surveillance systems are being used invasively to track people’s movements, their faces, and even their ethnicity.

Finally, Assistant Professor Althaf Marsoof of the Nanyang Business School at NTU addressed the issue of AI and legal responsibility. In his presentation, he pointed out that as AI-driven innovation becomes increasingly infused into day-to-day activities, especially those entailing high risk, it is crucial that our existing laws provide the necessary framework for allocating legal responsibility when things go wrong. However, he added that our conventional understanding of the legal principles governing primary and joint liability, in respect of both criminal and civil conduct, does not easily extend to "wrongs" that might be committed by a device or machine running on AI, since AI is posited to have the capacity to make independent decisions that are no longer bound by its developer's original algorithms. Despite these challenges, he highlighted, it is important for the law to "catch up": unless those who build and make use of AI are held accountable in appropriate circumstances, the law will lose its regulatory grip over AI-driven innovations.

Following the debate, Professor Li Haizhou of the Department of Electrical and Computer Engineering at the National University of Singapore wrapped up the round table by emphasising the importance of not overestimating the power of AI, which remains a long way from the general AI that many people speculate about. As Professor Li put it: “A man of wisdom knows what he knows and what he doesn’t know. AI only knows what it knows; it doesn’t know what it doesn’t know. Therefore, it is only a half-complete intelligence. With the advent of deep learning, we have enabled AI to act like humans in speech recognition, face recognition, and chess playing. But we have made little progress in enabling AI to think like humans; that will be the next frontier of AI.”
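Professor Li’s observation can be illustrated with a short sketch (ours, not his; purely a toy example). A standard classifier, shown an input unlike anything it was trained on, still returns a confident answer rather than admitting that it does not know.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Train an ordinary handwritten-digit classifier.
X, y = load_digits(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

# Show it pure random noise -- an input that is not a digit at all.
rng = np.random.default_rng(0)
noise = rng.uniform(0, 16, size=(1, X.shape[1]))

probs = model.predict_proba(noise)[0]
print(f"predicted digit: {probs.argmax()}, confidence: {probs.max():.2f}")
# The model has no way to answer "this is not a digit"; it must assign
# the input to one of the ten classes, often with misleading confidence.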

Leaders from across Europe and Asia may not share the exact same definition of “ethics”. Nevertheless, all Live with AI board members, regardless of geography, share the same ambition, and the same belief that our cultural differences give us a chance to define a universal human need.

Live with AI gathers thought leaders from France and Singapore to lead research projects on the positive impact of AI on our society. SGInnovate is a partner of this initiative.

You may download the latest white paper here.
