
Designing a Moral Compass for AI

 

Dec 30, 2019


Two experts on AI governance discussed how explainability and transparency are critical for ethical AI use in finance.

With its promise of speed, accuracy and consistency, artificial intelligence (AI) has extended its reach in the finance industry into major practices such as credit approval and risk assessment. In an October 2019 report, the World Economic Forum even predicted that AI would undoubtedly become a “dominant force in the industry”.

However, AI remains poorly understood by most of society, and the report also cautioned that financial institutions, regulators and policymakers seeking to capture the AI advantage are entering into uncharted waters. It urged firms to examine issues such as bias, fairness and explainability, to ensure that risks are reasonably managed for all stakeholders involved.

In November 2018, the Monetary Authority of Singapore (MAS) released a set of principles to promote fairness, ethics, accountability and transparency (FEAT) in the use of AI and data analytics in finance. The document also provides guidance to firms on strengthening internal governance around data management and use.

To promote the FEAT principles, MAS is working with financial industry partners to create a framework that guides financial institutions in the responsible adoption of AI and data analytics. The consortium currently comprises 17 members: MAS, SGInnovate, EY and 14 financial institutions.

Drawing on our networks and experiences, we are hosting an Expert Series that brings together leaders from the AI, financial and regulatory sectors to discuss considerations around the use of AI with members of the consortium. The first session of the series, held last month, comprised two talks followed by a Q&A session.

Getting to the Heart of AI Decisions

Opening the session was Dr Nicolas Chapados, co-founder and chief science officer at Element AI. In his talk, “Raising AI Standards of Practice with Explainability”, he discussed the importance of explainability in AI systems and its challenges from a technical standpoint.

Explainability shows the logical reasoning behind an AI’s decisions, which is crucial to help us understand whether these decisions are reasonable. “AI explainability forges trust between firms and users. It also brings to light issues of bias, for instance, by exposing whether the decisions made about certain groups of people are unfair or wrong,” Dr Chapados said.
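Dr Chapados did not prescribe a specific technique, but one common form such an explanation can take is a per-feature breakdown of a single decision. Below is a minimal sketch in Python, using an entirely hypothetical credit model and synthetic data, of how each input’s contribution to an approval decision might be surfaced:

```python
# A minimal sketch of a per-decision explanation using a simple linear
# credit model. The features, data and model here are hypothetical
# illustrations, not the systems discussed in the talk.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed", "late_payments"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(features)))  # standardised applicant data
y = (X[:, 0] - X[:, 3] + rng.normal(size=500) > 0).astype(int)  # synthetic labels

model = LogisticRegression().fit(X, y)

applicant = X[0]
# For a linear model, coefficient * feature value is each feature's
# contribution to the log-odds of approval: one simple, auditable
# form of explanation.
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {value:+.3f}")
```

For a linear model these contributions are exact; for more complex models, attribution methods such as SHAP approximate the same idea.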

However, there is no universal standard of what makes a good explanation, Dr Chapados pointed out. “We don’t know what differentiates, quantitatively, a good explanation from a bad explanation. We don’t yet have standards, metrics and benchmarks to characterise explainability.”

But perhaps the fact that explanations need to be customised for different audiences could be a useful starting point, he said. For example, an explanation tailored to the designers of an AI system would be vastly more complex and technical than an explanation for general users of that system.

“What this calls for is actually different degrees and levels of explanation and, most importantly, the development of a language in which to communicate explanation,” he explained.

“All of this is to say the road is not paved and there’s still a lot of work to do,” Dr Chapados said. He encouraged firms to develop a new language for users and machines to communicate with each other more effectively.

Laying Down the Ground Rules for Transparency

Shifting gears from the technical considerations of implementing explainability, the session continued with a talk by Professor Simon Chesterman, dean of the Faculty of Law at the National University of Singapore. In his talk, titled “Artificial Intelligence and the Problem of Opacity”, he examined how to improve FEAT compliance from a regulatory perspective.

“When you’re thinking about credit in the financial world, it is not just a question of optimising returns,” Prof Chesterman pointed out. Lending money only to privileged groups might maximise a firm’s profits, but “as a society that is clearly not what we should be doing,” he said. “That’s not the purpose of credit, and the reason for regulation is to change those incentives, making sure you can’t discriminate on the basis of protected categories at least.”

He highlighted the cautionary example of Amazon’s attempt to use AI to screen job applicants. The model had been trained on a decade of hiring data, during which more men than women had been hired, and it consistently favoured male applicants, perpetuating gender bias in the workforce. Amazon eventually scrapped the system. “Basically, if you’ve been biased in the past, then you’ll be biased in the future if you rely on that historical data,” he said.
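The dynamic he describes is easy to reproduce. As a minimal, hypothetical sketch (not Amazon’s actual system), here is a model trained on synthetically skewed hiring labels that learns to prefer one gender even when skill is held equal:

```python
# A minimal sketch of the "biased past, biased future" dynamic: a model
# trained on historically skewed hiring labels reproduces the skew.
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
gender = rng.integers(0, 2, n)  # 0 = female, 1 = male
skill = rng.normal(size=n)      # skill distributed identically across groups
# Historical hiring decisions favoured men regardless of skill:
hired = (skill + 1.5 * gender + rng.normal(size=n) > 1).astype(int)

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two applicants with identical skill but different gender receive very
# different predicted hiring probabilities:
print(model.predict_proba([[0, 0.5], [1, 0.5]])[:, 1])
```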

However, Prof Chesterman also illustrated the other side of the coin: well-designed AI systems can unmask human bias rather than perpetuate it. For instance, a bank manager could deny a loan to an applicant with a good credit score out of bias against the applicant’s gender or ethnicity, but would be unlikely to admit to this. “The good thing about an AI system is you can re-run this decision, do a simulation with different variables, and work out exactly what the problem was,” he said.
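In code, that re-run can be as simple as flipping a single input and scoring the application again. A minimal sketch, assuming a scikit-learn-style model and a binary-encoded protected column (both hypothetical):

```python
# A minimal sketch of the "re-run the decision" idea: hold every input
# fixed, flip only the protected attribute, and see whether the model's
# decision changes. The model interface and column name are hypothetical.
import pandas as pd

def counterfactual_check(model, applicant: pd.DataFrame, protected_col: str):
    """Return (original decision, decision with the protected attribute flipped)."""
    original = model.predict(applicant)[0]
    flipped = applicant.copy()
    flipped[protected_col] = 1 - flipped[protected_col]  # assumes binary encoding
    return original, model.predict(flipped)[0]

# If the two decisions differ, the protected attribute alone changed the
# outcome: exactly the kind of problem a human would rarely admit to.
```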

In short, AI has the potential to become a force for good, so long as care is taken to enhance transparency and reduce opacity. Ensuring “input legitimacy” would be a good starting point, he suggested. “For example, you can have a credit scoring mechanism where the applicant is not allowed to be asked about whether they are a man or woman, or their ethnicity.”
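Enforced in software, input legitimacy amounts to stripping those fields before the model ever sees them. A minimal sketch, with hypothetical column names:

```python
# A minimal sketch of "input legitimacy": protected attributes are
# removed before the model ever sees an application. Column names are
# hypothetical.
import pandas as pd

PROTECTED = ["gender", "ethnicity"]

def legitimate_inputs(applications: pd.DataFrame) -> pd.DataFrame:
    """Drop protected attributes so the model cannot condition on them."""
    return applications.drop(columns=PROTECTED, errors="ignore")
```

One caveat: correlated proxies such as postcode can still leak protected information, which is why input controls are usually paired with output checks like the auditing described below.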

Another way to improve transparency is through “output legitimacy”, by auditing the AI’s decisions to ensure it upholds at least the same ethical standards to which human decisions are held. “For instance, we want to make sure that credit is available to people across the board, for example, by setting one of the priorities to be equal access. Not just equal opportunity, but equity, in terms of outcome,” Prof Chesterman concluded.
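Such an audit can start with something as simple as comparing decision rates across groups. A minimal sketch, with hypothetical data and column names:

```python
# A minimal sketch of an "output legitimacy" audit: after decisions are
# made, compare approval rates across groups. Data and column names are
# hypothetical.
import pandas as pd

def approval_rate_by_group(decisions: pd.DataFrame, group_col: str) -> pd.Series:
    """Approval rate per group; large gaps flag the model for review."""
    return decisions.groupby(group_col)["approved"].mean()

decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
})
print(approval_rate_by_group(decisions, "gender"))
```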

At SGInnovate, we build thought leadership and catalyse conversations around AI and ethics. Be part of our conversations!
