From job killer to killer robot, artificial intelligence (AI) increasingly has come under the spotlight for its potentially adverse impact on human lives. Singapore, however, is advocating the need to hold off judgment whilst the technology continues to evolve.
Whilst not a new concept, AI in recent years has garnered significant interest due to the convergence of three key factors, said S. Iswaran, Singapore's Minister for Communications and Information and Minister-in-charge of Trade Relations.
First is AI's ability to amass, organise, and use large volumes of data. Second, computing power has become available in larger quantities and at lower cost. Together with more robust machine learning capabilities, these factors have fuelled renewed interest in AI, said Iswaran, who spoke at Bloomberg LIVE's Sooner Than You Think forum on Thursday in Singapore.
Elaborating on the country's efforts in this space, the minister said Singapore was focused on verticals that were relevant to the nation and, hence, on developing applications that could be scaled locally, regionally, and worldwide. These domains include healthcare, education, and transport, and its initiatives encompass research and development work, skillsets and training, and working with the private sector to build applications, he said.
In healthcare, for instance, he noted there was scope for data and AI to be tapped to augment physicians' delivery of healthcare and to manage chronic diseases such as diabetes and hypertension. Core to this was the nation's central database of medical records, which provided the data needed to train AI and machine learning systems.
Hon Hsiao-Wuen, corporate vice president of Microsoft Research's Asia and Asia-Pacific research and development group, also pointed to the potential for AI to improve the quality and reduce the cost of healthcare. The technology could be used further to stave the spread of infectious diseases, said Hon, who spoke to ZDNet on the sidelines of the forum.
For instance, he said Microsoft was working with Pfizer in China to use image recognition to more quickly identify and detect fungal infections. Patients typically would need to go to the hospital to seek treatment, but this could lead to a further spread of the infection and put others at risk, since it would take a couple of days before test results could determine the type of disease, he said.
Computer-aided diagnosis of fungal infections could significantly speed up the time needed to identify the illness and eliminate the need for patients to enter a hospital for diagnosis, nipping its potential to spread.
Asked about Singapore's low adoption of AI for diagnosis, he said this could be due to concerns about reliability, responsibility, and liability. And this was not necessarily a bad thing in healthcare. He suggested that hospitals could put in place a layer of human certification to limit false alarms and issues regarding reliability.
According to Iswaran, Singapore differentiated itself by its ability to organise and bring together different industry players "in a manner that's focused and efficient". It also was able to marshal data and resources so these could be used in "a careful way" and aimed at resolving key issues, he said.
Asked how it dealt with concerns about privacy and security as citizen data was shared with the private sector, the minister pointed to the need to find a balance between legitimate concerns around privacy and the legitimate use of data. This was necessary because data could be used to serve a wider public good as well as individuals.
Acknowledging that there were tensions about AI and the use of data, he said Singapore is focused on identifying tools that could be used in both private and public sectors, in addition to identifying relevant safeguards that could be implemented to better assure people with privacy concerns.
Iswaran said this was why trust was key and underpinned everything, whether it involved data or AI. "Ultimately, citizens must feel these initiatives are focused on delivering welfare benefits for them and ensured their data will be protected and afforded due confidentiality," he said.
Hon concurred, echoing the minister's call to build trust. He noted that Microsoft observed six principles that guided how it conducted business with others, and said it was necessary for the industry to encourage people to trust technology in general. These principles included respect for local regulations and sovereignty, accountability, fairness, and safety.
Others, in fact, called for complete trust and for humans to let AI take over some tasks completely.
Nielsen's CEO and chief diversity officer David Kenny pointed to how the research firm had used machines to predict the weather. The AI continued to improve over time and reduced its error rate from 12% to 4% as it got smarter about its predictions, Kenny said.
More interestingly, these projections were able to improve as humans were taken out of the equation. In fact, 75% of the time meteorologists meddled and changed the AI predictions, they made it worse, he added, noting that "false" data was being introduced into the algorithm if humans were allowed to intervene.
Kenny explained: "Machines are actually better at predicting what people are going to watch. What we have to train humans on is to trust the machines, don't override them even if you don't like the answer...and instead be creative. I think jobs will be much more interesting when we let machines do the grunt work [and] we can focus on innovating."
Regulation still necessary despite rapid technology change
But with AI technology evolving so rapidly, discussions have turned to whether regulations are able to keep pace.
Iswaran noted that regardless of whether they could, a framework is necessary to instill confidence that AI can be applied in a responsible and ethical way. Adding that this could take the form of legislation, guidelines, or international norms, he said the absence of such frameworks could end up limiting the potential of AI because it could lead to a sharp pushback from the public.
Regulations, too, are critical to easing security concerns about cross-border data transfer. At the same time, however, these should not curtail the flow of data from which valuable insights could be extracted, the minister said.
On this front, he called for regional and international dialogue on rules to manage cross-border data flows.
Singapore in June introduced a framework designed to resolve challenges businesses typically faced when sharing data assets, such as the need to ensure regulatory compliance, a lack of standardised methodologies, and establishing trust with the parties with whom they shared data. Called the Trusted Data Sharing Framework, it aimed to facilitate data-sharing to drive the development of new products and services, as well as to establish consumer confidence that their data would be protected.
In addition to building frameworks focused on trust, Iswaran said businesses should also observe key principles when building AI products, which he said should be human-centric, explainable, and transparent.
There also were calls for less regulation so that the technology could be given room to evolve. Speaking at a panel discussion, Koh Soo Boon, founder and managing partner of VC firm iGlobe Partners, noted that it was difficult to determine whether good or bad would come out of a developing technology and, with AI still nascent, governments also would not know what rules to apply.
Koh said the industry should be allowed to grow and if problems did emerge later on, it could self-correct or self-regulate to address these issues.
Steve Leonard, founding CEO of SGInnovate, also echoed the need for trust and explainable AI, whilst noting that the technology was still developing and that important concepts would surface along the way. It would be ineffective, then, to attempt to address such issues in advance, Leonard said.
He said societies had to be open to the idea that the technology was "imperfect" and know that some people would "misbehave", but that such issues could be addressed with rules and guidelines. Otherwise, society could miss opportunities to tap AI for solving real-world problems, he noted.
With AI discussions now dividing most opinions into two camps -- the "dystopia and utopia" -- Iswaran ultimately noted that the use of AI would augment human existence and capabilities and enhance human life.
"From our perspective, every technology change and revolution really has led to existing work practices being enhanced, certain practices being eliminated, and new areas being created," Iswaran said. "We're at [the] beginning of the [AI] evolution...so we need to watch this space and hold back on judgement [just yet]."
This is also why closer collaboration between the private and public sectors is necessary so both sides could work together to develop "sensible" guidelines on the use of AI, he said.
Hon also pointed to the need for more collaboration to drive the responsible use of AI, including with competitors and specialists outside the field of IT such as anthropologists and psychologists.