

The Promise of Science

Monday, November 11, 2019

Innovations in the fields of artificial intelligence (AI), robotics, and genetic engineering have incredible potential to improve the human condition in ways we never imagined. Today, these tools are being harnessed to combat malnutrition, provide access to clean water, and make entire countries more efficient.

Left unchecked, however, the same technological advances may develop in unsustainable ways, harming the environment or the people they are designed to help.

The 193 member states of the United Nations have committed to achieving 17 Sustainable Development Goals (SDGs) by 2030, all designed to safeguard development and growth. To ensure a safer, fairer world for all, nations and companies must pursue the SDGs, adhere to international agreements, and scrutinise how these new technologies are developed and by whom.

Machine Learning Algorithms Can Save Lives, but Suffer from Inaccurate or Incomplete Datasets

According to Professor Teck-Hua Ho, Tan Chin Tuan Centennial Professor at the National University of Singapore (NUS) and chairman of both AI Singapore and the Singapore Data Science Consortium, AI technology is increasingly being used to save time and make processes more efficient and safer.

“Take traffic, for example,” he says. “In Singapore [and around the world], government agencies have collected extensive amounts of historical traffic data. These [are being] used to train AI models to learn the patterns and behaviours of commuters, creating a safe, reliable and efficient commute experience for everyone.”

AI also has exceptionally promising opportunities for the healthcare industry. “Healthcare tends to be highly specialised, and hence siloed,” says Homer Pien, Senior VP and Chief Scientific Officer at Philips Healthcare. “We are excited about the ability to use AI to bridge those silos – for example, combining radiology, anatomic pathology, molecular pathology, clinical labs, and clinical history to render an integrated diagnosis that provides a holistic view of the patient.”

He believes there are many significant applications of AI to make the healthcare industry more efficient and successful:

  • To improve data acquisition – for example, by improving the quality of a CT image
  • For clinical decision support – to improve the diagnostic capability of the data, such as identifying suspicious lung lesions in a chest CT scan
  • In workflows – by gathering relevant information automatically
  • To improve operational efficiency and minimise waiting times

Algorithms Aren’t Infallible (Yet)

Limited datasets or faulty assumptions might amplify existing societal stereotypes, causing AI algorithms to develop biases or reach faulty conclusions.

For example, Amazon’s attempt at a resume screening system suffered from major gender bias—because most of the data it analysed were from male employees, the algorithm taught itself that male candidates were preferable, to the point of actually penalising resumes that included the word “women”.
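The mechanism behind this kind of bias can be sketched with a toy example. All of the data and the scoring scheme below are synthetic and hypothetical, not Amazon's actual system: a naive model that learns word weights from historical hiring outcomes will penalise any token correlated with past rejections, even when that token is merely a proxy for gender.

```python
from collections import Counter

# Synthetic historical hiring data. Because past hires were mostly men,
# the token "womens" appears only in rejected resumes.
hired = ["led team shipped product", "built scalable backend systems"]
rejected = ["captain womens chess club led team", "womens coding society member"]

def word_scores(hired, rejected):
    """Naive per-word weight from historical outcomes, with +1 smoothing.
    Positive = associated with hiring; negative = associated with rejection."""
    pos = Counter(w for r in hired for w in r.split())
    neg = Counter(w for r in rejected for w in r.split())
    return {w: (pos[w] + 1) / (pos[w] + neg[w] + 2) - 0.5
            for w in set(pos) | set(neg)}

def score_resume(text, scores):
    return sum(scores.get(w, 0.0) for w in text.split())

scores = word_scores(hired, rejected)
a = score_resume("led team built scalable systems", scores)
b = score_resume("led team built scalable systems womens chess club", scores)
# b scores lower than a: the second, otherwise-identical resume is penalised
# purely for gendered tokens -- the model learned a proxy for gender, not merit.
```

Real systems use far richer models, but the failure mode is the same: any feature correlated with a historically skewed outcome becomes a learned penalty.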

A government seeking to create the strongest, most accurate social grading algorithm could theoretically require its citizens to share their data with a central authority, says acclaimed writer Yuval Noah Harari, disregarding personal liberties in favour of a constant surveillance system.

How do we ensure that an AI will do what we really want, without harming humans or discriminating between them in a misguided attempt to do what its designer requested? Who sets the values, and who gets to control the algorithms?

Robotics Unlock New Potential in Healthcare, and Our Relationships with Them May Become More Personal

Robots have been used in the healthcare industry since 2000—many clinical robotic surgical systems include a camera arm for high-definition, magnified, 3D views and robotic arms that can control small surgical tools with high precision. 

They are also increasingly being integrated into wearables. Spyder, a robotic wearable from WEB Biotechnology, is the world’s first medical-grade electrocardiography (ECG) wearable that can transmit data continuously through a smartphone to a cloud database. Artificial intelligence is then applied to the data to assess and analyse ECG rhythm abnormalities in real time, at any time and any location.

Another Singapore-based company, Biofourmis, uses wearable biosensors to predict exacerbation of pre-existing medical conditions before a critical event occurs. The tool’s diagnostic precision and early intervention result in improved health outcomes, lowering healthcare costs.

The Changing Relationship between Humans and Robots

Because robots are more flexible, more precise, and sturdier than humans, they have been used for space travel, deep-sea exploration, warzone navigation, and other areas where human lives might be at risk. 

But robots are increasingly interacting directly with humans, and a new discipline—Human-Robot Interaction—has emerged to deal with the ethical and political issues arising from this new relationship.

Some believe that increasing dependence on robots will lead to ruin. One such example of overreliance on robotic algorithms was the 2018 Lion Air jetliner crash that killed 189 people. Additionally, a study on the effects of automation at work showed that even very skilled people tended to develop overreliance on computers, falling victim to what’s called “automation complacency.” As robots rise, what place might humans have in the future?

This reliance—especially in stressful situations—may eventually result in emotional codependency. Dr Julie Carpenter, a leading expert on human-robot social interaction, shared, “In the next 100 years, human-robot emotional attachment is something we will negotiate and discuss a great deal. We can all imagine scenarios where participating socially or emotionally toward AI/robots can be considered extreme.”

Could humans one day be over-attached to or overtrusting of robots? Who will determine which human-robot interactions are “appropriate”, and which are not? Though robots are currently considered revolutionary, might they lead to harmful situations we never considered? Several decades ago, plastics were considered a planet-saving alternative to paper – few imagined we would eventually be faced with a global plastic pollution crisis.

The Debate Surrounding Genetic Engineering Continues

Genetic modification has been done indirectly for thousands of years through selective breeding of plants and animals, even before our ancestors knew what genes were. 

Now scientists are able to change or breed for desirable traits in faster and more precise ways. Genetically modified crops are more productive and more nutritious, and they can be designed to withstand harsh environmental conditions.

Genetic modification and analysis also have promising potential for humans. Singapore HealthTech startups like Nalagenetics are trying to develop genetic tests that could make precision medicine available for all. These pharmacogenetic tests would identify which medications are most likely to cause adverse drug reactions in individuals and provide recommendations for safer courses of treatment.

Additionally, scientists at the Wake Forest Institute for Regenerative Medicine have used human cells to grow muscles, blood vessels, and even urinary bladders. These lab-grown bladders have been implanted into children and young adults born with defective bladders—a condition that can cause severe kidney damage—and should they work properly, the treatment may save thousands from a lifetime of dialysis.

How Much, If Any, Genetic Modification Should Be Done?

Some consumers feel that modern techniques for genetic modification are unnatural, and thus untrustworthy. Could an inserted gene have effects that we are unaware of? Could it upset the balance of existing genes?

Dr He Jiankui shocked the world when he announced in November 2018 that he had performed CRISPR gene-editing on twin girls. He justified his germline intervention by noting that the twins’ father carried HIV—he wished only to prevent potential HIV infection.

But critics fear that irresponsible genetic engineering may result in unprecedented dangers or developments that only manifest several generations later.

How Can We Ensure That Progress is Sustainable?

From addressing world hunger to saving energy and increasing the quality and longevity of life—there are incredible opportunities for technology to drastically improve the human condition. 

At the same time, it is clear that these technological advances need to be managed correctly. Progress in these fields should be supported and, more importantly, guided by independent watchdog groups, governments, and all stakeholders, including researchers, ethicists, policymakers, patient groups, and representatives from science and medical academies and organisations worldwide.

There are several ways to ensure that we continue on the right path towards sustainable progress.

Create a Central Set of Rules through Open Discourse

“We need to lay out a strong framework for AI governance and ethics, and to faithfully abide by it,” says Professor Ho. “I’m proud to say that Singapore is one of the first countries in the world to do this. Early this year, IMDA launched a Model AI Governance Framework [...] We are adopting this framework for all our projects at AI Singapore.” 

This living document is the first in Asia to provide detailed, readily implementable guidance to private sector organisations.

International guidelines are already present in other fields as well—statements and discussions fostered by the International Summit on Human Genome Editing set rules for what is and is not acceptable in the field of genetic engineering. Governments and stakeholders must pay attention to these findings, follow them, and stand firmly against those who breach ethical agreements. 

“At Philips, we have developed policies along these fronts,” says Pien. “We follow these policies in our research and development efforts, work with national organisations to adopt these guidelines and policies as standards, and promote adherence to these standards.”

Decentralise Technology

Writer Yuval Noah Harari has already warned us of the dangers that AI and big data pose to democracies and individual freedoms. He suggests that the best solution is to figure out who should own what data. 

Does the data collected about a person’s DNA, brain, shopping preferences, and entire life belong to them, to a corporation, to the government, or to an entire human collective?

“The best contribution you can make,” he writes, “is to find ways to prevent too much data from being concentrated in too few hands, and also find ways to keep distributed data processing more efficient than centralised data processing.”

Protect Individuals and Address Bias

Professor Ho adds that we “must strictly protect each individual’s privacy in the development and deployment of AI systems”. Vendors must be explicit about who will use the data and in what capacities.  

This is particularly relevant in the field of healthcare: according to Pien, we must also develop protocols for the protection of personal health data using robust security procedures.

Lastly, vendors must be highly cognisant of the risk of bias in AI algorithms and provide assurance that this risk is minimised. One way of doing this is to draw on more diverse sources of data and perform more rigorous testing.
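One simple form of such testing is disaggregated evaluation: measuring a model's accuracy separately for each demographic subgroup and flagging large gaps between them. A minimal sketch, using made-up evaluation records and an arbitrary tolerance:

```python
# Hypothetical evaluation records: (subgroup, predicted_label, true_label).
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

def accuracy_by_group(records):
    """Per-subgroup accuracy: fraction of records where prediction == truth."""
    totals, correct = {}, {}
    for group, pred, true in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == true)
    return {g: correct[g] / totals[g] for g in totals}

acc = accuracy_by_group(records)
gap = max(acc.values()) - min(acc.values())
# Flag a potential bias problem if the best and worst subgroups differ
# by more than some tolerance (0.1 here is an arbitrary threshold).
if gap > 0.1:
    print(f"Accuracy gap of {gap:.2f} across subgroups: {acc}")
```

A model that looks accurate overall can still perform far worse on an under-represented subgroup; reporting per-group metrics makes that gap visible before deployment.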

Involve, Empower and Educate Stakeholders

To combat fear-mongering and propaganda, governments and companies must develop communication strategies that can promote public awareness and stimulate well-informed discussions on technological advances.

“It is also important to continuously train and up-skill our workforce to keep up with advancements in technology,” says Professor Ho. “This is how we can properly reap the benefits AI and robotics will bring to the economy.”

He believes it will be important to create comfort and encourage people to embrace AI solutions as a way of augmenting, not replacing humans. 

“There still exists a perception that AI will take over people’s jobs, and this is true—for jobs that are 3D, or dull, dirty, and dangerous. These are jobs people do not enjoy doing in the first place,” Professor Ho adds. 

Some jobs will be replaced, yes—but the World Economic Forum estimates that 133 million new job roles will arise, requiring uniquely human creativity, judgement, and problem-solving.

“AI and robotics will take over many mundane and repetitive tasks, and this will free up people to engage in creative, cognitive and social interactions and activities,” he adds. “We should expect richer, more exciting roles for people. One day, these technologies will be our assistants, companions, and co-workers.”

At SGInnovate, we host the Deep Tech Summit yearly, bringing together leading minds in Deep Tech to exchange ideas, collaborate, and build a strong and sustainable future economy.

Be part of our conversations at the Deep Tech Summit, where Prof Ho and Dr Pien will be speaking.

Topics: AI / Machine Learning / Deep Learning, Data Science / Data Analytics, MedTech / HealthTech / BioTech, Startup and Corporate Open Innovation, Sustainability
Industry: Built Environment (USS)
