Expert Series: Should We Harness AI Or Fear It?

2021-04-27

Ever since computer science pioneer Alan Turing first asked “Can machines think?” in 1950, humanity’s concerns about machines capable of acquiring the knowledge they need to evolve have never faded. With potential applications for Artificial Intelligence (AI) now growing at a near-exponential rate, how are we humans coping with the technology’s uncertainties? Prof. Pascale FUNG, a renowned expert in the ethical use of AI, here sheds some light on the technology’s many benefits and risks. Her biggest concern is that ignorance about AI means we may be doing a disservice to both ourselves and the technology.

AI creates rather than eliminates jobs

The explosion in AI adoption is making it easier than ever for industries to automate and scale. While the future may look bewildering to those who lose their jobs – cleaners replaced by sanitizing robots, cashiers superseded by machines, and so on – Prof. Fung sees such redundancies as a cost of progress.

“Like the industrial revolution before it, the digital age is also creating many new jobs. Take application developers as an example. Before the smartphone there was no such job title; now millions worldwide are developing apps,” says Prof. Fung.

“We need to remember that AI is a tool that can improve our quality of life. We need to work with AI, not against it.”

“As with the various conspiracy theories surrounding COVID-19 vaccines, misinformation and rumors about new technologies such as AI are gaining traction via social media. People wildly speculate on things that don’t exist,” adds Prof. Fung. “Even some of my own family members initially believed the conspiracy theories. People opposed to vaccination are risking other people’s lives.”

“We need to understand that a machine itself does not have consciousness,” Prof. Fung goes on. “It’s sad that people today aren’t willing to listen and learn. If they did, they would quickly work out that most of these rumors make no logical sense.”

Prof. Fung says early STEM education is essential for building the logical and critical thinking skills needed to debunk misinformation in today’s ever more hi-tech world.

The danger of fast-tracking the invention-to-application cycle

While dismissing the possibility of AI somehow rendering human input obsolete, Prof. Fung argues that the biggest danger lies in insufficient checks and balances during development.

An AI veteran of 30 years’ standing, Prof. Fung notes that inadequate past awareness of AI meant that researchers like herself only really began thinking about the technology’s ethical application in recent years.

“The normal 10-year lag between invention and application was expected to give researchers time to assess AI’s likely implications. Nowadays, however, that cycle has been greatly shortened, leaving insufficient time for regulation to catch up.”

Prof. Fung cites the large language model GPT-2 as a prime example of the kind of risks that can result. “When humans are not in control of this kind of language model, the machine can generate sexist and racist comments and answer questions incorrectly. If it is used to answer medical questions, with its human-like, expert-sounding responses, this becomes a real danger.”

Prof. Fung says such systems’ mistakes are sure to become even more catastrophic should the misinformation they provide be amplified at scale.

As AI becomes more and more autonomous, Prof. Fung cautions that we must never surrender control of the technology’s production and monitoring.

“Enhanced mechanisms and safety measures are essential to control AI systems’ production and their possible risks. Nowadays people do a PhD just on this new research topic – how to mitigate potential risks in different AI systems – hence, new job opportunities!” says Prof. Fung.
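To make the “humans in the loop” idea concrete, here is a minimal, hypothetical sketch – not code from Prof. Fung or her group – in which an off-the-shelf GPT-2 model, loaded through the Hugging Face transformers pipeline, drafts an answer, and a human reviewer must approve it before it is released:

```python
# Minimal human-in-the-loop sketch (illustration only, not a production safeguard):
# a GPT-2 model drafts an answer, but a person must approve it before release.
from transformers import pipeline

# Load the small, publicly available GPT-2 text-generation model.
generator = pipeline("text-generation", model="gpt2")

def draft_answer(prompt):
    """Let the model draft a response; the output is unfiltered and may be wrong or offensive."""
    outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)
    return outputs[0]["generated_text"]

def human_approved(text):
    """A human reviewer explicitly approves or rejects the draft."""
    print("--- MODEL DRAFT ---")
    print(text)
    return input("Release this answer? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    draft = draft_answer("What should I do about a persistent cough?")
    if human_approved(draft):
        print("Answer released.")
    else:
        print("Answer withheld and sent back for expert review.")
```

The point of the sketch is the gate, not the model: however fluent the draft sounds, nothing reaches the end user without a person signing off.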

Regulatory differences: East vs West

Moving on to the thorny issue of individual markets’ legal and regulatory requirements, Prof. Fung has spotted a fascinating dichotomy between perceptions of AI in the East and the West.

“Western science fiction books and films frequently depict AI as a dystopian force while Asian-originated characters like Doraemon tend to be far more positively portrayed, likeable and cute,” Prof. Fung says.

“Adherents of Eastern traditions such as Buddhism and Shinto believe the universe is harmonious and that humans are not superior to other animals or objects. We place more emphasis on the insignificance of humans compared to Mother Nature.”

“Religion also plays a part, born of the Western Christian belief that humans ‘manage the earth’. For example, in my favorite sci-fi movie Blade Runner, the man-made human replicants are engaged in an existential struggle to kill their creator. The religious undertones are very clear here: AI is trying to play God and dominate humans.”

Due to differences in people’s acceptance of technologies such as AI, both Europe and the US have legislated stricter, more firmly worded regulatory frameworks that stipulate what AI cannot do. Here in Asia, the emphasis is more on shared responsibility, open collaboration and self-governance.

Transforming healthcare with AI

Looking ahead, Prof. Fung anticipates that AI’s wide deployment will pave the way for the rise of precision medicine, with machine learning used to identify possible cures for diseases.

“Since such machines will be able to examine DNA sequences and leverage their findings to tailor better treatments for cancer patients, precision medicine is an area that really does need to be more fully explored. This will require a lot of data for learning machines to crunch,” says Prof. Fung.
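As a toy illustration of the kind of pattern-finding Prof. Fung describes – and emphatically not an actual clinical pipeline – the sketch below trains a simple classifier on synthetic DNA sequences using k-mer counts as features. The randomly generated sequences, the planted “GATTACA” motif and the scikit-learn setup are all illustrative assumptions; real precision medicine would require vastly larger, consented patient datasets and rigorous validation.

```python
# Toy illustration only: classify synthetic DNA sequences using k-mer counts.
# All sequences are randomly generated; no real patient data is involved.
import random

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

random.seed(0)

def random_sequence(length=60, motif=None):
    """Generate a random DNA string, optionally planting a motif for the model to find."""
    seq = "".join(random.choice("ACGT") for _ in range(length))
    if motif:
        pos = random.randrange(length - len(motif))
        seq = seq[:pos] + motif + seq[pos + len(motif):]
    return seq

# Synthetic cohort: "responders" carry the GATTACA motif, "non-responders" do not.
sequences = [random_sequence(motif="GATTACA") for _ in range(200)] + \
            [random_sequence() for _ in range(200)]
labels = [1] * 200 + [0] * 200

# Represent each sequence by counts of its overlapping 4-mers (character 4-grams).
vectorizer = CountVectorizer(analyzer="char", ngram_range=(4, 4))
X = vectorizer.fit_transform(sequences)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```

Even in this toy setting, the model only learns because there is enough labeled data to crunch – which is exactly why Prof. Fung stresses the need for data and patient consent.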

Prof. Fung adds that before effective machine learning can go ahead, stringent monitoring measures must be put in place and more patients will need to consent to the harvesting of their data. While facilitating such developments and applications will require a great deal of work, Prof. Fung remains positive that AI will reshape patient care and benefit both doctors and patients – but only on one condition:

“We must keep humans in the loop.”
