
AI code should cover systems that pose a high risk to humans

Artificial intelligence (AI) promises immense benefits, from ChatGPT diagnostics and robotic surgery in healthcare, to customising education to narrow the literacy gap, to tackling climate change and eliminating hunger.

While AI is a force for good, there is a risk that it can be turned to evil ends. Look at the massive disinformation spewed to sway our opinions on elections or on what to buy.

Witness, too, the cybercrimes committed through deepfakes of voices and faces.

The 1984 movie The Terminator and its sequels evoke fears of the end of mankind at the hands of machines gone rogue.

The late physicist Stephen Hawking worried that AI would outsmart humans.

Indeed, it has. In 1997, Garry Kasparov, the world chess champion, lost to Deep Blue, an IBM supercomputer.

In 2017, AlphaGo, an AI program developed by Google's DeepMind, beat Ke Jie, the world champion in Go, an ancient Chinese board game with more possible positions than there are atoms in the observable universe.

Scarier still is the claim made in 2022 by Blake Lemoine, a Google engineer, that the AI he was working with had become sentient, that is, it had gained emotional intelligence and consciousness.

Although his claim was probably false, and Lemoine was subsequently sacked, the episode shows how a seemingly intelligent machine can persuade a human strongly enough to cost him a lucrative job.

Scale up this problem and we could have AI misleading humans to disastrous effect.

Given this potential calamity, Yuval Noah Harari, the Israeli historian, argues: "We have just encountered an alien intelligence, we don't know much about it, except that it might destroy our civilisation… We must regulate AI before it regulates us."

Many tech luminaries share a similar sentiment.

Sundar Pichai, CEO of Google, for example, has called for an international agency to oversee AI development. And there is growing consensus across the world for such an agency.

Last year, the Centre for the Governance of AI at the University of Oxford undertook a global poll. Of the 13,000 people in 11 countries who responded, 91 per cent agreed that AI needed to be supervised.

And there are precedents for such an international regulator. The International Atomic Energy Agency regulates nuclear energy for peaceful purposes, while the International Civil Aviation Organisation strives to keep air travel safe.

Similarly, the US Food and Drug Administration ensures that only approved drugs are marketed.

Many countries are developing regulations to oversee the responsible development of AI. Malaysia is following suit.

The Science, Technology and Innovation Ministry hopes to release a code of ethics on AI next month.

In this endeavour, the ministry should be mindful of the following concerns:

FIRST, in its use and deployment, an AI model should uphold the principle of fairness, that is, it should be inclusive, free of bias (which is invariably imported from the datasets the model has ingested) and should not exacerbate inequities in society; and,

SECOND, the government should seek a balance between innovation and accountability in AI regulation.

It should spare low-risk systems from oversight and concentrate on high-risk ones, that is, those that could harm humans and their rights.

This would mean holding the designers and deployers of such models responsible for the consequences of operating them.


* The writer is a former public servant and academic and a columnist of this newspaper for over a decade
