NST Leader: The perils of machine intelligence

THANKS to artificial intelligence (AI), an intelligence revolution is in the making. Because it is still in its early days, it does not yet deserve the upper case that the Industrial Revolution carries.

But it will get there, with all its promises and perils. So far, it has been a story more of perils than promises.

AI has made deepfakes — the manipulation of images, text and voices — possible at a scale and speed never seen before. No one is spared. From prime ministers to corporate leaders, and everyone in between, all are targets. Ask Singapore.

Deepfakes there went off the charts last year, registering an unheard-of spike of more than 500 per cent, according to one media report. Are we safe? That is not the right question. How to be safe is.

The quick answer is to get rid of evil. But evil isn't the invention of AI or even the Internet. It started with the first evil man, whoever that was. Inaction since then has made evil hard to eliminate.

But blaming the past will bring no good to the present or the future. We must act in the here and now. History tells us evil can't be eliminated. It can only be contained.

There are at least three ways to contain the evil of deepfakes. One is passing robust laws. Two is getting platform owners to better police their platforms. Three is getting people to be more discerning about what they are "seeing".

Start with robust laws. Such laws must have three targets: the makers of AI models, the platforms that host them and, of course, those who engage in deepfakes.

This is done to greatest effect by the United States, home to most of the makers of AI models and digital platforms. As for containing the deepfake rogues themselves, that is best done at the global level or, failing that, at least at the regional level.

The European Union is showing the way for laggards like Asean to follow.

But the problem with legislation is that it lacks the imagination of technology. The box must exist before the law can go outside of it, so to speak. Catching up is hard to do.

This is where platform owners, our second point, come in. Being in control of their platforms, they are best placed to police them. They must not wait to be told to dismantle deepfakes. This is not hard for them to do.

They have the technology. Waiting to act must come with punishing penalties. The EU is good at imposing them, a lead the US must follow instead of focusing only on antitrust laws. Breaking up companies doesn't decimate deepfakes.

Going after the rogues they host does. Finally, to seeing and believing, our third way. Awareness isn't just a job for the government.

People, too, must teach themselves how machine manipulation works. "Seeing is believing" isn't always true even in the analogue world. Think trickery. How much more so in the digital world.

AI has inadvertently — or advertently? — made it easier for computers to produce real-looking images and real-sounding voices.

As we learned to avoid trickery in the physical world, we must learn to avoid fakery in the digital world, though deepfakes are of a different order of magnitude.
