Letters

How safe are self-driving vehicles?

IMAGINE you are driving on a trunk road. On your left is a ravine, and right ahead, you see a boy crossing the road. On the other side of the road is an oncoming lorry. What would you do?

You would slam on the brakes and, at the same time, swerve to the right or left to avoid hitting the boy.

You would likely do this with little regard to your own safety. This is a natural reaction for any driver.

A self-driving robotic car, on the other hand, operates using algorithms. Sensors mounted on the car detect the environment and send signals to a computer, where a software algorithm decides what action to take.

That action depends on how the engineer designed and developed the algorithm.

If the engineer decides to give priority to the safety of the driver, the car will drive straight into the boy.

If the engineer feels that the boy’s life is more precious than that of the driver, he will programme the algorithm to swerve while braking.

In other words, the engineer will decide for the driver how the car should react to a situation. The driver has no control over this.
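To make the point concrete, here is a minimal, purely illustrative sketch in Python of how such a preset rule might look. Every name and condition in it is hypothetical and not taken from any real vehicle’s software; it only shows that the "choice" is a fixed rule written by an engineer in advance, not a judgment the car makes for itself.

from dataclasses import dataclass

@dataclass
class Scene:
    pedestrian_ahead: bool   # sensors report a person in the lane
    clear_lane_left: bool    # swerving left means going into the ravine
    clear_lane_right: bool   # swerving right means crossing into the lorry's path

# Set once by the engineer, long before the journey begins (hypothetical flag).
PRIORITISE_DRIVER = True

def choose_action(scene: Scene) -> str:
    """Return the manoeuvre the car will execute for this scene."""
    if not scene.pedestrian_ahead:
        return "continue"
    if scene.clear_lane_left or scene.clear_lane_right:
        return "brake_and_swerve"
    # No safe escape lane: the outcome now follows the engineer's preset rule.
    return "brake_only" if PRIORITISE_DRIVER else "brake_and_swerve"

# With no safe lane on either side, the car simply follows the preset flag.
print(choose_action(Scene(pedestrian_ahead=True,
                          clear_lane_left=False,
                          clear_lane_right=False)))

In this sketch the driver has no say at the moment of crisis: the behaviour was fixed when the engineer set the flag.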

A report published in The Guardian online on May 8 said an Uber self-driving car that killed a woman crossing the street “detected her but decided not to react immediately”.

“The software which decides how it should react was tuned too far in favour of ignoring objects in its path.”

Such statements give us the impression that a machine can decide on its own. This is a fallacy. It is the engineer or software programmer who decides how the car should react in a situation.

Even if artificial intelligence is used in a robotic vehicle to mimic human intelligence, the pain a driver feels when a pedestrian is hit can never be programmed into a machine.

It will take a few decades or even centuries to design a robotic vehicle that feels and behaves like a human.

Even if robotic vehicles now have pedestrian-friendly or driver-friendly drive modes, it is risky to travel in a vehicle that depends on sensors and algorithms, as the failure of a sensor can result in death.

It is time for the authorities to stop profit-minded companies from testing their machines on roads designed for and used by humans.

R. Mani Maran, Nibong Tebal, Penang
