
Don't allow AI use in sentencing

LAST Saturday’s NST Leader, ‘Artificial Intelligence in court’, is timely. Urging caution, it advises weighing the benefits against the risks.

A local newspaper reported recently that AI "will be used" in a pilot project by the courts in Sabah and Sarawak. Chief Judge of Sabah and Sarawak Tan Sri David Wong said that AI would serve "as a guideline" to judicial officers in determining sentences, thereby ensuring "consistency in sentencing". He added that "ground rules" are now being drafted for the use of the AI application.

The AI pilot project will initially be used for two offences – drug possession under section 12(2) of the Dangerous Drugs Act 1952 and rape under section 376 of the Penal Code.

In many jurisdictions, AI is currently being used in document management, e-discovery, e-billing, contract management and case management. AI is also used extensively in court reporting. When used in conjunction with an automatic speech recognition (ASR) system, AI enables court proceedings to be recorded and processed quickly and efficiently.

In Los Angeles, the Superior Court has been using AI to handle traffic cases. The public can now interact with 'Gina', an AI-powered avatar on the court's website, to pay a traffic ticket, register for traffic school or schedule a court date. Installed in 2016, Gina has handled more than 200,000 transactions annually.

Research shows that AI and “other forms of advanced algorithms” have been used in many judicial systems around the world. In the United States (US), predictive algorithms are being used to help reduce the load on the judicial system.

In January 2019, the MIT Technology Review portal stated that, under immense pressure to reduce prison numbers, courtrooms across the US have turned to AI in an attempt "to shuffle defendants through the legal system as efficiently and safely as possible".

In California, authorities have embraced AI in an effort to resolve the critical problems of their overburdened prisons.

In New Jersey, the Public Safety Assessment algorithm assists judges in determining the risk of granting bail to a defendant. AI helps to reduce the costs associated with manual bail assessments.

China today has AI-empowered judges. Beijing has introduced an "internet-based litigation service center", proclaimed as the "first of its kind in the world", featuring an AI judge called Xinhua for "basic repetitive casework".

According to the court's president, Zhang Wen, integrating AI and cloud computing with the litigation service system allows the public to gain the benefits of technological innovation.

On the question of whether AI can make "good decisions", my colleagues say that AI systems and predictive algorithms rely heavily on the type and quality of the data supplied. As such, they can fall victim to the familiar "junk in, junk out" syndrome.

They say that while AI can be used to perform the "legwork" for our judges, it can never completely replace them. There are other issues that must be considered as well. Who will oversee AI judges? Can the decisions of AI judges be challenged, reviewed or appealed?

The World Government Summit of 2018 stated rather pessimistically: "The day when technology will become the judge of good and bad human behavior and assign appropriate punishments still lies some way in the future."

As I see it, the consensus is that AI may be suitable in a supportive role (such as compiling data, evidence, authorities, legal precedents and past sentences), but the power to make the final decision (guilty or not guilty) and impose sentences (prison terms, whipping, fines) should be left to human judges.

Under the EU General Data Protection Regulation (2018), EU residents have “the right not to be subject to a decision based solely on automated processing”.

Finally, it should be remembered that AI can go terribly wrong. In Wisconsin, a defendant was found guilty in connection with a drive-by shooting. During interrogation after his arrest, he gave several answers that were entered into the AI system COMPAS. The trial judge gave the defendant a long prison sentence partly because he was wrongly labelled "high risk" by the risk assessment algorithm.

Coded justice, e-justice or predictive justice may be convenient. But there should be limits to its use.


The writer is a former federal counsel at the Attorney-General’s Chambers, and is deputy chairman of the Kuala Lumpur Foundation to Criminalise War
