Navigate the AI minefield with urgent laws and solutions, CAP urges

GEORGE TOWN: The Consumers' Association of Penang (CAP) has called for the implementation of laws governing Artificial Intelligence (AI) akin to the European Union's AI Act.

In a statement, CAP president Mohideen Abdul Kader stressed the need for transparency and accountability from AI development companies to ensure consumer confidence in AI's integration into daily life.

"Stringent regulations are essential to safeguard human rights," Mohideen stated, marking World Consumer Rights Day.

World Consumer Rights Day, an initiative accredited by the United Nations and observed annually on March 15 since 1983, focuses this year on "Fair and Responsible AI for Consumers."

Mohideen highlighted the ubiquitous nature of AI in modern life, noting that AI challenges are global, not confined to Malaysia.

He referenced the EU's AI Act, which categorises AI models and applications based on their potential public risk, imposing stricter regulations on higher-risk applications and banning those with an "unacceptable risk."

He noted the Malaysian government's efforts to establish an AI governance framework and code of ethics, slated for completion this year, to form the basis for Malaysian AI regulations.

Mohideen acknowledged that while AI has been around for some time, its recent proliferation in everyday consumer applications has highlighted its integral role in society.

He stressed the importance of identifying and managing the dangers and problems AI poses to the public, citing the rise of scams utilising deepfakes generated by AI.

"Last year the term 'scamdemic' was coined as our country had experienced a major increase in scams. This includes locals being scammed or scammers operating in Malaysia to scam people overseas.

"The latest weapon in these scammers' arsenal is the use of deepfakes that is generated by AI to scam people — these deepfakes include the faked voice of someone the victim knows either asking to borrow money for an emergency or even that they have been kidnapped and to transfer money as ransom into the kidnappers (scammers) bank account.

Mohideen also cautioned against the "hallucination" phenomenon in Large Language Models (LLMs) like ChatGPT, where they may invent answers with false information when unable to provide a valid response.

On this note, he warned of the potential dangers of this phenomenon, especially in educational and research settings.

He also raised concerns about the lack of transparency regarding AI systems' data collection practices and decision-making processes.

Mohideen emphasised consumers' right to understand how AI models operate and influence them.

"Misinformation is a significant concern," he said, highlighting how AI algorithms can spread misinformation based on individuals' search histories, regardless of factual accuracy, potentially influencing public opinion or behaviour.
