Columnists

Proliferating 'news' sites spew AI-generated fake stories

A sensational story about the Israeli prime minister's psychiatrist exploded online, but it was AI-generated, originating on one of hundreds of websites researchers warn are churning out tech-enabled fiction masquerading as news.

Propaganda-spewing websites have relied on armies of writers, but generative artificial intelligence tools offer a cheaper and faster way to fabricate content that is often hard to distinguish from authentic information.

Hundreds of AI-powered sites mimicking news outlets have cropped up in recent months, fuelling an explosion of false narratives about everything from war to politicians that researchers say is stoking alarm in a year of high-stakes elections around the world.

"Israeli prime minister's psychiatrist commits suicide," still tops the list of "popular articles" highlighted on Global Village Space, a Pakistani digital outlet, after it made an online splash in November with baseless claims about a suicide note blaming Benjamin Netanyahu.

A "substantial portion" of the site's content, including this article, appears to be scraped from mainstream sources using AI tools, according to an analysis by NewsGuard, a United States-based research organisation that tracks misinformation.

After scanning the site for error messages specific to content produced by AI chatbots, NewsGuard said it found significant similarities between the yarn about Netanyahu's "psychiatrist" and a fictitious 2010 article on a satirical website.

NewsGuard analyst McKenzie Sadeghi said when she prompted ChatGPT, from Microsoft-backed OpenAI, to rewrite the original article for a general news audience, the result was very similar to the article on Global Village Space.

"The exponential growth in AI-generated news and information sources is alarming because these sites can be perceived by the average user as legitimate, trustworthy sources of information," Sadeghi said.

The fabricated article, which came as Netanyahu presses the war against Hamas militants in the Gaza Strip, ricocheted across social media platforms in multiple languages, including Arabic, Farsi and French.

A handful of sites published obituaries of the fictional psychiatrist.

The falsehood also featured on a television show in Iran, Israel's arch-enemy.

NewsGuard has identified at least 739 AI-generated "news" sites spanning multiple languages that operate with little to no human oversight and come with generic names such as "Ireland Top News".

But even that list is probably "just the low-hanging fruit", said Darren Linvill of Clemson University in the United States.

Linvill is among the university's disinformation experts who found Russian-linked websites mimicking news and pushing Kremlin propaganda about the war in Ukraine ahead of the US presidential election in November.

They include DC Weekly, which NewsGuard said uses AI to rewrite articles from other sources without credit.

This site — which appears to be owned by John Mark Dougan, a former US Marine who fled to Russia — has published a slew of false claims, including that Ukrainian President Volodymyr Zelensky bought two luxury yachts worth millions of dollars with American aid money.

Illustrating the power of AI-led misinformation to influence policy decisions, some US lawmakers echoed the false narrative amid a crucial debate about aid to Ukraine.

"Auto-generated misinformation is likely to be a major part of the 2024 elections," New York University Professor Gary Marcus said.

"Scammers are using (Generative) AI left, right and centre."

The AI-generated content populating websites such as DC Weekly helps "to create a sort of camouflage" that lends more credibility to their false stories penned by humans, Linvill said.

These websites underscore the potential of AI tools — chatbots even more than photo generators and voice cloners — to turbocharge misinformation while eroding trust in traditional media, researchers say.

The revenue model for many of these websites is programmatic advertising, which means top brands may unintentionally end up supporting them. Governments, meanwhile, may find it difficult to clamp down for fear of breaching free speech protections, researchers say.

Linvill said: "I am particularly concerned about its use by for-profit companies. If we don't stop and pay attention, it's just going to further erode the line between reality and fiction that is already so blurry."


* The writers are from Agence France-Presse
