Facebook's secret rule book for global political speech

In a glass conference room at its California headquarters, Facebook is taking on the bonfires of hate and misinformation it has helped fuel across the world, one post at a time.

The social network has drawn criticism for undermining democracy and for provoking bloodshed in societies small and large. But for Facebook, it’s also a business problem.

The company, which makes about US$5 billion in profit per quarter, has to show it is serious about removing dangerous content. It must also continue to attract more users from more countries and try to keep them on the site longer.

How can Facebook monitor billions of posts per day in over 100 languages, all without disturbing the endless expansion that is core to its business? The company’s solution: a network of workers using a maze of PowerPoint slides spelling out what’s forbidden.

Every other Tuesday morning, several dozen Facebook employees gather to come up with the rules, hashing out what the site’s two billion users should be allowed to say. The guidelines that emerge from these meetings are sent to 7,500-plus moderators around the world.

The closely held rules are extensive, and they make the company a far more powerful arbiter of global speech than has been publicly recognised or acknowledged by the company itself.

The New York Times was provided with more than 1,400 pages from the rule books by an employee who said he feared the company was exercising too much power, with too little oversight — and making too many mistakes.

An examination of the files revealed numerous gaps, biases and outright errors. As Facebook employees grope for the right answers, they have allowed extremist language to flourish in some countries while censoring mainstream speech in others.

Moderators were once told, for example, to remove fundraising appeals for volcano victims in Indonesia because a co-sponsor of the drive was on Facebook’s internal list of banned groups. In Myanmar, a paperwork error allowed a prominent extremist group, accused of fomenting genocide, to stay on the platform for months. In India, moderators were mistakenly told to take down comments critical of religion.

The Facebook employees who meet to set the guidelines, mostly young engineers and lawyers, try to distill highly complex issues into simple yes-or-no rules. Then the company outsources much of the actual post-by-post moderation to companies that enlist largely unskilled workers, many hired out of call centres.

Those moderators, at times relying on Google Translate, have mere seconds to recall countless rules and apply them to the hundreds of posts that flash across their screens each day.

Moderators express frustration at rules they say don’t always make sense and sometimes require them to leave up posts they fear could lead to violence. Facebook executives say they are working diligently to rid the platform of dangerous posts.

“It’s not our place to correct people’s speech, but we do want to enforce our community standards on our platform,” said Sara Su, a senior engineer on the News Feed. “When you’re in our community, we want to make sure that we’re balancing freedom of expression and safety.”

The guidelines for identifying hate speech, a problem that has bedevilled Facebook, run to 200 jargon-filled, head-spinning pages. Moderators must sort a post into one of three “tiers” of severity. They must bear in mind lists like the six “designated dehumanising comparisons”, among them comparing Jews to rats.

As detailed as the guidelines can be, they are also approximations — best guesses at how to fight extremism or disinformation. And they are leading Facebook to intrude into sensitive political matters the world over, sometimes clumsily.

Increasingly, the decisions on what posts should be barred amount to regulating political speech — and not just on the fringes. In many countries, extremism and the mainstream are blurring.

In the United States, Facebook has banned the Proud Boys, a far-right pro-Trump group. The company also blocked an inflammatory ad, about a caravan of Central American migrants, that was produced by President Donald Trump’s political team.

In the absence of governments or international bodies that can set standards, Facebook is experimenting on its own.

Facebook’s most politically consequential document may be an Excel spreadsheet that names every group and individual the company has quietly barred as a hate figure. Moderators are instructed to remove any post praising, supporting or representing any listed figure.

Countries where Facebook faces government pressure seem to be better covered than those where it does not. Facebook blocks dozens of far-right groups in Germany, where the authorities scrutinise the social network, but only one in neighbouring Austria.

The bans are a kind of shortcut, said Sana Jaffrey, who studies Indonesian politics at the University of Chicago. Asking moderators to look for a banned name or logo is easier than asking them to make judgment calls about when political views are dangerous.

But that means that in much of Asia and the Middle East, Facebook bans hard-line religious groups that represent significant segments of society. Blanket prohibitions, Jaffrey said, amount to Facebook shutting down one side in national debates. And its decisions often skew in favour of governments, which can fine or regulate Facebook.

One hurdle to reining in inflammatory speech on Facebook may be Facebook itself. The platform relies on an algorithm that tends to promote the most provocative content, sometimes of the sort the company says it wants to suppress.

Facebook could blunt that algorithm or slow the company’s expansion into new markets, where it has proved most disruptive. But the social network instills in employees an almost unquestioned faith in their product as a force for good.

At company headquarters, the most fundamental questions of all remain unanswered: What sorts of content lead directly to violence? When does the platform exacerbate social tensions?

Rosa Birch, who leads an internal crisis team, said she and her colleagues had been posing these questions for years. They are making progress, she said, but will probably never have definitive answers. NYT
