Facebook complicit in genocide of Rohingya

By Sigal Samuel
By its own admission, Facebook has a Myanmar problem. Social media users in the country used the platform to incite violence against the Rohingya, a mostly Muslim minority group in the Buddhist-majority country. In 2017, hundreds of thousands were displaced, thousands were killed, and hundreds of villages were burned to the ground. It was, the United Nations said, “a textbook example of ethnic cleansing.”
After facing months of criticism for its role in the crisis, Facebook acknowledged that it had been too slow to respond to inflammatory posts. In 2018, it commissioned an independent assessment of its human rights impact in Myanmar, which found that Facebook hadn’t done enough to prevent users on its platform from inciting violence. The company said it “can and should do more” on that front and agreed “on the value of publishing more data on our enforcement efforts in Myanmar.” And it removed several users with links to the military, including its commander-in-chief.
Now the social media giant is making another effort to prove its remorse: It’s banning four insurgent groups it has classified as “dangerous organizations,” according to a statement issued Tuesday. That could be seen as a welcome sign that the company is committed to being more proactive in Myanmar — the groups it singled out have clashed with the country’s military, displaced thousands of civilians, and refused to sign a ceasefire agreement, Reuters reports. But some local human rights groups and critics are already objecting to Facebook’s choice of these groups, saying it’s arbitrary at best and harmful at worst.
One thing’s for sure: Facebook’s attempt at corrective action is too little, too late for the Rohingya who suffered murder, gang rape, and forced migration partly as a result of the company’s inaction.
Facebook’s role in the Rohingya tragedy has amplified a broader set of worries about the harmful impact social media can have on societies. In particular, the past couple of years have seen mounting concern over the way Facebook gives right-wing politicians and authoritarian governments a leg up: by offering them a vast platform on which to demonize a minority group, it lets them stoke a population’s fears so that it rallies to them for protection.
The Rohingya crisis is another example of how easily social media can be co-opted by bad actors who want to gin up violence against vulnerable people. And it raises questions about how much responsibility tech companies bear for these ill effects.

What went wrong in Myanmar?

The persecution of the Rohingya in Myanmar dates back at least as far as the British colonial era, with bouts of intense violence erupting periodically since then. Buddhist citizens today tend to view the Rohingya as foreigners with separatist aspirations who pose a risk to the country’s ethnic and religious identity. Some fear the perceived threat of “Islamization.”
Tensions boiled over in August 2017, when the Myanmar military cracked down on the whole Rohingya population after some Rohingya insurgents attacked government forces. Military personnel as well as nationalistic Buddhist monks and laypeople had long been spreading anti-Rohingya rumors on Facebook, including fake news about the rape of a Buddhist woman by a Muslim man.
The rumors fueled widespread violence. No one knows the exact death toll, but more than 43,000 Rohingya parents were reported missing and presumed dead, and more than 700,000 Rohingya left Myanmar in the space of a year. Most fled across the border into Bangladesh, where 1 million Rohingya now live in refugee camps.

(Photo caption: In 2018, Facebook banned many users in Myanmar from the platform. In an interview with Vox’s Ezra Klein, Facebook CEO Mark Zuckerberg said his company takes the issues in the country “really seriously.”)
This problem isn’t limited to Myanmar: Facebook has been accused of enabling hate speech and violence in other countries, like Sri Lanka and the Philippines. But it has taken on a special urgency in Myanmar, where military dictatorship and censorship are not-too-distant memories, digital literacy is low, and Facebook is so ubiquitous that people routinely mistake it for the entirety of the internet. Many smartphones there come with Facebook preinstalled, and for many people it is their only news source.
Explaining its decision to ban four groups from the platform on Tuesday, Facebook said there was “clear evidence” that the Arakan Army, the Myanmar National Democratic Alliance Army, the Kachin Independence Army, and the Ta’ang National Liberation Army “have been responsible for attacks against civilians and have engaged in violence in Myanmar, and we want to prevent them from using our services to further inflame tensions on the ground.”
But some civil society groups and human rights experts in the region said Thursday that Facebook is going about this the wrong way. The company wasn’t transparent about why it decided to target these four groups and not the many other ethnic armed organizations in the country — or, for that matter, the Myanmar military as a whole, which carried out the campaign of ethnic cleansing against the Rohingya. As the Guardian reports:
There are more than a dozen recognized ethnic armed organizations in Myanmar, many of which have been engaged in armed struggle against the military for ethnic or national self-determination for decades. By deeming these four armies “dangerous organizations,” Facebook was “tipping the scales” toward the military and providing “a big boost for the government,” one human rights observer told the Guardian.
Facebook hasn’t yet responded on the record to these criticisms, though a spokesperson told the Guardian that the company is focusing on nonstate actors.

Can Facebook be held responsible?

Many of Facebook’s critics, as well as Facebook executives themselves, agree that the company needs to do better in Myanmar — for example, by hiring more reviewers who speak the local languages to assess reports of inflammatory content and by removing hateful accounts more quickly.
But what about the violence it’s already enabled? Should it be held legally liable?
Some wonder whether Facebook could be tried for human rights abuses. As Ingrid Burrington wrote in the Atlantic in late 2017, “While advocates and journalists made compelling moral and ethical arguments for Facebook to take action this past fall, legal liability remained conspicuously absent. There’s no agreed-upon point at which a platform’s automated actions at scale rise to a potentially illegal level of complicity in crimes against humanity.”
A number of studies, observing that social media seems to lend itself to hate-mongering, have tried to explain why. One study out of Germany found that posts evoking strong negative emotions like fear grab the most attention. And since Facebook’s algorithms promote the posts users gravitate toward most, these negative posts are more likely to be pushed up in the news feed. In another German study, researchers found that, as my colleague Zack Beauchamp wrote in Vox, “all other things being equal, areas with higher rates of Facebook use saw higher rates of hate crimes targeting immigrants.”
Facebook, which just marked its 15th birthday, has spent much of the past couple of years trying to rehabilitate an image marred not just by the Rohingya tragedy but also by Russian interference in US elections and privacy scandals like Cambridge Analytica. This week the company dispatched a top executive to Capitol Hill to soothe House and Senate staffers worried about Facebook’s data-sharing partnerships with other companies, Axios reports.
Meanwhile, Angelina Jolie — in her capacity as a special envoy for the UN — visited the Rohingya camps in Bangladesh this week. The UN is expected to launch an appeal for $920 million in aid to help the refugees. “I urge the Myanmar authorities to show the genuine commitment needed to end the cycle of violence,” Jolie said.
Many have long urged Facebook to show the same kind of genuine commitment. Maybe it’s finally starting to listen.
And yet, plenty of Myanmar experts remain deeply skeptical: They’re already worrying about the impact Facebook might have on the country’s elections next year.
