Rohingya refugees sue Facebook for $150 billion

  • Burma’s Muslim Minority Accuses Social Network Of Allowing Hate Speech Against Its Members

  • More than 10,000 Rohingya were killed by the Burmese Army in a campaign described as “genocide”

Groups of Rohingya refugees sued Facebook on Monday for $150 billion, arguing that the social network has failed to stop hate speech on its platform against this Muslim minority in Burma, thereby exacerbating violence against its members. The complaint, filed in a California court, argues that the powerful American company’s algorithms promote misinformation and extremist ideas that end up translating into real-world violence.

“Facebook is like a robot programmed with a singular mission: to grow,” says the court document. “The undeniable reality is that Facebook’s growth, fueled by hate, division and disinformation, has left the lives of hundreds of thousands of Rohingya devastated in its wake.” The Rohingya are predominantly Muslim and face open discrimination in Burma, where they are despised as “foreigners” despite having lived in the country for generations.

A campaign backed by the Army, described by the United Nations as a genocide, drove hundreds of thousands of them into Bangladesh in 2017, where they have lived in refugee camps ever since. Nearly 10,000 more were killed by the military. Of the rest, some remain in Burma, where they have no right to citizenship and are subjected to routine violence and discrimination under the military junta that currently controls the country.

Promotion of extremism

The legal complaint argues that Facebook’s algorithms lead the most susceptible users to join increasingly extreme groups, a situation that is “open to being exploited by politicians and autocratic regimes.” Human rights groups frequently argue that Facebook does not do enough to prevent the spread of misinformation.

Its critics further argue that even when the company was alerted to the spread of hate speech on its platform, it failed to act. This lack of oversight, they say, allowed the proliferation of fake news, endangering the lives of certain minorities and undermining democratic foundations in countries such as the United States, where unsubstantiated allegations of voter fraud circulated and escalated after the 2020 elections.

Earlier this year, a massive leak by a company employee gave weight to the accusation that Facebook, whose parent company is now called Meta, knew that the content on its pages could endanger millions of its users, but that its executives prioritized business growth over content moderation. To a large extent, US law shields Facebook from liability for content posted by its users on the platform.

Facebook, which has yet to respond publicly to the lawsuit, has come under increasing pressure in the United States and Europe to eliminate fake news, particularly on electoral and coronavirus issues. The company has partnered with various media outlets to fact-check and remove false posts.
