(New York) - Meta’s content moderation policies and systems have increasingly silenced voices in support of Palestine on Instagram and Facebook in the wake of the hostilities between Israeli forces and Palestinian armed groups, Human Rights Watch said in a report released today. The 51-page report, “Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook,” documents a pattern of undue removal and suppression of protected speech, including peaceful expression in support of Palestine and public debate about Palestinian human rights. Human Rights Watch found that the problem stems from flawed Meta policies and their inconsistent and erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals.
“Meta’s censorship of content in support of Palestine adds insult to injury at a time of unspeakable atrocities and repression already stifling Palestinians’ expression,” said Deborah Brown, acting associate technology and human rights director at Human Rights Watch. “Social media is an essential platform for people to bear witness and speak out against abuses, but Meta’s censorship is furthering the erasure of Palestinians’ suffering.”
Human Rights Watch reviewed 1,050 cases of online censorship from over 60 countries. Though not necessarily representative of Meta’s censorship overall, the cases are consistent with years of reporting and advocacy by Palestinian, regional, and international human rights organizations detailing Meta’s censorship of content supporting Palestinians.
Since the Hamas-led attack in Israel on October 7, 2023, which killed 1,200 people, mostly civilians, according to Israeli officials, Israeli attacks in Gaza have killed around 20,000 Palestinians, according to the Gaza Ministry of Health. Unlawful Israeli restrictions on humanitarian aid have contributed to an ongoing humanitarian catastrophe for Gaza’s population of 2.2 million, nearly half of whom are children.
Human Rights Watch identified six key patterns of censorship, each recurring in at least 100 instances: content removals, suspension or deletion of accounts, inability to engage with content, inability to follow or tag accounts, restrictions on the use of features such as Instagram/Facebook Live, and “shadow banning,” a term denoting a significant decrease in the visibility of an individual’s posts, stories, or account without notification. In over 300 cases, users were unable to appeal content or account removal because the appeal mechanism malfunctioned, leaving them with no effective access to a remedy.
In hundreds of the cases documented, Meta invoked its “Dangerous Organizations and Individuals” (DOI) policy, which fully incorporates the United States’ designated lists of “terrorist organizations.” Meta has cited these lists and applied them sweepingly to restrict legitimate speech around hostilities between Israel and Palestinian armed groups.
Meta has also misapplied its policies on violent and graphic content, violence and incitement, hate speech, and nudity and sexual activity. It has inconsistently applied its “newsworthy allowance” policy, removing dozens of pieces of newsworthy content documenting Palestinian injury and death, Human Rights Watch said.
Meta is aware that its enforcement of these policies is flawed. In a 2021 report, Human Rights Watch documented Facebook’s censorship of the discussion of rights issues pertaining to Israel and Palestine and warned that Meta was “silencing many people arbitrarily and without explanation.”
An independent investigation conducted by Business for Social Responsibility and commissioned by Meta found that the company’s content moderation in 2021 “appear[s] to have had an adverse human rights impact on the rights of Palestinian users,” adversely affecting “the ability of Palestinians to share information and insights about their experiences as they occurred.”
In 2022, in response to the investigation’s recommendations as well as guidance from Meta’s Oversight Board, Meta committed to a series of changes to its content moderation policies and their enforcement. Almost two years later, though, Meta has not carried out those commitments, and the company has failed to meet its human rights responsibilities, Human Rights Watch found. Meta’s broken promises have replicated and amplified past patterns of abuse.
Human Rights Watch shared its findings with Meta and solicited Meta’s perspective. In response, Meta cited its human rights responsibility and core human rights principles as guiding its “immediate crisis response measures” since October 7.
To meet its human rights due diligence responsibilities, Meta should align its content moderation policies and practices with international human rights standards, ensuring that decisions to take content down are transparent, consistent, and not overly broad or biased.
Meta should permit protected expression, including about human rights abuses and political movements, on its platforms, Human Rights Watch said. It should begin by overhauling its “dangerous organizations and individuals” policy to make it consistent with international human rights standards. Meta should audit its “newsworthy allowance” policy to ensure that it does not remove content that is in the public interest and should ensure its equitable and non-discriminatory application. It should also conduct due diligence on the human rights impact of the temporary changes to its recommendation algorithms that it introduced in response to the recent hostilities.
“Instead of tired apologies and empty promises, Meta should demonstrate that it is serious about addressing Palestine-related censorship once and for all by taking concrete steps toward transparency and remediation,” Brown said.