Summary
Meta’s policies and practices have been silencing voices in support of Palestine and Palestinian human rights on Instagram and Facebook in a wave of heightened censorship of social media amid the hostilities between Israeli forces and Palestinian armed groups that began on October 7, 2023. This systemic online censorship has risen against the backdrop of unprecedented violence, including an estimated 1,200 people killed in Israel, largely in the Hamas-led attack on October 7, and over 18,000 Palestinians killed as of December 14, largely as a result of intense Israeli bombardment.
Between October and November 2023, Human Rights Watch documented over 1,050 takedowns and other suppression of content on Instagram and Facebook that had been posted by Palestinians and their supporters, including about human rights abuses. Human Rights Watch publicly solicited cases of any type of online censorship and of any viewpoint related to Israel and Palestine. Of the 1,050 cases reviewed for this report, 1,049 involved peaceful content in support of Palestine that was censored or otherwise unduly suppressed, while one case involved removal of content in support of Israel. The documented cases include content originating from over 60 countries around the world, primarily in English, all expressing peaceful support for Palestine in diverse ways. This distribution of cases does not necessarily reflect the overall distribution of censorship. Hundreds of people continued to report censorship after Human Rights Watch completed its analysis for this report, meaning that the total number of cases Human Rights Watch received greatly exceeded 1,050.
Human Rights Watch found that the censorship of content related to Palestine on Instagram and Facebook is systemic and global. Meta’s inconsistent enforcement of its own policies led to the erroneous removal of content about Palestine. While this appears to be the biggest wave of suppression of content about Palestine to date, Meta, the parent company of Facebook and Instagram, has a well-documented record of overbroad crackdowns on content related to Palestine. For years, Meta has apologized for such overreach and promised to address it. In this context, Human Rights Watch found that Meta’s conduct fails to meet its human rights due diligence responsibilities. Despite the censorship documented in this report, Meta allows a significant amount of pro-Palestinian expression and denunciations of Israeli government policies. This does not, however, excuse its undue restrictions on peaceful content in support of Palestine and Palestinians, which are contrary to the universal rights to freedom of expression and access to information.
This report builds on and complements years of research, documentation, and advocacy by Palestinian, regional, and international human rights and digital rights organizations, in particular 7amleh, the Arab Center for the Advancement of Social Media, and Access Now.
In reviewing the evidence and context associated with each reported case, Human Rights Watch identified six key patterns of undue censorship, each recurring at least 100 times, including 1) removal of posts, stories, and comments; 2) suspension or permanent disabling of accounts; 3) restrictions on the ability to engage with content—such as liking, commenting, sharing, and reposting on stories—for a specific period, ranging from 24 hours to three months; 4) restrictions on the ability to follow or tag other accounts; 5) restrictions on the use of certain features, such as Instagram/Facebook Live, monetization, and recommendation of accounts to non-followers; and 6) “shadow banning,” the significant decrease in the visibility of an individual’s posts, stories, or account, without notification, due to a reduction in the distribution or reach of content or disabling of searches for accounts.
In addition, dozens of users reported being unable to repost, like, or comment on Human Rights Watch’s post calling for evidence of online censorship, which was flagged as “spam.” When Instagram users posted comments about the call for censorship documentation that included an email address for sending Human Rights Watch evidence, Instagram removed the comments, citing a violation of its Community Guidelines.
Human Rights Watch’s analysis of the cases suggests four underlying, systemic factors that contributed to the censorship:
- Flaws in Meta policies, principally its Dangerous Organizations and Individuals (DOI) policy, which bars organizations or individuals “that proclaim a violent mission or are engaged in violence” from its platforms. Understandably, the policy prohibits incitement to violence. However, it also contains sweeping bans on vague categories of speech, such as “praise” and “support” of “dangerous organizations,” categories that Meta defines largely by relying on the United States government’s designated lists of terrorist organizations. The US list includes political movements that have armed wings, such as Hamas and the Popular Front for the Liberation of Palestine. Meta’s enforcement of this policy effectively bans many posts that endorse major Palestinian political movements and quells the discussion around Israel and Palestine;
- Inconsistent and opaque application of Meta policies, in particular on exceptions for newsworthy content, that is, content that Meta allows to remain visible in the public interest even if it otherwise violates Meta’s policies;
- Apparent deference to requests by governments for content removals, such as requests by Israel’s Cyber Unit and other countries’ internet referral units to remove content; and
- Heavy reliance on automated tools to moderate, remove, or translate Palestine-related content.
In addition, in over 300 cases documented by Human Rights Watch, users reported and provided evidence of being unable to appeal the restrictions on their accounts, which left them unable to report the platform’s possible errors and without any access to an effective remedy.
Meta has long been on notice that its policies have resulted in the silencing of Palestinian voices and their supporters on its platforms. The evidence of censorship documented in this report stems from the same concerns that human and digital rights organizations raised on previous occasions, such as in 2021, when the planned takeovers by Israeli authorities of Palestinian homes in the Sheikh Jarrah neighborhood of occupied East Jerusalem triggered protests and violence along with censorship of pro-Palestine content on Facebook and Instagram. In a 2021 report, Human Rights Watch documented Facebook’s censorship of the discussion of rights issues pertaining to Israel and Palestine and warned that Meta was “silencing many people arbitrarily and without explanation, replicating online some of the same power imbalances and rights abuses that we see on the ground.”
In response to years of digital and human rights organizations calling for an independent review of Meta’s content moderation policies and a 2021 recommendation from Meta’s Oversight Board—an external body created by Meta to review appeals of content moderation decisions and to provide non-binding policy guidance—Meta commissioned Business for Social Responsibility (BSR), an independent entity, to investigate whether Facebook had applied its content moderation in Arabic and Hebrew, including its use of automation, without bias. In September 2022, BSR published “Human Rights Due Diligence of Meta’s Impacts in Israel and Palestine in May 2021,” which found that Meta’s actions “appear to have had an adverse human rights impact…on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.”
Based on recommendations from the Oversight Board, the BSR report, and engagement with civil society over the years, Meta made several commitments to addressing concerns around Palestine-related censorship. However, Meta’s practices during the hostilities that erupted in October 2023 show that the company has not delivered on the promises it made two years ago. As this report demonstrates, the problem has only grown more acute.
Under the United Nations Guiding Principles on Business and Human Rights (UNGPs), companies have a responsibility to avoid infringing on human rights, to identify and address the human rights impacts of their operations, and to provide meaningful access to a remedy to those whose rights they have abused. For social media companies, including Meta, this responsibility includes aligning their content moderation policies and practices with international human rights standards, ensuring that decisions to take down content are transparent and not overly broad or biased, and enforcing their policies consistently.
Meta should permit protected expression, including about human rights abuses and political movements, on its platforms. It should begin by overhauling its Dangerous Organizations and Individuals policy so that it comports with international human rights standards. Meta should also audit its enforcement of its “newsworthy allowances,” to ensure that these are being applied in an effective, equitable, and non-discriminatory manner.
The company should improve transparency around requests by governments’ internet referral units, including Israel’s Cyber Unit, to remove content “voluntarily”—that is, without a court or administrative order to do so—and around its use of automation and machine learning algorithms to moderate or translate Palestine-related content. It should carry out due diligence on the human rights impact of the temporary changes to its recommendation algorithms that it introduced in response to the hostilities between Israel and Hamas since October 7. Meta should also take urgent steps to work with civil society to set targets for the implementation of its outstanding commitments to address overreach in its suppression of Palestine-related content.
Methodology
Human Rights Watch conducted the research for this report between October and November 2023. In October, Human Rights Watch published a call for evidence of online censorship—used here and in other Human Rights Watch reporting in its colloquial sense of improper limitations on or suppression of free expression—and suppression of content related to Israel and Palestine on social media since October 7, which we posted in English, Arabic, and Hebrew from the main Human Rights Watch accounts on Instagram, X (formerly known as Twitter), and TikTok.[1] Human Rights Watch attempted to solicit information from its entire global audience.
Human Rights Watch requested the following information be sent via email, to an address dedicated to this research, from social media users who reported experiencing censorship: screenshots of the original content, the relevant social media platform, the date and country from which the content was posted, the form of censorship experienced (removal, “shadow ban,” disabling of features, inability to engage with content, etc.), the notification from the relevant platform (if any), prior engagement figures (in case of shadow banning), the account URL, appeal status (if any), and any other relevant information. In addition to the cases we received via solicitation, people spontaneously sent us cases, and we identified several additional publicly available cases for inclusion.
Human Rights Watch solicited cases of any type of online censorship and of any type of viewpoint related to Israel and Palestine. Of the 1,050 cases reviewed for this report, 1,049 involved online censorship and suppression of content in support of Palestine, while one contained an example of removal of content in support of Israel.[2] This distribution of cases does not necessarily reflect the overall distribution of censorship.
Human Rights Watch’s internal data shows that the call for evidence posted on social media reached audiences across the globe, including in Israel. Most of the content that Human Rights Watch received was in English[3] and originated from the following countries and territories: Antigua and Barbuda, Australia, Austria, Bahrain, Bangladesh, Belgium, Bolivia, Bosnia, Brazil, Brunei, Canada, Congo, Croatia, Denmark, Egypt, Finland, France, Germany, Ghana, India, Indonesia, Ireland, Israel, Italy, Jordan, Kenya, Kuwait, Lebanon, Libya, Lithuania, Malaysia, Mexico, Netherlands, New Zealand, Norway, Oman, Pakistan, Palestine, Panama, Peru, Portugal, Puerto Rico, Qatar, Romania, Singapore, South Africa, South Korea, Spain, Sri Lanka, Sweden, Switzerland, Thailand, Trinidad, Tunisia, Türkiye, the United Kingdom, and the United States.
The researchers reviewed all 1,285 reports of online censorship received via email by November 28, 2023, either in response to our solicitation or spontaneously submitted. We excluded cases in which there was insufficient evidence to substantiate the claim of censorship or that did not include content about Israel or Palestine. We also screened evidence for any speech that could be considered incitement to violence, discrimination, or hostility by evaluating the content of the post, the context around the post (other comments, media, etc.), additional information provided by the person who reported the censorship, and notifications from Meta. The researchers used a combination of evidence provided by the user, including screenshots and background material in the email, and publicly available information to assess whether the claim of unjustified restrictions on their content or account by Meta was substantiated. If the researchers did not have enough information to fully assess the context of the post and confirm that the content was peaceful support for Palestine or Palestinians, we excluded the case.
This analysis identified a dataset of 1,050 cases of censorship, that is, the removal or suppression of protected expression. This dataset understates the volume of censorship that was reported to us, as hundreds of people continued to report instances of censorship after our November 28 cutoff. At the time of writing, we had received a total of 1,736 reports. While these additional cases are not included in this report’s analysis, a review of them indicates hundreds more instances in which support for Palestine or Palestinians was censored. This distribution of cases does not necessarily reflect the overall distribution of censorship.
Most reports received and evidence documented by Human Rights Watch concerned postings on Instagram and Facebook, with fewer instances reported about X, TikTok, and other platforms. Meta’s platforms have had high usage rates, both historically and during the hostilities in Israel and Palestine since October 7. As of 2023, Facebook and Instagram had the highest usage rates of any platform, with over 3 billion and over 2.3 billion monthly active users respectively, compared to platforms such as X (close to 400 million), Telegram (over 800 million), and TikTok (over 1 billion).[4]
This report is an analysis of the verified cases of content removal we received or documented and is not a global comparative analysis of overall censorship of political statements and viewpoints. The trends identified in these cases are not intended to reflect the general distribution of censorship across social media platforms. The findings of this report, namely that Meta’s censorship primarily suppressed protected expression in support of Palestine or Palestinians on Instagram and Facebook, pertain to trends only within these 1,050 cases.
The researchers anonymized all the information social media users shared with Human Rights Watch, and assured the people who reported their experiences that none of their information would be shared or published without their explicit and informed consent. None of the participants in the research received any compensation.
Human Rights Watch wrote to Meta on November 15, 2023, to share the findings of our research and to solicit Meta’s perspective. Meta’s response is reflected in this report. Human Rights Watch also wrote to the Cyber Unit at Israel’s Office of the State Attorney on November 22, to request information on the unit’s requests to social media companies since October 7 and its justification for making such requests. At the time of writing, the Cyber Unit had not responded. All letters and responses are included in full in this report’s annex.
The research included consultations with several human rights and digital rights organizations including 7amleh, Access Now, and Amnesty International.
I. Background
Political Context: The Broader Environment for Censorship
Palestinians today are facing unprecedented levels of violence and repression. The Israeli military’s current operations in Gaza began following an unprecedented Hamas-led attack on Israel on October 7, 2023, in which an estimated 1,200 people were killed and more than 200 people were taken hostage, according to Israeli authorities.[5][6] As of December 14, 2023, around 18,700 Palestinians had been killed in Gaza, including around 7,700 children, according to authorities in Gaza.[7] Israel cut off essential services to Gaza and prevented the entry of all but a trickle of aid.[8]
The extreme violence and dire humanitarian situation have made it harder to seek and impart information. According to preliminary investigations by the Committee to Protect Journalists (CPJ), as of December 17, 2023, at least 64 journalists and media workers were confirmed dead: 57 Palestinian, 4 Israeli, and 3 Lebanese.[9] CPJ said that the first month of the hostilities in Israel and Gaza marked “the deadliest month for journalists” since it began documenting journalist fatalities in 1992.[10] Additionally, repeated and prolonged communications blackouts in Gaza have impeded access to reliable and lifesaving information, as well as the ability to document and share evidence of human rights abuses.[11]
Against this backdrop, the broader environment for free expression about Palestine is under increasing pressure. While the focus of this report is censorship of social media content, online censorship does not exist in a vacuum. On November 23, 2023, United Nations experts issued a statement expressing alarm at a worldwide wave of attacks, reprisals, criminalization, and sanctions against those who publicly express solidarity with the victims of the hostilities between Israeli forces and Palestinian armed groups.[12] The experts noted that artists, academics, journalists, activists, and athletes have faced particularly harsh consequences and reprisals from states and private actors because of their prominent roles and visibility.[13] Protecting free expression on issues related to Israel and Palestine is especially important considering the shrinking space for discussion.[14]
On November 8, 2023, the Israeli Knesset passed an amendment to the country’s Counter-Terrorism Law of 2016 that makes the “consumption of terrorist materials” a criminal offense. Adalah, an Israeli human rights organization, criticized the law as “[o]ne of the most intrusive and draconian legislative measures ever passed by the Israeli Knesset,” saying it “invades the realm of personal thoughts and beliefs and significantly amplifies state surveillance of social media use.”[15] Even before the amendment’s passage, media reported on October 17, 2023, that Israeli police said at least 170 Palestinians had been arrested or brought in for questioning on the basis of online expression since the Hamas attack.[16] Adalah documented 251 instances between October 7 and November 13, 2023, in which people were issued a warning, interrogated, or, in at least 132 cases, arrested for activity that, as Adalah classifies it, largely falls within the right to freedom of expression. The organization described these as “widespread and coordinated” efforts to repress the expression of dissent against the Israeli government’s attack on Gaza.[17] The Israeli government’s systematic oppression of millions of Palestinians, coupled with inhumane acts committed as part of a policy to maintain the domination by Jewish Israelis over Palestinians, amount to the crimes against humanity of apartheid and persecution.[18]
Palestinian authorities in the West Bank and Gaza have also clamped down on free expression,[19] while in several other countries, including the United States and across Europe, governments and private entities have taken steps to restrict the space for some forms of advocacy in support of Palestine.
Since October 7, 2023, artists, cultural workers, and academics in various countries have faced significant consequences in the form of silencing, censorship, and intimidation by some governments and private institutions as a result of non-violent, pro-Palestinian speech.[20] These include undue pressure or restrictions on academic freedom,[21] and Palestinian experts being disinvited from media interviews and conferences.[22] There have also been restrictions on peaceful protests in support of Palestine.[23] These punitive tactics against those expressing solidarity with Palestinians or criticizing Israeli war crimes in Gaza pose serious challenges to freedom of expression in a time of crisis and polarization over events on and since October 7.
Palestine Legal, an organization that protects the civil and constitutional rights of people in the United States who speak out for Palestinian freedom, received over 600 requests for support between October 7, 2023 and November 15, 2023.[24] The organization reported “seeing an unprecedented wave of workplace discrimination,” having received over 280 allegations involving employment concerns, over 60 of which involved people who said they had already been terminated from their jobs.[25]
In addition, authorities in various European countries have at times imposed excessive restrictions on pro-Palestine protest and speech since October 7, 2023.[26] French authorities placed a blanket ban on pro-Palestinian protests, a move overturned by the Council of State, France’s highest administrative court, on October 18, 2023.[27] Before the decision, French authorities had banned 64 protests about Palestine, media reported.[28]
Since October, authorities in Germany have banned some pro-Palestinian protests while allowing others to take place,[29] prompting concern from the country’s antisemitism commissioner, who noted that “demonstrating is a basic right.”[30] On October 13, 2023, education authorities in Berlin gave schools permission to ban students from wearing the Palestinian keffiyeh (checkered black and white scarf) and displaying “free Palestine” stickers, raising concerns about the right to free expression and possible discrimination.[31]
Bans on pro-Palestinian protests have been reported in Austria,[32] Hungary,[33] and Switzerland.[34]
In the United Kingdom, police in London have generally taken a nuanced approach to pro-Palestinian protest since October 7, including in relation to the use of slogans in protests that have been cited elsewhere in Europe to justify bans.[35] This is despite political pressure on the police by the then-UK home secretary, who called for use of “the full force of the law” in the context of pro-Palestinian protests,[36] and a statement by the previous UK foreign minister, who in October called on pro-Palestinian supporters to “stay at home.”[37] The then-UK immigration minister stated that visitors to the country will be “removed” if they “incite antisemitism,” even if their conduct falls “below the criminal standard.”[38]
Meta’s Broken Promises
Meta has long been aware that its policies have resulted in the silencing of Palestinian voices and their supporters on its platforms. For years, digital rights and human rights organizations from the region, in particular 7amleh, have been documenting and calling Meta’s attention to the disproportionately negative impact of its content moderation on Palestinians.[39] Human Rights Watch and others have pointed the company to underlying structural problems, such as flaws in its Dangerous Organizations and Individuals (DOI) policy,[40] inconsistent and opaque enforcement of its rules, influence by various governments over voluntary content removals, and heavy reliance on automation for content removal, moderation, and suppression.[41] When violence escalates in the region and people turn to social media to document, discuss, raise awareness around, and condemn human rights abuses, and to engage in political debate, the volume of content skyrockets, as do the levels and severity of censorship.
The events of May 2021 are emblematic of this dynamic. When plans by Israeli authorities to take over Palestinian homes in the Sheikh Jarrah neighborhood of occupied East Jerusalem triggered protests and an escalation in violence in parts of Israel and the Occupied Palestinian Territories (OPT),[42] people experienced heavy-handed censorship when they used social media to speak out. On May 7, 2021, a group of 30 human rights and digital rights organizations denounced social media companies for “systematically silencing users protesting and documenting the evictions of Palestinian families from their homes in the neighborhood of Sheikh Jarrah in Jerusalem.”[43] In October 2021, Human Rights Watch published a report that documented Facebook’s censorship of the discussion of rights issues pertaining to Israel and Palestine and warned that Meta was “silencing many people arbitrarily and without explanation, replicating online some of the same power imbalances and rights abuses that we see on the ground.”[44]
At the time, Facebook acknowledged several issues affecting Palestinians and their content, as well as those speaking about Palestinian matters globally,[45] some of which it attributed to “technical glitches”[46] and human error.[47] However, these issues did not explain the range of restrictions and suppression of content that Human Rights Watch observed. In a letter to Human Rights Watch, Facebook said it had already apologized for “the impact these actions have had on [Meta’s] community in Palestine and on those speaking about Palestinian matters globally.”[48]
In response to years of calls by digital and human rights organizations[49] and a recommendation from Meta’s Oversight Board[50] for an independent review of the company’s content moderation policies, Meta commissioned an independent investigation to determine whether Facebook’s content moderation in Arabic and Hebrew, including its use of automation, had been applied without bias. In September 2022, Business for Social Responsibility (BSR), a “sustainable business network and consultancy,” published its findings in its report “Human Rights Due Diligence of Meta’s Impacts in Israel and Palestine in May 2021.”[51] Among other findings, the report concluded that Meta’s actions “appear to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.” The BSR report identified various instances where Meta policy and practice, combined with broader external dynamics, led to different human rights impacts on Palestinian users as well as other Arabic-speaking users.[52]
In response to recommendations from the Oversight Board, the BSR report, and engagement with civil society over the years, Meta made several commitments to addressing concerns around Palestine-related censorship. However, the evidence of censorship and suppression of content about, and in support of, Palestinians and their supporters that Human Rights Watch documents in this report stems from the same underlying concerns that surfaced in 2021 and earlier, and shows that Meta has not delivered on the promises it made two years ago.
For example, Meta committed to improving its DOI policy, its rule governing content on terrorist and other “dangerous entities,” which is at the center of many of the content takedowns and account restrictions concerning Palestine. The DOI policy prohibits “organizations or individuals that proclaim a violent mission or are engaged in violence [from having] a presence on Meta.”[53]
The DOI policy also bans “praise” and “substantive support” of such groups or individuals from Meta’s platforms. These are vague and broad terms that can cover expression protected under international human rights law. In its scope and application, the DOI policy effectively bans the endorsement of many major Palestinian political movements and quells the discussion of current hostilities.[54] In 2022, Meta agreed to amend the policy to allow people to engage in social and political discourse more freely, to consider removing its prohibition on “praise” of DOI entities, and to make the penalties incurred under the DOI policy proportional to the violation. Meta says that it has completed its rollout of the social and political discourse carveout[55] and aims to launch changes to its treatment of “praise” and to its penalty system in the first half of 2024,[56] which suggests that “praise” will continue to be part of the policy.
The cases Human Rights Watch documents in this report indicate that Meta’s erroneous removal of pro-Palestinian views has resulted in censorship of social and political discourse, as well as of content documenting or reacting to hostilities in Israel and the OPT, including Gaza, sharing on-the-ground developments, and expressing solidarity with Palestinians.
Meta committed in August[57] and October 2021[58] to increasing transparency around government requests for content removals under its Community Standards, such as those from Israel’s Cyber Unit, as well as from internet referral units (IRUs) in other countries. IRU requests are prone to abuse because they risk circumventing legal procedures, lack transparency and accountability, and fail to provide users with access to an effective remedy. According to media reports on November 14, Israel’s Cyber Unit had sent Meta and other platforms 9,500 content takedown requests since October 7, 2023, 60 percent of which went to Meta.[59] Platforms responded with a 94-percent compliance rate, according to an Israeli official. Two years after its commitment to increasing transparency, Meta has made no meaningful progress in informing its users or other members of the public how government requests influence what content is removed from Instagram and Facebook.
Meta also committed, in 2021, to providing greater transparency to users around its enforcement actions, including limiting certain features and reducing the visibility of accounts in online searches, and to communicating enforcement actions clearly. Yet a frequently recurring complaint Human Rights Watch received in researching this report was that users lost account features without warning.
As this report demonstrates, Meta’s broken promises have led it not only to replicate past patterns of abuse, but also to amplify them. Censoring the voices and narratives of Palestinians and those voicing solidarity with them does not just affect those whose posts and accounts are restricted. It reduces the information available to the rest of the world regarding developments in Israel and Palestine at a time when the United Nations Secretary-General and UN human rights experts are warning with increasing urgency that Palestinians in Gaza are facing a humanitarian catastrophe.[60] Meta’s failure to take decisive action in response to recommendations of its own Oversight Board, notwithstanding years of engagement with civil society, means that the company has failed to meet its human rights responsibilities.
II. Main Findings
Since October 7, Human Rights Watch has documented over 1,000 cases of unjustified takedowns and other suppression of content on Instagram and Facebook related to Palestine and Palestinians, including about human rights abuses. These cases detail various forms of censorship of posts and accounts documenting, condemning, and raising awareness about the unprecedented and ongoing hostilities in Israel and Palestine. The censorship of content related to Palestine on Instagram and Facebook is systemic, global, and a product of the company’s failure to meet its human rights due diligence responsibilities.
The documented cases include content originating from over 60 countries around the world, primarily in English,[61] which carried a diversity of messages while sharing a singular characteristic: peaceful expression in support of Palestine or Palestinians.
In reviewing the evidence and context associated with each reported case, Human Rights Watch identified key patterns of censorship, each recurring in at least a hundred instances, including: 1) removal of posts, stories, and comments; 2) suspension or permanent disabling of accounts; 3) restrictions on the ability to engage with content—such as liking, commenting, sharing, and reposting on stories—for a specific period, ranging from 24 hours to three months; 4) restrictions on the ability to follow or tag other accounts; 5) restrictions on the use of certain features, such as Instagram/Facebook Live, monetization, and the recommendation of accounts to non-followers; and 6) “shadow banning,” defined as the significant decrease in the visibility of an individual’s posts, stories, or account, without notification, due to a reduction in the distribution or reach of content or disabling of searches for accounts.
Some users reported multiple forms of restrictions occurring simultaneously on their account. For example, some users had their comments removed for violating Meta’s spam policy—which prohibits content that is designed to deceive, or that attempts to mislead users, to increase viewership[62]—and were then unable to comment on any posts. In some cases, these users also reported their suspicion of being “shadow banned,” based on perceived lower views of and engagement with their content. Some users provided evidence that Meta failed to specify which of its policies had been violated.[63]
Throughout the research period for this report, Human Rights Watch received cases on a rolling basis, and the same users sometimes reported subsequent platform restrictions, indicating a gradual escalation in the type of restriction imposed on their content or account. For example, repeated comment removals were followed by restrictions on access to features such as “live streaming,” and a warning that the account could be suspended or permanently disabled. The more “strikes”[64] a user accrued, the more quickly the next restriction on their content or account followed. One user described the pattern:
I noticed a lot of my comments on Instagram were automatically removed as being “spam.” At first the process of being marked as spam seemed to happen a few hours after I made the comments, and the next day it was nearly instantaneous. Then I could no longer “like” news posts about Palestine—I would try more than a dozen times and it would never work. I could “like” other stories posted by this same user. Eventually, I could not even respond to comments made on my own posts.[65]
In addition, most people who reported cases to Human Rights Watch said it was their first time experiencing restrictions on Meta’s platforms since they joined years earlier. In every case, the censorship was strictly related to pro-Palestinian content since October 7. Some users reported examples of abusive content that incited violence or constituted hate speech against Palestinians remaining online while seemingly peaceful content advocating for Palestinian human rights was removed, at times on the same post. For example, to express outrage about abusive comments she experienced on Instagram, a user posted an Instagram “story”[66]—with a screenshot of a message addressed to her that said, “I wish Hamas will catch you, rape you slowly for hours and then kill you, while sending a video of this to your parents, just like they did to us,” as well as her response, “If I knocked your glasses off right now you wouldn’t even be able to see.” The story was flagged and removed under Instagram’s Guidelines on “violence or dangerous organizations.”[67]
Over time, users who reported cases to Human Rights Watch said these restrictions led them to change their online behavior or engagement to adapt to and circumvent restrictions, effectively self-censoring to avoid accruing penalties imposed by the platform. Users described this as contributing to resentment at what they perceived as injustice or bias by the company. One person said they did not appeal the takedown to Meta because, “I do not want to put myself [on] their [Meta’s] radar.”[68] Instagram users also employ coded language, such as deliberate misspellings and symbols, in part to try to evade platform censorship resulting from automated moderation of content related to Palestine.
In many instances, users said they did not receive a warning or notification that their account was suspended or disabled or that Meta had barred their use of certain features. In cases of suspected “shadow banning,” users said they were never informed by the platform that their content visibility was diminished. While some claims of shadow banning were supported by compelling evidence,[69] many users concluded that they had been “shadow banned” based on a “hunch” or after noticing sudden changes in the number of views on their stories.
On October 18, 2023, Meta said that it fixed a “bug” that had significantly reduced the reach of Stories that re-shared Reels and Feed posts on Instagram.[70] Yet users continued to report and document shadow banning cases after that date. Due to Meta’s lack of transparency around shadow banning, the parameters of the restriction remain unclear, and because users are not informed of any action taken on their account or content, they are left without a remedy.[71]
In cases where removal or restrictions on content and accounts were accompanied by a notice to the user, Meta’s most widely cited reasons were violations of its Community Guidelines (Instagram) or Community Standards (Facebook), specifically those relating to “Dangerous Organizations and Individuals” (DOI),[72] “adult nudity and sexual activity,” “violent and graphic content,” and “spam.”[73] Among those violations, the most recurring policy invoked by Instagram and Facebook in the cases documented by Human Rights Watch was the “spam” policy. In reviewing these cases, Human Rights Watch found repeated instances of likely erroneous application of the “spam” policy that resulted in the censorship of Palestine-related content.
Human Rights Watch also found repeated inaccurate application of the “adult nudity and sexual activity” policy to content related to Palestine. In every one of the cases we reviewed where this policy was invoked, the content included images of dead Palestinians amid ruins in Gaza, and the bodies were clothed, not naked. For example, multiple users reported their Instagram stories being removed under this policy when they posted the same image of a Palestinian father in Gaza who was killed while holding his clothed daughter, who was also killed.
While “hate speech,” “bullying and harassment,” and “violence and incitement” policies[74] were less commonly invoked in the cases Human Rights Watch documented, the handful of cases where they were applied stood out as erroneous. For example, a Facebook user’s post that said, “How can anyone justify supporting the killing of babies and innocent civilians…” was removed under Community Standards on “bullying and harassment.”[75] Another user posted an image on Instagram of a dead child in a hospital in Gaza with the comment, “Israel bombs the Baptist Hospital in Gaza City killing over 500…” which was removed under Community Guidelines on “violence and incitement.”[76]
In over 300 cases documented by Human Rights Watch, users reported and provided evidence of being unable to appeal the restrictions on their accounts to the platform (Instagram or Facebook): the “Tell Us” button either did not work or did not lead anywhere when clicked, and the “Think that we’ve made a mistake?” option was disabled or unavailable. This left users unable to report the platforms’ possible errors and without any access to an effective remedy.
Illustrative Examples
“From the River to the Sea”
The slogan “From the river to the sea, Palestine will be free” has reverberated at protests in solidarity with Palestinians around the world. In hundreds of cases documented by Human Rights Watch, this slogan, as well as comments such as “Free Palestine,” “Ceasefire Now,” and “Stop the Genocide,” were repeatedly removed by Instagram and Facebook under “spam” Community Guidelines or Standards, without Meta appearing to take into account the context of these comments. These statements and the context in which they are used are clearly not spam, nor do they appear to violate any other Facebook or Instagram Community Guidelines or Standards. For instance, the words in each of these statements on their face do not constitute incitement to violence, discrimination, or hostility. Meta has not offered a specific explanation as to why the context in which these statements appear would justify removal. In dozens of cases, the content removal was accompanied by platform restrictions on users’ ability to engage with any other content on Instagram and Facebook, at times for prolonged periods.
Palestinian Flag Emoji
The Palestinian flag symbol, used frequently around the world to express solidarity with Palestine, has been subject to censorship on Instagram and Facebook. In one case, an Instagram user received a warning that the comment she posted “may be hurtful to others.” The comment, which Human Rights Watch reviewed, consisted of nothing more than a series of Palestinian flag emojis.[77] In other cases, Meta hid the Palestinian flag from comment sections or removed it on the basis that it “harasses, targets, or shames others.”[78] In October, Instagram apologized for adding “terrorist” to the public profiles of some Palestinian users who had used the Arabic word “alhamdulillah” (“praise be to God”) and the Palestinian flag emoji. Meta said the censorship was caused by a bug.[79] The issue arose when Instagram’s auto-translation feature rendered bios containing the word “Palestinian” in Arabic, the Palestinian flag emoji, and the word “alhamdulillah” alongside one another as “Palestinian terrorists.”[80]
Meta spokesperson Andy Stone confirmed to the US online media outlet The Intercept that the company has been hiding comments that contain the Palestinian flag emoji in certain “offensive” contexts that violate the company’s rules. He added that Meta has not created any new policies specific to flag emojis. Asked about the contexts in which Meta hides the Palestinian flag, Stone pointed to the DOI policy, which designates Hamas as a terrorist organization, and cited a section of the Community Standards rulebook that prohibits any content “praising, celebrating or mocking anyone’s death.” The Palestinian flag pre-dates the existence of Hamas, which has its own distinct flag. Stone stated that Meta does not have a different standard to enforce rules with respect to the Palestinian flag emoji.[81]
Mention of “Hamas” Censored
Human Rights Watch documented hundreds of cases where the mere neutral mention of Hamas on Instagram and Facebook triggered the DOI policy, prompting the platforms to immediately remove posts, stories, comments, and videos, and to restrict the accounts that posted them. While the DOI policy[82] permits reporting on, neutrally discussing, or condemning designated organizations or individuals, it also states that “[if] a user’s intention is ambiguous or unclear, we default to removing content.” In all of the cases Human Rights Watch reviewed, Meta removed even neutral mentions of Hamas in relation to developments in Gaza.
Suspension and Removal of Prominent Palestinian Accounts
Instagram and Facebook have in several instances since October 7 suspended or permanently disabled the accounts of prominent Palestinian content creators, independent Palestinian journalists, and Palestinian activists. Palestinian journalist Ahmed Shihab-Eldin reported on November 18, 2023, that he had lost access to his Instagram account, which has nearly one million followers, five times since October 7. Shihab-Eldin posts frequently about Palestine.[83] He said that he was not able to access the tool that allows him to see potential account violations, and that other users, when trying to tag him in a post, received a warning message that his account had repeatedly posted false information or contravened Community Guidelines.[84]
Other accounts, including the Instagram account of Let’s Talk Palestine, which posts educational content about Palestine, reported being temporarily suspended.[85] Meta said, “These accounts were initially locked for security reasons after signs of compromise, and we’re working to make contact with the account owners to make sure they have access.” The Palestine-based Quds News Network reported that its Facebook page was permanently deleted[86] and that its Instagram account was suspended.[87] Mondoweiss correspondent Leila Warah, who is based in the West Bank, reported in October that Instagram suspended her account. After Mondoweiss publicized the suspension, her account was quickly reinstated, then soon after suspended again and reinstated the following day.[88]
Criticism of Israel as “Hate Speech” and “Dangerous”
Many users reported posts on Instagram being removed when they criticized the Israeli government, including the leadership of Prime Minister Benjamin Netanyahu, no matter how nuanced or careful their posts were. Meta removed these posts under its Dangerous Organizations and Individuals and hate speech rules.
In addition, multiple accounts sharing educational material about Hamas and background information on Palestinian human rights were removed under Meta’s DOI policy.[89] Human Rights Watch’s review found that these posts did not praise or support Hamas but instead aimed to give people context and information to understand the escalation in violence.
Human Rights Watch’s Call for Censorship Evidence
Dozens of users reported being unable to repost, like, or comment on Human Rights Watch’s post calling for evidence of online censorship, which was marked as “spam” and, in some cases, flagged under the DOI policy. For example, an account posted a comment about Human Rights Watch’s call for censorship documentation that included an email address for sending us evidence. Instagram removed the comment, citing a violation of its Community Guidelines.
“Shadow Banned”
While “shadow banning,” a type of restriction reported by several hundred users, is challenging to verify, partly due to the lack of platform notice of its occurrence, some users provided compelling evidence to support their claim.[90] This included “before” and “after” screenshots showing a dramatic decrease in the number of views after the user started posting content about Palestine; screenshots of engagement metrics, such as likes, comments, and shares, showing a sudden and significant decrease in engagement on content related to Palestine; screenshots showing that the account or content did not appear in search results; a significant slowdown in new followers; and evidence that the content was not visible to others.
Harmful Content that Remained Online
While content that remained online is outside the scope of our research, many users recorded evidence of anti-Palestinian and Islamophobic content that remained online even after they reported it to Instagram and Facebook, often on the same post from which the user’s own comment had been removed. For example, a user reported a comment on their post which said, “Make Gaza a parking lot.”[91] After reviewing the complaint, Instagram notified the user that it had not removed the comment because it “did not violate Community Guidelines.” Another user reported a comment that said, “I wish Israel success in this war in which it is right, I hope it will wipe Palestine off the face of the earth and the map.”[92] Instagram found that this post did not violate its Community Guidelines. Another comment, which remained online after being reported, stated, “Imagine an Islamic extremist terrorist accusing us of fascism…lol. Fuck Islam and fuck you. You and your people have done enough to make the world a shittier place for decades.”[93]
Underlying Systemic Contributors to Meta’s Censorship
A “Dangerous” Policy for Public Debate
Human Rights Watch documented hundreds of cases where Meta applied the DOI policy with the effect of suppressing peaceful speech on issues related to hostilities between Israeli forces and Palestinian armed groups.[94] Because the peaceful content was erroneously restricted, the standard penalties Meta imposed were inevitably disproportionate.
Human rights and digital rights organizations have repeatedly highlighted the role the DOI policy plays in silencing Palestinian voices.[95] The UN special rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism expressed concern[96] that the policy is inconsistent with international human rights law, including the rights to free expression, association, participation in political affairs, and non-discrimination.[97] Similarly, the UN special rapporteur on freedom of opinion and expression warned that “Company prohibitions of threatening or promoting terrorism, supporting or praising leaders of dangerous organizations and content that promotes terrorist acts or incites violence are, like counter-terrorism legislation, excessively vague.”[98] Even Meta’s own Oversight Board has recommended that the company make changes to the policy to avoid censoring protected speech.[99]
The problems with the DOI policy are multilayered, and include how the list is composed, what the policy covers, and how it is enforced. As noted earlier, because Meta’s designation of individuals and entities under the DOI policy relies heavily on US terrorist lists, including the “foreign terrorist organizations” list,[100] it includes political movements that also have armed wings, such as Hamas and the Popular Front for the Liberation of Palestine.[101] It does this even though, as far as is publicly known, US law does not prohibit groups on the list from using free and freely available social media platforms, and does not consider allowing groups on the list to use platforms tantamount to “providing material support” in violation of US law.[102] Meta’s adoption of broad and sweeping US designations not only effectively prohibits even peaceful expression of support for many major Palestinian political movements, but also bars many more Palestinians, including civil servants who work for the local government in Gaza, which Hamas dominates, from using its platforms.
The BSR report found that Palestinians are more likely to violate Meta’s DOI policy because of Hamas’ presence as a governing entity in Gaza and political candidates’ affiliations with designated organizations.[103]
Civil society and the Oversight Board recommended that Meta make public the list of organizations and entities it has designated as dangerous, but Meta has refused to do so, citing employee safety and a concern that doing so would permit banned entities to circumvent the policy. The Intercept published a leaked version of the list in October 2021.[104]
The DOI policy not only prohibits “representation,” or creating accounts on behalf of designated groups or individuals, but also bans both “praise” and “substantive support,” vague and broad terms that include protected expression under international human rights law. For example, Meta defines “praise” as including “speak[ing] positively about a designated entity,” giving them “a sense of achievement,” legitimizing their cause “by making claims that their hateful, violent, or criminal conduct is legally, morally, or otherwise justified or acceptable,” or aligning oneself ideologically with them. Meta defines “substantive support” as including directly quoting a designated entity without a caption that condemns it, neutrally discusses it, or is part of news reporting. The policy recognizes that “users may share content that includes references to designated dangerous organizations and individuals in the context of social and political discourse” and allows for content that reports on, neutrally discusses, or condemns organizations and individuals on the DOI list.
However, if a user’s intention is ambiguous or unclear, Meta defaults to removing content. Internal guidance makes this default intent presumption even more problematic by shifting the focus to how content might be perceived rather than what the user intends. It instructs reviewers to remove content for praise if it “makes people think more positively about” a designated group, making the meaning of “praise” less about the intent of the speaker than about the effect on the audience.[105] The Oversight Board has also criticized Meta’s tendency to adjust this policy in secret and make exceptions on an ad hoc basis, for example to discuss the conditions of an incarcerated person on the DOI list or to enable people to speak favorably about a positive development from a listed organization acting in a governing capacity.[106] The broad categories of speech covered in the DOI policy, combined with the default to removing content if the intent is unclear, result in the over-removal of content that should be considered protected speech, even where contextual cues make clear that the post is, in fact, reporting.
Violating the DOI policy results in severe penalties on accounts, such as immediate loss of features like live-streaming for 30 days and the ability to have content viewed or recommended to non-followers.[107] Other policy violations would only result in a strike against an account, whereas the penalties for violating the DOI policy are swift and severe.[108] The BSR report noted, “DOI violations also come with particularly steep penalties, which means Palestinians are more likely to face steeper consequences for both correct and incorrect enforcement of policy. In contrast to Israelis and others, Palestinians are prevented from sharing types of political content because the Meta DOI policy has no exemption for the praise of designated entities in their governing capacity.”[109]
Human Rights Watch documented the DOI policy being invoked to censor “social and political discourse” around the hostilities, including reporting, neutral discussion, and condemnation of “dangerous” organizations—the type of content the revised DOI policy purports to permit.[110] In one instance, Instagram removed a repost of content from the Arabic-language account of the Turkish news broadcaster TRT Arabi that included a statement in Arabic from the Ministry of Health in Gaza that Israeli forces had ordered the Rantisi hospital for children to evacuate before they bombed it. The post was removed for violating Meta’s DOI policy, presumably because the Ministry of Health in Gaza is part of a government led by Hamas. Human Rights Watch reviewed more than 100 screenshots documenting the removal, on the basis of DOI policy violations, of Instagram content reposting videos from news organizations such as Middle East Eye, Vice News, and Al Jazeera that reported on videos of hostages published by Hamas and Islamic Jihad.
The practice by Hamas and Islamic Jihad of publicly releasing videos of hostages constitutes an outrage upon personal dignity, a serious violation of the laws of war.[111] However, Meta adjusted its policy in the weeks following the October 7 attacks to allow hostage imagery when the content condemns the act or shares information for awareness-raising purposes. The same exceptions apply to any Hamas-produced footage.[112] Prohibiting people from sharing the same videos that news outlets shared, where they have not added language that could reasonably be construed as incitement to violence or hatred, hinders the public’s ability to engage on issues relating to the crisis. Meta told Human Rights Watch in December that its “teams are considering context around [hostage] imagery, and newsworthy allowances are available where appropriate to balance the public interest against the risk of harm.”[113]
Inconsistent and Opaque Prohibitions on Newsworthy Content
Meta platforms host images, videos, and posts from news outlets, independent journalists, and other sources from conflict zones. At times, this media may include violent and graphic content, hate speech, or nudity. Although Meta policy prohibits violent and graphic content,[114] hate speech,[115] violence and incitement,[116] and nudity and sexual activity,[117] the company makes an exception[118] if it deems the content to be newsworthy and in the service of public interest.
Meta uses a post depicting violence in Ukraine as an illustrative example of the importance of the newsworthiness allowance,[119] demonstrating the company’s willingness to adjust its policies to account for the realities of another high-profile conflict. When properly enforced, Meta’s newsworthiness allowance has the capacity to bolster discourse, raise public awareness, and facilitate research, including by human rights investigators.[120]
However, Human Rights Watch’s investigation found that Meta has enforced its newsworthiness allowance inconsistently and has misapplied its prohibitions on incitement and nudity to newsworthy content that does not appear to violate those policies. More specifically, the research shows that Meta platforms have repeatedly removed graphic media from Palestine, effectively censoring such images.[121] This media includes photos of injured and murdered Palestinians, a video of Israelis urinating on Palestinians, and a video of a Palestinian child shouting “Where are the Arabs?” after his sister was killed.[122] In these cases, content was removed for violating Meta’s policy on violence and incitement, even though the news value of the shared material makes it hard to justify blocking it on that basis.
Five Instagram users and one Facebook user reported that images of injured and dead bodies in Gaza’s hospitals were removed for violating the platform’s Community Guidelines regarding violence and incitement. Meta’s violence and incitement guidelines prohibit “language that incites or facilitates serious violence” with the stated intention of preventing offline harm. The six images removed made no call for violence.[123]
Additionally, multiple users reported that Instagram removed content depicting the plight of Palestinians, ostensibly for violating its nudity or sexual activity policy. This content includes images of killed Palestinians, a video that appears to show IDF soldiers humiliating and torturing Palestinians, and an image of bombings in Gaza. Three users reported that an image of a fully clothed man holding a girl, both deceased, was removed for violating the platform’s policy on nudity or sexual activity. Instagram removed this image even though it did not include any nudity or sexual activity and likely qualified for the newsworthiness allowance under Meta’s own guidelines.
Meta’s failure to apply the newsworthiness allowance to this content not only functions to censor images of abuse of Palestinians, but also suggests that Meta does not consider such images to serve the public interest.
Some users who reported cases to Human Rights Watch explained that their posts sought to speak out against violence, not incite it. By stripping content of its context and applying its policies bluntly, Meta is effectively censoring newsworthy content and achieving the opposite of its policies’ stated intention.
Where Meta highlighted the importance of the newsworthiness allowance as applied to Ukraine-related content it might otherwise prohibit, it appears to have failed to extend the same policy to content documenting the impact of the current hostilities on Palestinians. Far from recognizing the heightened need for latitude in applying its content prohibitions to discussions of ongoing hostilities, the examples shared with Human Rights Watch suggest Meta is applying its community guidelines aggressively to content that should not be prohibited in the first place. The suppression by Meta platforms of content documenting Palestinian injury and death can itself result in offline harm, as gaps in information distort public understanding and the political responses that follow.
Lack of Transparency Around Government Requests
Meta removes content based on its Community Standards[124] and to comply with local laws.[125] The company regularly reports on both types of content restrictions.[126]
However, Meta takes down a significant amount of content in response to requests by governments for “voluntary” takedown based on alleged violations of the company’s Community Standards. Such requests come from internet referral units (IRUs),[127] which vary by country but are generally non-judicial entities, such as law enforcement authorities or administrative agencies.
IRU requests typically risk circumventing legal procedures, lack transparency and accountability, and fail to provide users with access to effective remedy. They deny people the due process rights they would have if the government sought to restrict the content through legal processes. Unlike content takedowns based on local law, which should be based on legal orders and result in geolocated restrictions on content, takedowns based on Meta’s Community Standards result in removal of that content globally. Furthermore, the user is not notified that the removal of their content is due to a government request, nor is the role of the government reflected in Meta’s biannual transparency reports.
The Israeli government has been aggressive in seeking to remove content from social media. The Israeli Cyber Unit, based within the State Attorney’s Office, flags and submits requests to social media companies to “voluntarily” remove content.[128] Instead of going through the legal process of filing a court order based on Israeli criminal law to take down online content, the Cyber Unit makes appeals directly to platforms based on their own terms of service. Since Israel’s State Attorney’s Office began reporting on the Cyber Unit's activities, platforms’ overall compliance rate with its requests has never dropped below 77 percent and in 2018 was reported to be as high as 92 percent.[129]
Requests from the Cyber Unit to Meta platforms are far higher than what Meta reports as legal removal orders from the Israeli government. In 2021, the Cyber Unit issued 5,990 content removal or restriction requests, with an 82-percent compliance rate across all platforms.[130] The majority of requests (around 90 percent) were directed to Facebook and Instagram and were issued during the escalation of hostilities in May 2021. That same year, Meta reported that, based on local law, it had restricted 291 pieces of content or accounts on Facebook and Instagram in response to requests from the government of Israel.[131]
According to media reports on November 14, 2023, the State Attorney’s Office had sent major social media platforms 9,500 takedown requests since October 7, 2023, for content related to the recent hostilities that it alleged violated the companies’ policies.[132] Nearly 60 percent of those requests went to Meta. Media reports cite a 94-percent compliance rate for such requests across platforms.[133] Human Rights Watch asked the Cyber Unit which company policies the posts or accounts allegedly violated but had not received a response at the time of writing.
IRUs from other countries may also be requesting that Meta and other platforms remove content about the hostilities in Israel and Gaza. The European Commissioner for Internal Market, Thierry Breton, recently sent letters to the heads of major social media platforms, including Meta CEO Mark Zuckerberg, about an increase in “illegal content and disinformation being disseminated in the EU” following the “terrorist attacks carried out by Hamas against Israel.” The letter requested that Meta be “very vigilant to ensure strict compliance with the [Digital Services Act (DSA)] rules on terms of service, on the requirement of timely, diligent and objective action following notices of illegal content in the EU, and on the need for proportionate and effective mitigation measures.”[134]
While 30 digital rights organizations questioned Breton’s interpretation of the DSA contained in the letter,[135] the DSA does provide for the designation of “trusted flaggers” to notify platforms about illegal content, notifications that platforms must process and decide upon with priority and without delay.[136] The DSA explicitly says that law enforcement agencies can be designated as “trusted flaggers.”[137] While their notices merely allege illegal content, platforms are likely to treat them as orders to remove the content, given the significant legal risk they would face by failing to act.[138]
Echoing civil society, the Oversight Board has expressed concern that users whose content is removed under the Community Standards are not informed when a government was involved in the removal. In an unrelated case, the Oversight Board recommended that Meta notify users when their content is removed due to a government request alleging Community Standards violations, and that it ensure a transparent process for receiving and responding to all government requests for content removal.[139] Further, it recommended that Meta report the number of content removal requests it receives from governments that are based on Community Standards violations (as opposed to violations of national law), and the outcome of those requests. In August 2021, Meta said it was fully implementing these recommendations.[140]
In the “Shared Al Jazeera post” case,[141] the Board again recommended that Meta improve transparency around government requests that led to global removals based on violations of the company’s Community Standards; in October 2021, Meta said it was implementing this recommendation in part.[142] The BSR report also recommended that Meta disclose the number of formal reports received from government entities about content that is not illegal, but which potentially violates Meta content policies.[143]
Meta’s September 2023 status update on its implementation of BSR’s recommendation describes its efforts in this area as in progress and part of “a complex, long-term project.”[144] Meta said it would “provide an update on the timeline for public reporting of these metrics in a future Oversight Board Quarterly Update and in [its] next annual Human Rights Report.” More than two years after committing to publishing data on government requests to take down content that is not necessarily illegal, Meta has failed to increase transparency in this area.
Reliance on Automation
Meta’s reliance on automation for content moderation is a significant factor in the erroneous enforcement of its policies, which has resulted in the removal of non-violative content in support of Palestine on Instagram and Facebook.
According to Meta, over 90 percent of the content deemed to violate its policies is proactively detected by its automated tools before anyone reports it.[145] Automated content moderation is notoriously poor at interpreting the contextual factors that can be key to determining whether a post constitutes support for or glorification of terrorism. This can lead to overbroad limits on speech and to improperly labeling it as violent, criminal, or abusive.[146][147]
Meta relies on automation to detect and remove content it deems violative, as well as reposts of that content, regardless of whether anyone complains. It also uses algorithms to determine which automated removals should be prioritized for human oversight and to process complaints and appeals.[148] Meta reported on October 13, 2023, that it was taking temporary steps to lower the threshold at which it takes action against potentially violating and borderline content across Instagram and Facebook,[149] to avoid recommending this type of content to users in their feeds.[150] However, these measures increase the margin of error, resulting in false positives that flag non-violative content.
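To illustrate why lowering an action threshold produces more false positives, the sketch below models a hypothetical threshold-based moderation filter. It is a minimal illustration only: the scores, threshold values, and example posts are invented, and nothing here reflects Meta’s actual classifiers, which are not public.

```python
# Purely illustrative sketch of threshold-based content moderation.
# All scores, thresholds, and posts are invented; Meta's actual
# classifiers and thresholds are not public.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    violation_score: float    # hypothetical classifier confidence, 0.0 to 1.0
    actually_violative: bool  # ground truth, known here only for illustration

def flag_posts(posts: list[Post], threshold: float) -> list[Post]:
    """Flag every post whose score meets or exceeds the action threshold."""
    return [p for p in posts if p.violation_score >= threshold]

posts = [
    Post("news report on the hostilities", 0.35, False),
    Post("condemnation of violence", 0.45, False),
    Post("graphic documentation of abuses", 0.62, False),
    Post("explicit incitement to violence", 0.91, True),
]

# Compare an ordinary threshold with a temporarily lowered one.
for threshold in (0.8, 0.4):
    flagged = flag_posts(posts, threshold)
    false_positives = [p for p in flagged if not p.actually_violative]
    print(f"threshold={threshold}: flagged={len(flagged)}, "
          f"false positives={len(false_positives)}")

# At 0.8 only the genuinely violative post is flagged; at 0.4 two
# non-violative posts are swept up as well, with no human review.
```

The dynamic is general: however the scores are produced, a lowered fixed threshold trades fewer missed violations for more erroneous actions against legitimate speech, which is why error-rate transparency and functioning appeals matter.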
Meta does not publish data on automation error rates or on the degree to which automation plays a role in processing complaints and appeals. Meta’s lack of transparency hinders the ability of independent human rights and other researchers to hold its platforms accountable, allowing wrongful content takedowns as well as ineffective moderation processes for violative content to remain unchecked. Processes intended to remove extremist content, in particular the use of automated tools, have sometimes perversely led to removing speech opposed to terrorism, including satire, journalistic material, and other content that would, under rights-respecting legal frameworks, be considered protected speech.[151]
In reviewing hundreds of cases of content removal and of certain users being unable to post comments on Instagram and Facebook, Human Rights Watch found that Meta’s automated moderation tools failed to accurately distinguish between peaceful and violent comments. Users reported that their ability to express opinions, including dissenting or unpopular views about the escalation of violence since October 7, was restricted repeatedly and increasingly over time. As a result, users reported altering their behavior on Instagram and Facebook to avoid having their comments removed. After multiple experiences with seemingly automated comment removal, users reported being less likely to engage with content, express their opinions, or participate in discussions about Israel and Palestine.
Human Rights Implications of Palestine Content Censorship
Content Restrictions and “Shadow Banning”
Article 19 of the International Covenant on Civil and Political Rights (ICCPR)[152] guarantees the right to freedom of expression, including the right to seek, receive, and impart information and ideas of all kinds.[153] This right applies to online expression, as the UN Human Rights Committee has clarified.[154]
The right to freedom of expression is not absolute. Limitations on this right are possible if they are necessary for and proportionate to the protection of national security, public order, public health, morals, or the rights and freedoms of others. Limitations for these purposes must be established in law, not impair the essence of these rights, and be consistent with the right to an effective remedy.[155] The same standard applies to limitations of the rights to freedom of assembly and association.[156]
Unduly restricting or suppressing peaceful content that supports Palestine and Palestinians impermissibly infringes on people’s rights to freedom of expression. Given that social media has become the digital public sphere and the site of social movements, undue restrictions on content and on the ability to engage with other users also undermine the rights to freedom of assembly and association, as well as participation in public affairs. Enforcing content removal policies, and adjusting the recommender algorithms that determine what content people see in their feeds, in ways that significantly limit the circulation of content may be perceived as biased or as selectively targeting specific viewpoints, and could undermine the rights to non-discrimination and due process as well as the universality of rights.
Removing or suppressing online content can hinder the ability of individuals and organizations to advocate for human rights of Palestinians and raise awareness about the situation in Israel and Palestine. Content removal that is carried out using automated systems, such as on Instagram and Facebook, raises concerns about algorithmic bias. As this report documents, these systems may result in the erroneous suppression of content, leading to discriminatory consequences without opportunity for redress.
Engaging with content, such as posting or reading comments, is a crucial aspect of social media interaction, especially when open discussion is prohibited or contested in offline spaces. Being shadow banned—where a user’s content is seemingly not visible as usual to their friends and followers, without explanation—can be distressing for users. Meta does not formally acknowledge the practice of shadow banning, effectively denying users transparency as well as adequate access to complaints mechanisms and meaningful remedy. Social media can be a vital communications tool in crises and conflicts. However, users experiencing, or even aware of the risk of, account restrictions like shadow banning may refrain from engaging on social platforms to avoid losing access to their accounts and to vital information, resulting in self-censorship.
Inability to Appeal to Platform
The UN Guiding Principles on Business and Human Rights (UNGPs) require businesses to provide access to a remedy where they identify that they have caused or contributed to adverse impacts.[157] This report documents over 300 cases in which users reported, and provided evidence of, being unable to appeal content removals or account restrictions because the appeal mechanism malfunctioned, leaving them with no effective access to a remedy.
Meta’s temporary measures to lower the threshold at which it takes action against potentially violating and borderline content across Instagram and Facebook, in order to avoid recommending this type of content in users’ feeds, are likely to increase the margin of error for removal or suppression of content and to leave users without access to a remedy, because they are not informed of any action taken against their account or content.
Meta told Human Rights Watch that it is aware that the temporary measures it takes during conflicts could have unintended consequences “like inadvertently limiting harmless or even helpful content,” and admitted that “[d]uring busy periods, such as during conflict situations, we may not always be able to review everything based on our review capacity.”[158] Meta also disclosed that “appeals for content demotions are currently not available outside of the EU.”
The lack of effective remedy for incidents of censorship can have significant implications for individuals and groups. Their right to freedom of expression, as outlined in international human rights instruments, may be violated.
III. Social Media Companies’ Responsibilities
Under the United Nations Guiding Principles on Business and Human Rights (UNGPs), companies have a responsibility to respect human rights by avoiding infringing on human rights, identifying and addressing the human rights impacts of their operations, and providing meaningful access to a remedy.[159] For social media companies, this responsibility includes aligning their content moderation policies and practices with international human rights standards, ensuring that decisions to take content down are not overly broad or biased, being transparent and accountable in their actions, and enforcing their policies in a consistent manner.
The UNGPs require companies to carry out human rights due diligence to identify, prevent, mitigate, and account for how they address their adverse human rights impacts. Companies should communicate externally how they are addressing their human rights impacts, providing sufficient information so that stakeholders can evaluate the adequacy of their response. Meta’s Corporate Human Rights Policy outlines its commitment to respecting human rights as set out in the UNGPs.[160] As a member of the Global Network Initiative (GNI),[161] Meta has also committed to upholding the GNI Principles on Freedom of Expression and Privacy.[162]
The Santa Clara Principles on Transparency and Accountability in Content Moderation provide important guidance for how companies should carry out their responsibilities in upholding freedom of expression.[163] Based on those principles, companies should clearly explain to users why their content or their account has been taken down, including the specific clause of the Community Standards that the content was found to violate.
Companies should also explain how the content was detected, evaluated, and removed—for example, by users, automation, or human content moderators—and provide a meaningful opportunity for timely appeal of any content removal or account suspension. Meta has endorsed the Santa Clara Principles[164] but has not fully applied them.
IV. Recommendations
To Meta (Instagram and Facebook)
Dangerous Organizations and Individuals (DOI) Policy
- Overhaul the DOI policy so that it is consistent with international human rights standards, in particular, to ensure that Meta platforms permit protected expression, including about human rights abuses, political movements, and organizations that Meta or governments designate as terrorist.
- Instead of relying primarily on a definition of terrorist entities or dangerous organizations, refocus the policy on prohibiting incitement to terrorism, drawing on the model definition advanced by the mandate of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism.[165]
- To the extent that a revised policy includes a definition of terrorist entities or dangerous organizations, do not rely exclusively on the lists of particular states in determining whether to bar an organization.
- Publish Meta’s list of Dangerous Organizations and Individuals.
- Clarify which of the organizations banned by Israeli authorities are included under Meta’s Dangerous Organizations and Individuals policy.
- Ensure accounts that trigger violations under the DOI policy are subject to proportionate penalties, given the propensity of this policy to erroneously flag protected expression, including about human rights abuses.
Government Requests
- Improve transparency around “voluntary” government requests, including from Israel’s Cyber Unit and other internet referral units, to remove content based on the Community Standards and Community Guidelines.
- Notify users if a government was involved in their content being taken down based on a policy violation, and provide a transparent appeal process for the decision.
- Include in Meta’s periodic transparency reports (see the illustrative sketch after this list):
- Number of requests per country (broken down by government agency).
- Compliance rate per country.
- The relevant company policy the post or account allegedly violated.
- Compliance rate per policy.
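To make this recommendation concrete, below is a minimal sketch of what one such transparency-report record could look like. The field names and figures are hypothetical, invented for illustration; they are not an existing Meta reporting format.

```python
# Hypothetical sketch of a per-country transparency record covering
# "voluntary" government takedown requests. Field names and values are
# invented for illustration; Meta publishes no such format today.

from dataclasses import dataclass

@dataclass
class GovernmentRequestRecord:
    country: str
    agency: str                # requesting government agency
    requests_received: int
    requests_complied_with: int
    policy_cited: str          # company policy allegedly violated

    @property
    def compliance_rate(self) -> float:
        """Share of requests that resulted in removal or restriction."""
        if self.requests_received == 0:
            return 0.0
        return self.requests_complied_with / self.requests_received

# Example record (all numbers invented):
record = GovernmentRequestRecord(
    country="Example Country",
    agency="Example Internet Referral Unit",
    requests_received=100,
    requests_complied_with=82,
    policy_cited="Dangerous Organizations and Individuals",
)
print(f"{record.country}: {record.compliance_rate:.0%} compliance "
      f"under {record.policy_cited}")
```

Aggregating such records by country, agency, and cited policy would yield the per-country and per-policy compliance rates listed above.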
Newsworthiness Allowance
- Conduct an audit to determine error rates concerning the removal of content that is of public interest and should be retained on Meta’s platforms under its newsworthiness allowance. This audit should also assess whether Meta is applying the newsworthiness allowance equitably and in a non-discriminatory manner.
- Improve systems to identify and allowlist content that is newsworthy but repeatedly removed in error.
Automation
- Improve transparency about where and how automation and machine learning algorithms are used to moderate or translate Palestine-related content, including sharing information on the classifiers programmed and used, and their error rates.
- Conduct due diligence to assess the human rights impact of temporary changes to Meta’s recommendation algorithms in response to the October 7 attacks, and share those findings publicly. This assessment and reporting should become standard practice whenever Meta applies temporary measures in crisis situations.
- Integrate the human-in-the-loop principle, wherein humans have a role in the ultimate decision-making process, for meaningful oversight of decisions made by Artificial Intelligence (AI) tools. This is also consistent with the UN Guiding Principles on Business and Human Rights (UNGPs), which require companies to set up internal accountability mechanisms for the implementation of policies and facilitate the right to remedy.
Transparency and Access to Remedy
- Provide users with adequate information when notifying them that their account or content has been restricted, including:
- The specific content or behavior that violated Meta’s Community Guidelines, including the specific clause that the content was found to violate, and how the content was detected and removed (for example, whether it was flagged by other users or by automated detection).
- The restriction placed on their account or content, including when their account or content has been removed or downgraded in recommender algorithms.
- How the user can appeal this decision.
- Ensure that all appeal mechanisms are accessible, functional, and available to all users, regardless of jurisdiction.
- Commission and publish an external audit of shadow banning, with the aim of improving public understanding of the changes Meta has made to its recommender systems, content ranking, and penalty system, and of their impact on freedom of expression.
Human Rights Due Diligence
- Solicit feedback from civil society and other relevant stakeholders on Meta’s implementation of commitments made in response to the BSR report and the Oversight Board to inform its own assessment of progress made.
- Work with civil society and other relevant stakeholders to set a timeline for implementing outstanding commitments, prioritized by urgency.
Preservation
- Preserve and archive material of human rights violations and abuses that may have evidentiary value, and provide access to data for independent researchers and investigators, including those in the fields of human rights, while protecting user privacy.
Acknowledgments
This report was researched and written by Deborah Brown, acting associate director in the Technology and Human Rights division, and Rasha Younes, acting deputy director in the Lesbian, Gay, Bisexual, and Transgender (LGBT) Rights program at Human Rights Watch.
Tamir Israel, senior researcher in the Technology and Human Rights division, and Eric Goldstein, deputy director of the Middle East and North Africa division, provided divisional reviews for this report. Omar Shakir, Israel and Palestine director; Anna Bacciarelli, acting associate director in the Technology and Human Rights division; Arvind Ganesan, director of the Economic Justice and Rights division; Letta Tayler, associate director in the Crisis and Conflict division; Brian Root, senior researcher in the Digital Investigations division; Belkis Wille, associate director in the Crisis and Conflict division; Benjamin Ward, deputy director in the Europe & Central Asia division; and Abbey Koenning-Rutherford, fellow in the United States Program, provided specialist reviews. Maria McFarland Sánchez-Moreno, acting deputy program director; Tom Porteous, deputy program director; and Michael Garcia Bochenek, senior legal advisor, provided programmatic and legal review.
Contributions to sections of this report were made by Ekin Ürgen, associate in the Technology and Human Rights division; Hala Maurice Guindy, research assistant; and Yasemin Smallens, senior coordinator of the LGBT Rights program.
Hina Fathima, producer in the Multimedia division, produced the video accompanying the report. Racqueal Legerwood, senior coordinator of the Asia division, provided editorial and production coordination and formatted the report. Additional production support was provided by Travis Carr, digital publications officer. This report was prepared for publication by Jose Martinez, administrative officer, and Fitzroy Hepkins, administrative senior manager. The report was translated by a senior Arabic translation coordinator.
External legal review was provided by Elizabeth Wang, founder of Elizabeth Wang Law Offices.
Human Rights Watch also benefited greatly from expert input from and collaboration with 7amleh, Access Now, and Amnesty International.
Human Rights Watch is grateful for all those who shared their experiences with us.