GLAAD says Meta allows anti-transgender hate to “flourish” on its platforms

The LGBTQ+ advocacy group asserts that the social media company violates its own policies by failing to remove anti-trans content, leading to “well-documented real-world harms.”

By Gaby Del Valle, a freelance writer. Her previous work has focused on immigration policy, border security systems, and the rise of the New Right.

Meta billboard outside its headquarters

GLAAD, the nation’s largest LGBTQ+ media advocacy group, alleges that Meta’s content moderation system is enabling an “epidemic of anti-trans hate” to flourish on its platforms. According to a recent report from the organization, Meta has permitted numerous anti-trans posts, including those advocating violence against individuals, to remain online. The organization states that LGBTQ+ individuals “experience an increasing number of well-documented real-world harms” due to “propaganda campaigns, fueled by the anti-LGBTQ+ extremists that Meta permits to thrive on its platforms.”

The report documents several instances of anti-trans content posted on Facebook, Instagram, and Threads between June 2023 and March of this year, all of which GLAAD reported through Meta’s “standard platform reporting systems.” Some of the posts featured hateful anti-trans slurs, while others, such as an Instagram post depicting a person being stoned, with the stones replaced by laughing emojis, refer to transgender individuals as “demonic” and “satanic.” Several posts accuse transgender individuals of being “sexual predators,” “perverts,” and “groomers,” the latter of which has increasingly become an anti-LGBTQ+ slur in recent years.

According to GLAAD’s findings, Meta often fails to remove posts that violate its own hate speech policies. After GLAAD reported posts that breached those policies, Meta reportedly either responded that the flagged posts were not in violation or simply took no action against them.

Some of the posts were made by prominent accounts, including Libs of TikTok, operated by far-right influencer Chaya Raichik. Raichik has become involved in conservative school board politics in recent years and was appointed to Oklahoma’s state library advisory committee in January. According to the report, a “prominent anti-LGBT extremist account” targeted a gender nonconforming elementary school teacher in Kitsap, Washington, with Facebook and Instagram posts before the school received bomb threats. A news article cited in the report identifies the account as Raichik’s.

In a statement to The Washington Post, GLAAD said that “Meta itself acknowledges in its public statements and in its own policies that hate speech fosters an environment of intimidation and exclusion and may lead to offline violence.”

A 2022 Media Matters report found that Meta had run, and profited from, at least 150 ads on its platforms accusing people of being “groomers.” Meta told the Daily Dot that year that making unsubstantiated accusations that LGBTQ+ individuals are groomers violates its hate speech policies. The “Gays Against Groomers” Facebook account was suspended by Meta last September but was later reinstated; Meta attributed the suspension to a platform error, according to the Daily Dot.

In January, Meta’s Oversight Board overturned the company’s decision not to remove a post encouraging transgender individuals to commit suicide. The board noted that the post had been reported 12 times by 11 different users, but Meta’s automated systems subjected only two of those reports to human review. Both reviewers “assessed it as non-violent and did not escalate it further.” Meta removed the post only after the board took up the appeal.

The issue, the board asserted, was not that Meta lacked adequate policies against hate speech but that it failed to enforce them. The board found that the individual behind the original post had previously harassed transgender individuals online and had created a new Facebook account after being suspended. “Meta’s repeated failure to take the appropriate enforcement action, despite multiple signals about the post’s harmful content, leads the Board to conclude the company is not living up to the ideals it has articulated on LGBTQIA+ safety,” the board wrote.

Meta did not immediately respond to a request for comment.