Oversight Board Calls Out Meta for ‘Failing’ to Protect LGBTQ People Online

On social media, attacks on LGBTQ people aren’t always as direct as calling someone a name or threatening to hurt them — obscuring attacks in coded language, jokes, and dog whistles is a common method people use to try to get around the rules. “There are a lot of coded references, such as the number 41 percent — oftentimes used in saying a trans person should ‘go 41-percent themselves’ — a reference to the percentage of trans people who attempt suicide,” says Alejandra Caraballo, a clinical instructor at Harvard Law School and LGBTQ rights advocate. Another popular method is intentionally dead-naming or misgendering trans people, according to Jenni Olson, senior director of social media safety at LGBTQ advocacy nonprofit GLAAD. “We’re talking about targeted misgendering and dead-naming, not accidentally getting someone’s pronouns wrong,” she says.

In April 2023, a Facebook user in Poland posted a picture of a striped curtain the colors of the trans pride flag: blue, pink, and white. Text overlaid on the photo said in Polish, “New technology. Curtains that hang themselves.” Another block of text said, “Spring cleaning <3.” The user’s bio stated, “I am a transphobe.” Despite multiple reports and appeals by users, it stayed up — until last fall, after Meta’s independent oversight board took up a case about the post.

Now, based on that post, the board has told the social media giant for the first time that it needs to step up the way it protects the real-world safety of LGBTQ people — specifically, by better enforcing its existing policies on hate speech and references to suicide. “What we’re asking for is greater attention to the treatment of LGBTQIA-plus individuals on the platforms, because, as this case demonstrated, there is virulent and unacceptable discrimination against the community on social media,” oversight board member and constitutional law professor Kenji Yoshino tells Rolling Stone. “Social media is a place where LGBTQIA-plus individuals go for safety, often when they’re worried about navigating physical space, so the idea that there would be this level of virulent hatred towards them is all the more painful and unacceptable.”

In a decision published Tuesday, the board wrote, “Meta is failing to live up to the ideals it has articulated on LGBTQIA+ safety. The Board urges Meta to close these enforcement gaps.” The board said Meta had repeatedly failed to crack down on attacks on the LGBTQ community that employ coded references or satirical memes to slip past moderators, a practice it calls “malign creativity,” and that the company was not enforcing its own rules on hate speech and references to suicide. It urged Meta to shore up enforcement to stop anti-LGBTQ hate from proliferating on its platforms.

Launched in 2020 and populated by an impressive roster of experts in human rights, free speech, government, law, and ethics, the oversight board has been likened to a Supreme Court or human rights tribunal for Facebook and Instagram. A separate entity from Meta, it rules on the appropriateness of moderation choices made by the social media behemoth and has the power to overturn the company’s decisions about whether a piece of content is removed or allowed to remain online. In addition to its rulings on content, the board issues recommendations for how Meta can adjust its policies to balance online safety with users’ freedom of expression. This case is the first time the board has issued recommendations to Meta to better protect LGBTQ people from real-world harm.

On Tuesday, Meta issued a statement saying it welcomed the board’s decision. “The board overturned Meta’s original decision to leave this content up. Meta previously removed this content so no further action will be taken on it,” the statement said in part. Meta must respond to the board’s recommendations within 60 days.

“The case illuminates Meta’s failure to enforce their own policies, which is something that we have been pointing out for years, so it’s very validating and gratifying to see that expressed by the board,” says Olson, of GLAAD. The organization submitted a statement during the oversight board’s public comment period on this case. “We found there are very real resulting harms to LGBT people from this kind of hateful conduct online,” Olson adds, citing GLAAD’s annual Social Media Safety Index report.

“The post is a reference to the high rate of suicide attempts among trans people, so it’s a joke mocking trans people for suicide,” says Caraballo, who helped GLAAD draft its public comment, “which is explicitly against the community guidelines, but something their automated systems would not be able to understand the nuance of.” The board also pointed to the “spring cleaning” reference with a heart, along with the user’s assertion that they are a transphobe, as signs that should have been pieced together to reveal the post’s intent.

The picture of the curtain didn’t just elude automated review. According to the board’s report, although the post drew only around 50 reactions, 11 users reported it for violating either the hate speech community standard, which forbids attacks on people on the basis of race, gender identity, or other “protected characteristics,” or the suicide and self-injury standard, which forbids celebrating, promoting, or encouraging suicide. Some of those reports were routed to human reviewers, but the post was found not to violate the policies and was allowed to stay up, despite multiple appeals from the users who reported it. After the oversight board announced in September 2023 that it would take up the case, Meta permanently removed the post and banned the user for prior violations.

Some users on Meta’s platforms contend with attacks like this one every day. Alok Vaid-Menon, a gender-nonconforming comedian and poet with 1.3 million followers on Instagram, says they are often the target of “animus” online. “Despite the fact that this vitriol often violates Meta’s own policies, it is allowed to persist,” they say. “My peers and I have countless stories of reporting violent threats that never get addressed. This rampant lack of accountability and platforming of anti-LGBTQ extremism endangers our safety on and offline, and has contributed to escalating anti-trans discrimination all over the world.”

Caraballo agrees. “Meta has consistently failed to hold anti-LGBTQ and particularly anti-trans content as violative of its community standards, and continuously let some of the worst anti-LGBTQ accounts target trans people, misgender them, and push horrible conspiracy theories that LGBTQ people are groomers.”

The oversight board suggested the company clarify its enforcement procedures to specify that a flag representing a protected group can stand in for members of that group, and that an image need not depict a human figure to represent them. “The real mystery of this case is why there was such an enormous gap in between the policies as they were stated and their enforcement,” Yoshino says. “The view of the board was that Meta has the right ideals, it’s just living under those ideals, rather than living up to them. And what we get to do that I don’t think an ordinary member of the public gets to do is to say to Meta, ‘you need to do better.’” In its recommendations, the board further suggested Meta enhance training for content reviewers, offering several ideas: expanding training on gender identity issues, creating a task force on transgender and non-binary people’s experiences on Meta’s platforms, and assembling a group of experts to review content affecting the LGBTQ community.

Vaid-Menon echoes the need to involve the LGBTQ community in efforts to improve moderation. “This incident should have been addressed by Meta’s moderation team,” they say, referring to the curtain post. “The fact that it wasn’t speaks to the continued failures of the company to implement its own hate speech policies. It doesn’t matter what’s written when it’s not actually enforced. What is needed going forward is for Meta to provide better training and guidance to its moderators on anti-trans hate. This can best be done in partnership with trans and gender nonconforming creators and organizations.”