Lady Dariana, a trans woman from Ecuador, immigrated to the United States in 2022 seeking safety and protection after facing life-threatening circumstances in her home country.
“I wanted a more peaceful existence—after receiving death threats from liberal groups associated with mafias in my country—where I could be myself without fear,” said Lady Dariana, 33, who asked that her last name not be shared for her safety.
However, like the 33 other transgender and nonbinary individuals interviewed for this story, she encountered discrimination throughout her journey as she navigated increasingly hostile social climates.
According to experts, the growing reliance on artificial intelligence-fueled systems, which have already demonstrated a disproportionate impact on minority groups, perpetuates that hostility: security checkpoints and identification protocols adhere to a binary definition of gender, leaving little room for those who don’t fit the criteria.
“Algorithms are not neutral,” said Liaam Winslet, an Ecuadorian-born advocate with Transgrediendo, a Queens-based group that advocates for healthcare and individual rights for trans, nonbinary, and gender-expansive people in New York. “They are created by people who have biases, and those biases are translated into code that impacts entire communities, particularly the trans and LGBTQ+ communities in public.”
Unsafe Passage
Lady Dariana said her ordeal began when she was detained at a border control facility in Reynosa, Mexico, where she was placed in a cell for men, despite her Ecuadorian ID listing her gender as female. Since 2016, Ecuadorian law has permitted the modification of gender markers to align with gender identity.
Eventually, she was able to leave the facility and soon arrived in the U.S. She stated that U.S. immigration officers also questioned her ID but allowed her entry into the country.
Lady Dariana recounted traveling from the border to New York City via McAllen, Texas. After passing through the TSA checkpoint’s full-body scanner, agents asked her to step aside for additional screening, she said.
Though the agents spoke in English, she felt they were singling her out for being transgender: they kept staring at the monitor displaying her body scan and gesturing toward the area near her genital region.
“They also asked if I was ‘carrying drugs or something,'” Lady Dariana said. “I wasn’t carrying anything; I had just left the border detention center.”
Lady Dariana stated that she was pulled aside for a pat-down by TSA agents. She claimed that, as allowed by TSA policy, she requested to be searched by a female officer. Instead, two male officers were assigned to search her, touching her crotch and groping her chest in the process, she alleged.
Such experiences are reportedly all too common for transgender and nonbinary travelers, who filed 5 percent of all complaints about mistreatment by TSA agents between 2016 and 2019—despite making up an estimated 1 percent of the population, according to a ProPublica analysis.
In June 2023, the TSA announced that it would transition to a gender-neutral, artificial intelligence-driven screening system intended to ease travel for transgender and nonbinary passengers. Advocates, including the American Civil Liberties Union, which had long been concerned about issues with body scanning technology, welcomed the change.
However, the changes came alongside another controversial introduction of AI technology at airports: Customs and Border Protection rolled out biometric facial matching technology to scan passengers at more than 200 airports in the U.S., including all airports with international departures.
The ACLU has sued Customs and Border Protection as well as other government agencies to obtain details on their use of facial recognition technology and surveillance, saying the measures pose a threat to individual safety and privacy, particularly for marginalized communities.
These agencies have abused, or continue to abuse, surveillance to spy on protesters, political opponents, Black and Brown communities, and more, the ACLU wrote, arguing that “adding face recognition to their arsenal raises serious cause for alarm.”
Coding the Binary
Private companies and government agencies are increasingly reliant on AI to perform basic daily tasks. As of November 2023, the technology was used in more than 700 different federal government initiatives, according to AI.gov, a site established by the White House Office of Science and Technology Policy.
Examples include AI software used for customs and border security to “recognize objects, people, and events in any image or video stream,” which, according to the Department of Homeland Security’s AI Use Case Inventory, can identify “objects, people, and events of interest” once trained.
Smartphone manufacturers like Apple and Samsung allow customers to unlock their phones using AI facial recognition technology, while banks like Bank of America use facial recognition to let customers log into their mobile apps. Through Amazon Go, Amazon offers a cashier-free shopping service that uses facial recognition.
The White House’s AI.gov promises that “The federal government is also setting strong guardrails to ensure its use of AI keeps people safe and doesn’t violate their rights.”
‘Reinforcing Stereotypes’
Research from the University of Colorado at Boulder indicates that AI facial recognition technology correctly identifies cisgender people, those who identify with the gender they were assigned at birth, 98.3% of the time. When identifying transgender people, however, the failure rate was 38%. According to the researchers, AI-based systems have strikingly low recognition rates for nonbinary people and other genders.
“These systems run the risk of reinforcing stereotypes of what you should look like if you want to be recognized as a man or a woman. And that has an impact on everyone,” said Morgan Klaus Scheuerman, a researcher at UC Boulder.
Scheuerman noted that the algorithm mistook him for a woman, presumably because of his long hair. He and his fellow researchers, Jacob Paul and Jed Brubaker, say the technology has not evolved at the same pace as modern understandings of gender. Changing algorithms to handle a wider range of identities, they argue, would require a more complex and gender-diverse approach to coding.
“I have fought and will continue to fight, and I am not afraid of them. Since I was born, I have been a survivor of the system.” — Josemith Gomez
Training an AI model involves feeding a large amount of data into a computer and observing how the computer constructs a model from the patterns it finds. But trans and nonbinary people often fall outside these algorithmic models, which rest on a binary understanding of gender.
Experts say that when artificial intelligence algorithms are trained on data sets that encode gender as a binary, their ability to recognize diverse gender identities, such as transgender or nonbinary people, is compromised.
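To make that mechanism concrete, here is a minimal, hypothetical sketch of a classifier trained on binary gender labels. It is not any vendor’s actual system; the data and numbers are invented for illustration. The point it demonstrates is that the label set is fixed at training time, so the model can only ever output one of the two classes it was given.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for face features extracted from images (invented data).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))

# The label set is hard-coded to a binary: 0 = "female", 1 = "male".
# Nonbinary identities have no representation anywhere in this encoding.
y_train = rng.integers(0, 2, size=200)

model = LogisticRegression().fit(X_train, y_train)

# At inference time the model must output one of the two trained classes,
# so anyone who fits neither is still forced onto the binary.
new_face = rng.normal(size=(1, 5))
print(model.predict(new_face))        # always 0 or 1
print(model.predict_proba(new_face))  # probabilities over only two classes

Because the class set is baked in at training time, no inference-time adjustment can make the model emit an identity it was never given labels for; recognizing nonbinary people would require new labels and retraining.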
Understanding how AI operates means understanding how machine learning systems are created, according to Meredith Broussard, a professor and data journalist at New York University who studies bias in technology.
“The mathematical patterns in the data are the same as those in the world. Which is to say there are patterns of discrimination,” added Broussard, author of the books “More Than a Glitch” and “Artificial Unintelligence.” “So any form of discrimination that exists in the world or has historically existed is likely to be reflected in the data used to train the system.”
Growing Hostility
An estimated 1.6 million transgender youth and adults live in the United States, a small portion of the country’s 336 million people, according to 2022 estimates from the Williams Institute.
Nearly 500 bills aimed at transgender children and adults had been introduced in 41 states as of late February 2024, including bills that would outlaw changing legal documents to match one’s gender identity and bills that would bar trans people from using bathrooms matching their gender identity, according to the Trans Legislation Tracker, an independent compilation of state-level legislation.
The bills follow last year’s uptick in anti-trans legislation, which saw three times as many measures introduced as the previous record, the group found.
In the United States, at least 32 transgender people were killed in 2023, compared with 41 violent deaths recorded in 2022, 59 in 2021, and 45 in 2020, the Human Rights Campaign estimated. According to the HRC, trans people of color made up the majority of the victims. The group claims that the figures are likely an undercount because many murdered trans people are not properly identified in media or police reports.
According to an FBI report, 469 hate crimes against trans people were committed in the U.S. in the previous year.
Compounding Existing Biases
Transgender people and their advocates worry that AI will only worsen hostility from law enforcement personnel.
“We are like trash to the police,” said Kenia Orellana, a transgender woman from Honduras. Orellana said she suffered discrimination and violence from law enforcement officers not only in Honduras, but also in the U.S. “They stop me on the street just because of how I look,” she said. “They ask me what I’m doing at night, or if I’m carrying drugs.”
Fifteen transgender people died in the United States between 2013 and 2022 while held in jails, prisons, or Immigration and Customs Enforcement detention facilities, where they faced abuse by law enforcement officers, according to Human Rights Watch.
“I don’t want to be watched, to be tracked, or to feel like someone is watching me, but it’s something they have always done,” said trans woman Josemith Gomez. (PHOTO/Courtesy Josemith Gomez)
Josemith Gomez, 22, was born in Venezuela; she said her family kicked her out as a teenager because she was transgender. Thirty-one of the 34 trans people interviewed for this story said their families had rejected them.
Gomez came to the U.S. in 2022, risking the dangerous passage through the Darien Gap. She has felt safer since arriving in New York, where she now lives in Queens, but said discrimination persists. Less than a year ago, she said, a man in the Bronx with whom she was in an abusive relationship threatened to call ICE and have her deported.
Instead, she called the police, but said officers did nothing to intervene when the man threw her belongings out of a sixth-floor window.
“The police just told me to pick up my things and leave,” she said.
Gomez continues to wonder whether the police did nothing because she is a woman, a transgender woman, an immigrant, or Hispanic. “I don’t know, and maybe I never will.”
Dangers of Facial Recognition
Many of the 34 trans individuals interviewed for this story said they’ve experienced negative initial encounters with facial recognition software.
Erika Lopez, 33, a transgender woman who immigrated from Mexico, said she has repeatedly been locked out of her banking app because its AI facial recognition software fails to recognize her.
“My appearance has significantly changed since my gender transition, but the information stored in the system is still based on how I looked before,” Lopez told the NYCity News Service. “This happens all the time … it is very frustrating, especially when I am trying to perform important transactions at the bank, especially if someone is watching me.”
Orellana, the Honduran woman who described harassment by police, said she has also been blocked from using biometric identification to unlock her Apple iPhone.
“My phone sometimes fails to recognize me correctly, especially if my appearance changes due to makeup or hairstyle,” Orellana said. “I worry about getting blocked because I rely on my phone for so much of my daily activities. It has occurred to me several times.”
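The failure mode Lopez and Orellana describe is consistent with how face verification commonly works: a new scan is reduced to a numerical embedding and compared against the embedding stored at enrollment. The sketch below is a simplified, hypothetical illustration of that pattern, not Apple’s or any bank’s actual code; the embeddings and threshold are invented.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard similarity measure between two face embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.80  # assumed decision threshold; real systems tune this per vendor

enrolled = np.array([0.9, 0.1, 0.4, 0.2])  # embedding captured at enrollment
today = np.array([0.2, 0.8, 0.1, 0.7])     # embedding after appearance changed

score = cosine_similarity(enrolled, today)
print("unlocked" if score >= THRESHOLD else "locked out", round(score, 2))

# If the stored embedding predates a transition, or even a new hairstyle or
# makeup, the similarity score can drop below the threshold and the
# legitimate user is rejected until they re-enroll.

Re-enrolling can fix an individual lockout, but it does not address systems that misclassify gender in the first place.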
Some transgender people worry that these issues with apps and phones are a sign of even greater forms of bias. AI has already demonstrated a disparate impact on minority groups, with issues ranging from racial discrimination in resume screening to disparate insurance rates and concerns with the use of facial recognition technology for surveillance.
“Algorithms are not neutral.” — Liaam Winslet, Transgrediendo
“I think the growth of AI discrimination demonstrates that we are not doing enough, because we continue to see more products come to market that harm the public through the same biased, invasive, error-prone algorithms to make life-altering decisions,” said Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project (STOP). “And I think we know that the risk is most acute for those who have often been most systematically marginalized by large institutions and governments, including trans people.”
For instance, activists have voiced concerns about police attending the annual LGBTQ Pride march, going so far as to bar the NYPD and other law enforcement from marching in uniform until 2025. Activists say that AI could give officials a dangerous tool to identify those who attend LGBTQ events and criminalize them.
“I think facial recognition is really concerning because it means you can potentially identify thousands of people with one photo,” Fox Cahn said.
Examples of bias in AI technologies have been investigated in a number of studies, including those conducted by independent tech advocates like Timnit Gebru’s Distributed AI Research Institute and government experts like the National Institute of Standards and Technology (NIST).
“Bias is neither new nor unique to AI and it is not possible to achieve zero risk of bias in an AI system,” NIST researchers wrote in a 2022 publication, “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.”
‘The Scale of Damage’
The NIST report warned that the scope of bias potentially perpetuated by AI is more of a threat than individual human bias because it can happen without transparency or accountability — and at a massive scale.
The harmful effects of AI reach beyond individuals and businesses to society as a whole, the report’s authors argued, and a concerted effort is required to address the scale of the damage and the speed at which AI applications and ever-expanding machine learning models can cause it across fields and industries.
After the U.S. Capitol attack on January 6, 2021, law enforcement agencies turned to Clearview AI, a private company whose controversial AI-driven facial recognition software harvests millions of photos from social media, police mugshots, and other sources.
The FBI, Immigration and Customs Enforcement, and the Department of Homeland Security are among the law enforcement agencies that use the company’s app. According to Hoan Ton-That, the company’s CEO, searches increased by 26% in the days following the Capitol attack.
Ton-That told NYCity News Service in a statement that the company aims to eliminate any bias in its algorithm, adding that Clearview AI’s algorithm achieved 99.85% accuracy in matching the correct face across all demographics in a test drawing on a collection of 12 million photos.
Ton-That did not address how the software performs with regard to identifying transgender and nonbinary individuals, or what actions the company is taking to reduce bias in gender identification.
Standards for the Future
Experts say everyone should be concerned about how officials are secretly integrating AI into daily life.
Winslet, the Transgrediendo activist, is concerned about intensifying surveillance of minority groups, especially the African-American, Latinx, and transgender communities.
“We are seeing a clearly discriminatory approach by law enforcement, especially with the implementation of robots equipped with cameras in specific areas of New York,” Winslet said, pointing to a robot temporarily assigned to patrol the Times Square subway station and the “digidog” robots the NYPD has used since 2020.
It is crucial that trans people and other minorities be actively involved in the development of AI, said Bamby Salcedo, a nationally recognized trans leader who has visited the White House. That could include working groups, hiring trans and minority developers, and testing new software with an eye toward eliminating bias.
Salcedo and others interviewed also believe that protecting trans rights in the context of artificial intelligence and surveillance will require government transparency and clear regulations.
“No matter what happens,” Josemith Gomez said, “the future will likely require the same resistance and perseverance that trans people have used to get this far.”
“I don’t want to be watched, to be tracked, or to feel like someone is watching me,” Gomez said, “but it’s something they’ve always done.”
Still, she reflects, “I am not afraid of them. I have fought and will continue to fight. Since I was born, I have been a survivor of the system.”