May 17, 2021
Topic: LGBT Rights
Tags: #LGBTQI rights, #trans rights, #facial recognition, #European Union
Located: Germany, Belgium, USA
By: Yair Oded
Automated gender recognition software attempts to classify people as either ‘male’ or ‘female’ based on their facial features, the way they sound and the manner in which they move. It places those whose gender doesn’t match the sex they were assigned at birth at great risk of further marginalisation, exclusion and discrimination. Harnessing the rising ubiquity of AI systems, the technology also threatens to reinforce outdated social taboos and stereotypes surrounding gender and to effectively erase anything existing outside of the crudest binary perception of ‘male’ and ‘female’.
As the EU embarks on a legislative process of regulating the use of AI within the Union, a joint campaign launched by All Out, Access Now, Reclaim Your Face and Os Keyes is calling on the EU to include an explicit ban on automated gender and sexual orientation recognition in the bill.
On 21 April, the EU Commission - the executive branch of the EU - delivered its proposal for a legal framework to regulate AI. While it did highlight the inherent risks of some AI applications, the Commission did not go as far as prohibiting the deployment of automated gender recognition. The joint campaign to ban the technology, which so far has gained over 24,000 signatures, will now place its focus on the EU Parliament and Council, which are slated to continue working on the AI regulation bill.
The campaign originally stemmed from Keyes’ research about gender recognition systems and their impact on trans and nonbinary people. “I was prompted to study these gender recognition algorithms by having to see them used in my own discipline [...] seeing people use it for research purposes and as a consequence producing research that cut out people who these systems cannot recognise,” Keyes told Screen Shot. “As I got in further,” they added, “I got to see more examples of it being used and deployed in the real world and a lot of people talking about deploying it further in situations that seem very, very dangerous for trans and gender non-conforming people.”
Keyes’ research was then referenced in the EU’s five-year LGBTI strategy, in a passage pointing out the danger in deploying automated gender recognition.
When Yuri Guaiana, senior campaign manager at All Out - an international LGBTQI advocacy organisation - came across the reference to Keyes’ work in the EU’s LGBTI strategy, he became fascinated with the topic and, upon further research, launched a campaign to pressure the EU to ban automated gender and sexual orientation recognition. To that end, All Out joined forces with Access Now, an NGO advocating for a human rights-based regulation of AI, and Reclaim Your Face, a citizen initiative to ban biometric mass surveillance in the EU. They also secured the endorsement of Keyes, who signed the letter submitted to the EU Commission along with the petition.
Speaking to Screen Shot, Keyes mentioned various existing applications of automated gender and sexual orientation recognition and highlighted some of the risks this technology poses for trans and gender non-conforming people.
One of the examples they referenced was a campaign by the Berlin Metro on International Women’s Day 2019, where women could pay 21 per cent less than men for a ticket. In order to authenticate riders’ gender, automated gender recognition software was embedded in the ticketing machines; those whom the system failed to recognise as female were instructed to seek help from a service person at the station.
Keyes pointed out two main issues in this case: “the first is the fact that you are being told ‘no you do not fit’,” they said. “The second is this idea of ‘well you can just go talk to an employee and they’ll work it out for you’,” they added. “Queer and trans people do not have the best experiences going to officials going ‘hey, just to let you know, I don’t fit, and I’m not meant to be here, and can you please fix this’. And when we think about the proposed deployments in places like bathrooms, you can see pretty clearly how that could get a lot more harmful and difficult.”
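The mechanics of that failure are simple to see. Below is a minimal, hypothetical sketch of the ticketing logic described above; the function name `classify_gender`, the confidence threshold and the fare values are illustrative assumptions, not details of the actual Berlin Metro system.

```python
# A minimal, runnable sketch of the discount gate described above.
# classify_gender is a stand-in for a proprietary gender-recognition
# model; its labels, threshold and the fares are invented for illustration.

STANDARD_FARE = 2.80  # illustrative fare in euros, not the real price
DISCOUNT = 0.21       # the 21 per cent reduction from the campaign

def classify_gender(face_image):
    """Stand-in for a binary gender classifier.

    A real deployment would run a vision model here; we return a fixed
    low-confidence guess so the sketch executes as written.
    """
    return "male", 0.55

def quote_fare(face_image):
    label, confidence = classify_gender(face_image)
    if label == "female" and confidence >= 0.9:
        return STANDARD_FARE * (1 - DISCOUNT)
    # Everyone the model cannot confidently place as 'female' -- including
    # many trans and non-binary riders -- falls through to this branch.
    print("Unable to verify eligibility; please see a service person.")
    return STANDARD_FARE

print(quote_fare(face_image=None))  # prints the referral message, full fare
```

The point of the sketch is the hard fall-through branch: the system has no outcome at all for anyone it cannot sort into one of two boxes, other than referral to a human official.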
Keyes also mentioned the growing use of this technology in advertising, including on physical billboards that curate ads based on the perceived gender of the person walking past them: cars for men, dresses for women, and so on. Keyes pointed out that beyond the harm this application of automated gender recognition could cause trans and non-binary people, it also circulates incredibly negative and limiting social messages about gender: “This is what you’re allowed to do with gender, this is who you can be, this is what you can buy,” they said.
Yuri Guaiana of All Out seconded this analysis. “How are you assuming that just because of your gender you are interested in certain products?” he said, highlighting that “interests are more important than gender in consumer behaviour.”
But Keyes emphasised the particular trauma this type of advertising can inflict on trans and gender non-conforming people. To them, the high potential of such advertising tools to misgender people who do not ‘fall neatly’ into either gender category, and the implied message that such people simply do not fit, embody a blatant manifestation of transphobia. “What [transphobia] actually looks like is lots of small interactions [...] it’s a death of a thousand cuts,” Keyes said. “And this is something I think anyone who is trans experiences on a day-to-day basis, like the constant small harms.”
Another application of the technology, which Keyes maintains is rarer but certainly in use, is in passport biometrics and various authentication systems. In this type of deployment, automated gender recognition is used to reduce the number of face images a given machine has to sort through in order to confirm a person’s identity.
“The problem with this is if it gets it wrong, one way or the other, then what you get is the system concluding that this person does not appear in the database even though they do, and [...] someone [could be] locked out of the system for being gender non-conforming,” Keyes said, adding that the secrecy in which this technology is shrouded and the lack of transparency regarding where, when and how it is deployed amplify its risks.
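To see why a wrong prediction produces a false “not in the database” result, here is a minimal sketch under an assumed design - not drawn from any specific vendor - in which the face gallery is partitioned by recorded gender and only the partition matching the model’s prediction is ever searched. All names, embeddings and thresholds are invented.

```python
# Hypothetical sketch of gender-filtered identity matching. The gallery
# layout, the partition-by-gender design and all values are assumptions
# made for illustration, not any specific vendor's implementation.

GALLERY = {
    "female": {"alice": [0.1, 0.9], "dana": [0.4, 0.7]},
    "male":   {"bob":   [0.8, 0.2]},
}

def predict_gender(embedding):
    # Stand-in for an automated gender recognition model; a crude
    # threshold on one embedding dimension keeps the sketch runnable.
    return "female" if embedding[1] > 0.5 else "male"

def match(embedding, threshold=0.1):
    # Only the partition picked by the gender model is searched at all.
    partition = GALLERY[predict_gender(embedding)]
    for name, stored in partition.items():
        distance = sum((a - b) ** 2 for a, b in zip(embedding, stored))
        if distance < threshold:
            return name
    return None  # reported as "not in the database"

print(match([0.4, 0.70]))  # -> 'dana': right partition, record found
print(match([0.4, 0.45]))  # -> None: dana's record is only 0.0625 away,
                           # within the threshold, but it sits in the
                           # partition the gender model ruled out
```

The face match itself would still succeed in the second call; the lockout comes entirely from the gender filter discarding the correct partition before any comparison happens.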
“We know that everyone is talking about doing it, and they most certainly are, but we can’t tell where and we can’t tell which discriminatory outcomes are caused by this,” they said, referencing a case where a trans woman’s identity could not be verified by Uber’s algorithm. “That could look a hell of a lot worse if we were talking about places like, again, biometrics, border control, passport security systems; places where you have much fewer rights or abilities to appeal if you can’t even work out what the system is not recognising about you in the first place [...] and where the consequences of forced interactions with officials can be much more strenuous.”
Delineating the broader harm automated gender and sexual orientation recognition can inflict, Guaiana of All Out mentioned that the use of this technology could prove life-threatening in countries where being LGBTQI is illegal. “If they are using [automated gender and sexual orientation recognition] in places where being gay is illegal, and they can predict with a huge margin of being wrong that somebody rallying against something or walking in the street is gay—that can have very serious consequences,” Guaiana said. “This technology is used by government agents, but also private companies. It is censorship. Because in certain countries […] they could start surveilling people just because they predicted they are LGBTI.”
After reading over the EU Commission’s proposal last week, Guaiana, as well as other members of the campaign, noted that despite listing some applications of AI that should be prohibited, the Commission did not go as far as it should have in calling for a ban on harmful AI technologies that violate fundamental rights. “There is no explicit - or implicit, for that matter - ban on automatic recognition of gender and sexual orientation. For us, of course, this needs improvement,” Guaiana told Screen Shot.
But All Out and its partners are far from discouraged. “Of course we would have preferred very much for the Commission to put [the ban] in the initial draft,” said Guaiana, “but I think it’s going to be a lengthy legislative process, [and] it’s still a good starting point [...] There is still room to grow the campaign, keep the pressure up, and finally win this battle.”
Once more signatures are gathered and the legislative agenda and timeline of the EU Parliament and Council become known, the campaign to ban automated recognition of gender and sexual orientation will direct its resources at the Union’s representatives, recognising that they have the authority to amend the Commission’s recommendation and introduce the ban into the bill.
Guaiana and the other organisers of the campaign believe that a ban on this particular type of technology in the EU could have a global ripple effect, as did the General Data Protection Regulation (GDPR) back in 2016. Such a prohibition, says Guaiana, could “help forbid the EU not only from implementing this technology within the EU, but also from exporting it [...] and therefore that can help slow down the spread of this technology around the world.”
As we tackle the behemoth that is the tech industry, and as we try to regulate the application of various AI technologies and their deployment by both governments and companies, it is easy to feel powerless in the face of their seemingly inexorable force. Keyes, however, offers a slightly more optimistic - though pragmatic, as they define it - take on the issue. “I happen to believe that people thinking they can’t interfere [with technological development] is why interfering hasn’t worked thus far,” they said, “and there are a lot of examples that we don’t necessarily think about of technologies being banned in ways that did seriously derail things. Like, I’m a trans person, do you know how shitty trans healthcare is partly because nobody bothered doing any research because of the social taboos behind it?”
“We think of them as bad examples, but in a weird way they actually demonstrate that we can intervene in technological development; we can slow things down and we can redirect things,” they said, adding that our objective shouldn’t only be to root out already existing technologies that prove harmful, but to challenge the very way we approach, research and develop technology in the first place.
“I think it’s possible,” they finally said, “because, well, if changing how people do things isn’t possible then the technology industry isn’t shit, because that’s what they claim they’ve been doing this whole time. Like, you’re telling me that your app can disrupt society beyond recognition, but also your software developers’ workflow is immutable and cannot be changed? One of those two things is false.”
This article was published as part of an ongoing content partnership between FairPlanet and Screen Shot Media.
Image: Delta News Hub.