Artificial intelligence can now guess whether people are gay or straight from photos of their faces, a new study from Stanford University has found. The research showed that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% of the time for women. The results are raising questions about the biological origins of sexual orientation, the ethics of face-detection technology, and the potential for such software to violate people’s rights.
The machine intelligence tested in the research was trained on a sample of more than 35,000 facial images that men and women had publicly posted on a US dating website. The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using “deep neural networks”, sophisticated mathematical systems that learn to analyze visuals from large datasets, and then fitted a classifier to those features.
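The two-stage pipeline described above, a deep network producing feature vectors and a simple classifier fitted on top of them, can be sketched in miniature. Everything below is a hypothetical illustration: the embeddings are synthetic random vectors standing in for a real network’s output, the labels are arbitrary binary classes, and the classifier is plain logistic regression trained by gradient descent, not the study’s actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_face_embedding(label, dim=128):
    """Stand-in for a deep network's feature vector: a class-dependent
    mean plus heavy noise. A real pipeline would compute this from an
    image; here it is fabricated purely for illustration."""
    center = np.full(dim, 0.5 if label == 1 else -0.5)
    return center + rng.normal(scale=2.0, size=dim)

# Toy labeled dataset of synthetic "embeddings".
labels = rng.integers(0, 2, 400)
X = np.array([fake_face_embedding(y) for y in labels])
y = labels.astype(float)

# Logistic regression by gradient descent -- the kind of shallow
# classifier commonly fitted on top of deep features.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(200):
    z = np.clip(X @ w + b, -30, 30)      # clip logits for numerical safety
    p = 1.0 / (1.0 + np.exp(-z))         # predicted probabilities
    w -= 0.5 * (X.T @ (p - y) / len(y))  # gradient step on weights
    b -= 0.5 * np.mean(p - y)            # gradient step on bias

accuracy = np.mean(((X @ w + b) > 0) == (y == 1))
```

Because the synthetic classes are well separated, the toy classifier reaches high training accuracy; the point is only to show the shape of the approach, not to reproduce the study’s results.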
The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles.” The data also identified more specific trends: gay men tended to have narrower jaws, longer noses, and larger foreheads than straight men, while gay women had larger jaws and smaller foreheads than straight women.
Human judges performed much worse than the algorithm, accurately identifying orientation only 61% of the time for men and 54% for women. When the software reviewed five images per person, it was even more accurate: 91% of the time for men and 83% for women. Broadly, the authors write, that means “faces contain more information about sexual orientation than can be perceived and interpreted by the human brain.”
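The jump in accuracy from one image to five is what aggregation predicts. As a rough illustration (not the study’s actual aggregation method), suppose each image yields an independent prediction that is correct 81% of the time, the reported single-image figure for men; the probability that a majority vote over five images is correct is then straightforward to compute:

```python
from math import comb

def majority_vote_accuracy(p_single, n_images):
    """Probability that a majority of n independent per-image predictions,
    each correct with probability p_single, gets the right answer.
    Independence between images is a simplifying assumption."""
    majority = n_images // 2 + 1
    return sum(comb(n_images, k) * p_single**k * (1 - p_single)**(n_images - k)
               for k in range(majority, n_images + 1))

single = 0.81                               # reported per-image accuracy for men
five = majority_vote_accuracy(single, 5)    # ~0.95 under independence
```

This idealized figure (about 95%) overshoots the study’s reported 91%, which is expected: multiple photos of the same person are correlated, not independent, so real aggregation gains are smaller than the independence assumption suggests.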
The researchers suggest the findings provide “strong support” for the theory that sexual orientation stems from exposure to certain hormones before birth, meaning people are born gay. It is worth noting that people of color were not included in the study, and that it did not consider transgender or bisexual people.
Still, the implications for artificial intelligence are alarming. Given the billions of images stored on social media sites and in government databases, public data could be used to detect people’s sexual orientation without their consent.
“It’s certainly unsettling. Like any new tool, if it gets into the wrong hands, it can be used for ill purposes,” said Nick Rule, an associate professor of psychology at the University of Toronto. “If you can start profiling people based on their appearance, then identifying them and doing horrible things to them, that’s really bad.”
In the Stanford study, the authors also noted that artificial intelligence could be used to explore links between facial features and political views, psychological conditions, or personality.
“AI can tell you anything about anyone with enough data,” said Brian Brackeen, CEO of Kairos, a face recognition company. “The question is as a society, do we want to know?”
Source: The Guardian