But the darker the skin, the more errors arise — up to nearly 35 per cent for images of darker-skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and genders.
These disparate results, calculated by Joy Buolamwini, a researcher at the MIT Media Lab, show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition.
In modern artificial intelligence, data rules. AI software is only as smart as the data used to train it. If a training set contains many more white men than black women, the resulting system will be worse at identifying the black women.
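A toy simulation makes the mechanism concrete. In the sketch below (hypothetical data and groups, not the study's method), a classifier trained on a sample skewed toward one group learns that group's patterns and fails on the underrepresented one:

```python
# Toy illustration of training-data imbalance (hypothetical data):
# a model fit mostly to group A performs far worse on group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, flip):
    # The two groups share features but follow different label rules,
    # a crude stand-in for faces the model has rarely seen in training.
    X = rng.normal(size=(n, 2))
    y = (X[:, 1] > 0).astype(int) if flip else (X[:, 0] > 0).astype(int)
    return X, y

# Skewed training set: 950 examples from group A, only 50 from group B.
Xa, ya = make_group(950, flip=False)
Xb, yb = make_group(50, flip=True)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets expose the gap in error rates.
for name, flip in [("A (well represented)", False), ("B (underrepresented)", True)]:
    Xt, yt = make_group(1000, flip)
    print(f"group {name}: error rate {1 - model.score(Xt, yt):.1%}")
```

Run as written, group A's error rate stays in the low single digits while group B's hovers near 50 per cent, even though the model saw examples of both.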
One widely used facial-recognition data set was estimated to be more than 75 per cent male and more than 80 per cent white, according to another research study.
The new study also raises broader questions of fairness and accountability in artificial intelligence at a time when investment in and adoption of the technology are racing ahead.
Today, facial recognition software is being deployed by companies in various ways, including to help target product pitches based on social media profile pictures. But companies are also experimenting with face identification and other AI technology as an ingredient in automated decisions with higher stakes like hiring and lending.
Researchers at the Georgetown Law School estimated that 117 million American adults are in face recognition networks used by law enforcement — and that African Americans were most likely to be singled out, because they were disproportionately represented in mug-shot databases.
Facial recognition technology is lightly regulated so far.
“This is the right time to be addressing how these AI systems work and where they fail — to make them socially accountable,” said Suresh Venkatasubramanian, a professor of computer science at the University of Utah.
Until now, there was only anecdotal evidence of computer vision miscues, some of which suggested discrimination. In 2015, for example, Google had to apologise after its image-recognition photo app initially labelled African Americans as “gorillas.”
Sorelle Friedler, a computer scientist at Haverford College and a reviewing editor on Buolamwini’s research paper, said experts had long suspected that facial recognition software performed differently on different populations.
“But this is the first work I’m aware of that shows that empirically,” Friedler said.
Buolamwini, a young African-American computer scientist, experienced the bias of facial recognition firsthand. When she was an undergraduate at the Georgia Institute of Technology, programs would work well on her white friends, she said, but not recognise her face at all. She figured it was a flaw that would surely be fixed before long.
But a few years later, after joining the MIT Media Lab, she ran into the missing-face problem again. Only when she put on a white mask did the software recognise hers as a face.
By then, face recognition software was increasingly moving out of the lab and into the mainstream. “OK, this is serious,” she recalled deciding then. “Time to do something.”
So she turned her attention to fighting the bias built into digital technology. Now 28 and a doctoral student, after studying as a Rhodes scholar and a Fulbright fellow, she is an advocate in the new field of “algorithmic accountability,” which seeks to make automated decisions more transparent, explainable and fair.
Her short TED Talk on coded bias has been viewed more than 940,000 times, and she founded the Algorithmic Justice League, a project to raise awareness of the issue.
In her newly published paper, which will be presented at a conference this month, Buolamwini studied the performance of three leading face recognition systems, from Microsoft, IBM and Megvii of China, by testing how well they could guess the gender of people with different skin tones. These companies were selected because they offered gender classification features in their facial analysis software and their code was publicly available for testing.
She found them all wanting.
To test the commercial systems, Buolamwini built a data set of 1,270 faces, using faces of lawmakers from countries with a high percentage of women in office. The sources included three African nations with predominantly dark-skinned populations and three Nordic countries with mainly light-skinned residents.
The African and Nordic faces were scored according to a six-point labelling system (the Fitzpatrick scale) used by dermatologists to classify skin types. The researchers judged the medical classification to be more objective and precise than race.
Then, each company’s software was tested on the curated data, crafted for gender balance and a range of skin tones. The results varied somewhat. Microsoft’s error rate for darker-skinned women was 21 per cent, while IBM’s and Megvii’s rates were nearly 35 per cent. They all had error rates below 1 per cent for lighter-skinned men.
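The arithmetic behind such an audit is simple: disaggregate the classifier's predictions by subgroup and compare error rates. Here is a minimal sketch of that bookkeeping in Python; the records are hypothetical stand-ins, not the study's actual data:

```python
# Minimal disaggregated-audit sketch (hypothetical records, not the
# study's data): count errors per subgroup and report each rate.
from collections import defaultdict

# Each record: (subgroup, true gender, predicted gender)
records = [
    ("darker female",  "F", "M"),
    ("darker female",  "F", "F"),
    ("darker male",    "M", "M"),
    ("lighter female", "F", "F"),
    ("lighter male",   "M", "M"),
    # ... one record per test image
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += (truth != pred)

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group:>15}: {rate:.0%} error ({errors[group]}/{totals[group]})")
```

A single aggregate accuracy figure would hide exactly the disparity this breakdown surfaces, which is why the study reports each subgroup separately.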
Buolamwini shared the research results with each of the companies. IBM said in a statement to her that the company had steadily improved its facial analysis software and was “deeply committed” to “unbiased” and “transparent” services. This month, the company said, it will roll out an improved service with a nearly 10-fold increase in accuracy on darker-skinned women.
Microsoft said that it had “already taken steps to improve the accuracy of our facial recognition technology” and that it was investing in research “to recognise, understand and remove bias.”
©2018 The New York Times News Service