The tech industry doesn’t have a plan for dealing with bias in facial recognition


Publication Title
The Verge
Publication/Creation Date
July 26, 2018
Creators/Contributors
James Vincent (creator)
Joy Buolamwini (contributor)
Clare Garvie (contributor)
Ruchir Puri (contributor)
Elke Oberg (contributor)
Patrick Grother (contributor)
Brian Brackeen (contributor)
Massachusetts Institute Of Technology (MIT) (contributor)
MIT Media Lab (contributor)
Georgetown Law School (contributor)
Google Inc. (contributor)
IBM (contributor)
Amazon, Inc. (contributor)
Cognitec (contributor)
National Institute Of Standards And Technology (NIST) (contributor)
Kairos (contributor)
Microsoft (contributor)
Persuasive Intent
Information
Description
Facial recognition is becoming part of the fabric of everyday life. You might already use it to log in to your phone or computer, or to authenticate payments with your bank. In China, where the technology is more common, your face can be used to buy fast food or claim your allowance of toilet paper at a public restroom. And this is to say nothing of how law enforcement agencies around the world are experimenting with facial recognition as a tool of mass surveillance.

But the widespread uptake of this technology belies underlying structural problems, not least the issue of bias. By this, researchers mean that software used for facial identification, recognition, or analysis performs differently depending on the age, gender, and ethnicity of the person it's identifying.
HCI Platform
Ambient
Location on Body
Not On The Body
Source
https://www.theverge.com/2018/7/26/17616290/facial-recognition-ai-bias-benchmark-test