The tech industry doesn’t have a plan for dealing with bias in facial recognition
Publication/Creation Date
July 26 2018
Description
Facial recognition is becoming part of the fabric of everyday life. You might already use it to log in to your phone or computer, or to authenticate payments with your bank. In China, where the technology is more common, your face can be used to buy fast food or claim your allowance of toilet paper at a public restroom. And this is to say nothing of how law enforcement agencies around the world are experimenting with facial recognition as a tool of mass surveillance.
But the widespread uptake of this technology belies underlying structural problems, not least the issue of bias. By this, researchers mean that software used for facial identification, recognition, or analysis performs differently based on the age, gender, and ethnicity of the person it’s identifying.
Keywords
Bias,
Discrimination,
Gender,
Racism,
Ethics,
Critical Thinking,
Surveillance,
Privacy,
Social Issues,
Law,
Controversy,
Diversity,
Accuracy,
Crime,
Public Space,
Government,
People Of Color,
Ethnicity
Source
https://www.theverge.com/2018/7/26/17616290/facial-recognition-ai-bias-benchmark-test
Date archived
October 15 2018
Last edited
December 12 2020