This article is from TechCrunch; all rights to it belong to TechCrunch.
Microsoft’s facial recognition tools just made some significant technological strides, though the timing probably couldn’t be worse.
On Tuesday, the company revealed in a blog post that its Face API, part of Azure Cognitive Services, can now identify men and women with darker skin far more accurately than previous iterations of the technology. The updates particularly improve the system’s recognition of women with darker skin tones, cutting error rates for darker-skinned men and women by as much as 20 times, and for all women by nine times.
Microsoft stated that it was able to “significantly reduce accuracy differences across the demographics” by expanding its facial recognition training data sets, collecting new data across the variables of skin tone, gender and age, and improving its gender classification system by “focusing specifically on getting better results for all skin tones.”
As the blog post puts it: “The higher error rates on females with darker skin highlights an industrywide challenge: Artificial intelligence technologies are only as good as the data used to train them. If a facial recognition system is to perform well across all people, the training dataset needs to represent a diversity of skin tones as well as factors such as hairstyle, jewelry and eyewear.”
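The disparity Microsoft describes can be made concrete with a short sketch: given a set of classifier predictions tagged by demographic subgroup, compute each subgroup’s misclassification rate and compare. Everything below — the subgroup names and the sample records — is hypothetical and invented purely for illustration; it is not Microsoft’s data or code.

```python
# Illustrative sketch only -- not Microsoft's code. It shows how one could
# measure the kind of per-subgroup error-rate disparity the blog post
# describes. All records below are hypothetical.
from collections import defaultdict

# Each record: (demographic subgroup, true label, classifier's prediction)
predictions = [
    ("lighter_male",   "male",   "male"),
    ("lighter_male",   "male",   "male"),
    ("lighter_female", "female", "female"),
    ("lighter_female", "female", "female"),
    ("darker_male",    "male",   "male"),
    ("darker_male",    "male",   "female"),
    ("darker_female",  "female", "male"),
    ("darker_female",  "female", "male"),
]

def error_rates(records):
    """Return the fraction of misclassified examples per subgroup."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        if predicted != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

rates = error_rates(predictions)
print(rates)
# A fair classifier keeps these rates both low and close together; a large
# gap between subgroups is the disparity Microsoft says it reduced.
```

On this toy data, the “darker_female” subgroup is misclassified every time while the “lighter” subgroups are never misclassified — the same shape of gap, in miniature, that audits of commercial gender classifiers reported.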
Microsoft notes that it incorporated bias training, spearheaded by Microsoft Senior Researcher Hanna Wallach, who specializes in AI fairness, accountability and transparency. Another senior researcher involved in the effort focuses on bias in training data that can result in biased systems, like the “underrepresentation of darker skinned women that may lead to AI systems with unacceptable error rates on gender classification tasks.”
While eradicating bias in tech systems is a noble cause, the potential surveillance and policing applications of facial recognition in particular give many critics pause. Microsoft is currently facing a backlash over its relationship with U.S. Immigration and Customs Enforcement (ICE), though the company has said it opposes the family separation policy being enacted by the agency.
In January, Microsoft announced its intention to move forward with contracting for ICE after securing an Authority to Operate (ATO) from the agency. The Face API within Azure Cognitive Services is part of a suite of tools offered in Azure Government contracts.
“This ATO is a critical next step in enabling ICE to deliver such services as cloud-based identity and access, serving both employees and citizens from applications hosted in the cloud,” Microsoft wrote in January.
“This can help employees make more informed decisions faster, with Azure Government enabling them to process data on edge devices or utilize deep learning capabilities to accelerate facial recognition and identification.”