For years, activists and academics have raised concerns that facial analysis software claiming to identify a person’s age, gender and emotional state can be biased, unreliable or invasive, and should not be sold.
Microsoft said this week that it planned to remove those features from its artificial intelligence service for detecting, analyzing and recognizing faces. They will stop being available to new users this month and will be phased out for existing users within the year.
The changes are part of a push by Microsoft for tighter controls on its artificial intelligence products. After a two-year review, a team at Microsoft developed a “Responsible AI Standard,” a 27-page document that sets out requirements for AI systems to ensure they do not have a harmful impact on society.
The requirements include ensuring that systems provide “valid solutions for the problems they are designed to solve” and “a similar quality of service for identified demographic groups, including marginalized groups.”
Before they are released, technologies that would be used to make important decisions about a person’s access to jobs, education, health care, financial services or a life opportunity are subject to a review by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer.
There were heightened concerns at Microsoft around the emotion recognition tool, which labeled someone’s expression as anger, contempt, disgust, fear, happiness, neutral, sadness or surprise.
“There’s a huge amount of cultural and geographic and individual variation in the way in which we express ourselves,” Crampton said. That led to reliability concerns, along with whether “facial expression is a reliable indicator of your internal emotional state,” she said.
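For developers, those judgments surfaced as per-face attribute scores in the Face API. The sketch below is illustrative rather than definitive: it assumes the azure-cognitiveservices-vision-face Python SDK, and the endpoint, key and image URL are placeholders.

```python
# Illustrative sketch only: the endpoint, key and image URL are placeholders.
# Assumes the azure-cognitiveservices-vision-face Python SDK.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

face_client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

# Request the attributes Microsoft is retiring: age, gender and emotion.
faces = face_client.face.detect_with_url(
    url="https://example.com/photo.jpg",
    return_face_attributes=["age", "gender", "emotion"],
)

for face in faces:
    # The emotion object carries a confidence score for each of the eight
    # labels named above, from anger through surprise.
    scores = face.face_attributes.emotion.as_dict()
    top_label = max(scores, key=scores.get)
    print(face.face_attributes.age, face.face_attributes.gender, top_label)
```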
The age and gender analysis tools being eliminated, along with other tools that detect facial attributes, could be useful for interpreting visual images for blind or low-vision people, for example, but the company decided it was problematic to make such profiling tools generally available to the public, Crampton said.
In particular, she added, the system’s so-called gender classifier was binary, “and that’s not consistent with our values.”
Microsoft will also put new controls on its face recognition feature, which can be used to perform identity checks or search for a particular person. Uber uses the software in its app to verify that a driver’s face matches the ID on file for that driver’s account. Software developers who want to use Microsoft’s facial recognition tool will need to apply for access and explain how they plan to deploy it.
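An identity check of the kind Uber performs is a one-to-one verification between two detected faces. The following is a minimal sketch, again assuming the azure-cognitiveservices-vision-face Python SDK with placeholder credentials and images; under the new policy, such calls would work only for developers whose access applications had been approved.

```python
# Illustrative sketch only: credentials and image URLs are placeholders.
# Assumes the azure-cognitiveservices-vision-face Python SDK.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

face_client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

# Detect one face in each image to obtain short-lived face IDs.
id_photo = face_client.face.detect_with_url(url="https://example.com/id-photo.jpg")
selfie = face_client.face.detect_with_url(url="https://example.com/selfie.jpg")

# Ask the service whether the two detected faces belong to the same person.
result = face_client.face.verify_face_to_face(
    face_id1=id_photo[0].face_id,
    face_id2=selfie[0].face_id,
)
print(result.is_identical, result.confidence)
```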
Users will also be required to apply for access and explain how they will use other potentially abusive AI systems, such as Custom Neural Voice. The service can generate a human voice print based on a sample of someone’s speech, so that authors, for example, can create synthetic versions of their voices to read their audiobooks in languages they don’t speak.
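Once a custom voice model has been trained and deployed, invoking it looks roughly like ordinary speech synthesis. This sketch assumes the azure-cognitiveservices-speech Python SDK; the key, region, deployment ID and voice name are all placeholders.

```python
# Illustrative sketch only: the key, region, deployment ID and voice name
# are placeholders. Assumes the azure-cognitiveservices-speech Python SDK.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="<your-key>", region="<your-region>"
)
# A deployed custom voice is addressed by its endpoint (deployment) ID
# and the name it was given when the model was trained.
speech_config.endpoint_id = "<your-deployment-id>"
speech_config.speech_synthesis_voice_name = "<YourCustomVoiceName>"

# By default, the synthesizer plays the audio through the system speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async("A sentence read in the custom voice.").get()
```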