For years, activists and academics have been raising concerns that facial analysis software that claims to be able to identify a person’s age, gender and emotional state can be biased, unreliable or invasive – and should not be sold.
Acknowledging some of those criticisms, Microsoft said on Tuesday that it planned to remove those features from its artificial intelligence service for detecting, analyzing and recognizing faces. They will stop being available to new users this week, and will be phased out for existing users within the year.
The changes are part of a push by Microsoft for tighter controls of its artificial intelligence products. After a two-year review, a team at Microsoft has developed a “Responsible AI Standard,” a 27-page document that sets out the requirements for AI systems to ensure they are not going to have a harmful impact on society.
The requirements include that systems provide “valid solutions for the problems they are designed to solve” and “a similar quality of service for identified demographic groups, including marginalized groups.”
Before they are released, technologies that would be used to make important decisions about a person’s access to employment, education, health care, financial services or a life opportunity are subject to a review by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer.
There were heightened concerns at Microsoft around the emotion recognition tool, which labeled someone’s expression as anger, contempt, disgust, fear, happiness, neutral, sadness or surprise.
“There is a huge amount of cultural and geographic and individual variation in the way we express ourselves,” Ms. Crampton said. That led to reliability concerns, as well as the larger questions of “whether facial expression is a reliable indicator of your internal emotional state,” she said.
The age and gender analysis tools being eliminated – along with other tools to detect facial attributes such as hair and smile – could be useful for interpreting visual images for blind or low-vision people, for example, but the company decided it was problematic to make the profiling tools generally available to the public, Ms. Crampton said.
In particular, she added, the system’s so-called gender classifier was binary, “and that is not consistent with our values.”
Microsoft will also put new controls on its face recognition feature, which can be used to perform identity checks or search for a particular person. Uber, for example, uses the software in its app to verify that a driver’s face matches the ID on file for that driver’s account. Software developers who want to use Microsoft’s facial recognition tool will need to apply for access and explain how they plan to deploy it.
Users will also be required to apply and explain how they will use other potentially abusable AI systems, such as Custom Neural Voice. The service can generate a human voice imprint, based on a sample of someone’s speech, so that authors, for example, can create synthetic versions of their voice to read their audiobooks in languages they do not speak.
Because of the possible misuse of the tool – to create the impression that people have said things they have not said – speakers must go through a series of steps to confirm that the use of their voice is authorized, and the recordings include watermarks detectable by Microsoft.
“We’re taking concrete steps to live up to our AI principles,” said Ms. Crampton, who has worked as a lawyer at Microsoft for 11 years and joined the Ethical AI group in 2018. “It’s going to be a huge journey.”
Microsoft, like other technology companies, has had stumbles with its artificially intelligent products. In 2016, it released a chatbot on Twitter, called Tay, that was designed to learn “conversational understanding” from the users it interacted with. The bot quickly started spouting racist and offensive tweets, and Microsoft had to take it down.
In 2020, researchers discovered that speech-to-text tools developed by Microsoft, Apple, Google, IBM and Amazon worked less well for Black people. Microsoft’s system was the best of the bunch but misidentified 15 percent of words for white people, compared with 27 percent for Black people.
The company had gathered diverse speech data to train its AI system but hadn’t understood just how diverse language could be. So it hired a sociolinguistics expert from the University of Washington to explain the language varieties that Microsoft needed to know about. It went beyond demographics and regional variety into how people speak in formal and informal settings.
“Thinking about race as a determining factor of how someone speaks is actually a bit misleading,” Ms. Crampton said. “What we’ve learned in consultation with the expert is that actually a huge range of factors affect linguistic variety.”
Ms. Crampton said the journey to fix that speech-to-text disparity had helped inform guidance set out in the company’s new standards.
“This is a critical norm-setting period for AI,” she said, pointing to Europe’s proposed regulations setting rules and limits on the use of artificial intelligence. “We hope to be able to use our standard to try and contribute to the bright, necessary discussion that needs to be had about the standards that technology companies should be held to.”
A vibrant debate about the potential harms of AI has been underway for years in the technology community, fueled by mistakes and errors that have real consequences on people’s lives, such as algorithms that determine whether people receive welfare benefits. Dutch tax authorities mistakenly took child care benefits away from needy families when a flawed algorithm penalized people with dual nationality.
Automated software for recognizing and analyzing faces has been particularly controversial. Last year, Facebook shut down its decade-old system for identifying people in photos. The company’s vice president of artificial intelligence cited the “many concerns about the place of facial recognition technology in society.”
Several Black men have been wrongfully arrested after flawed facial recognition matches. And in 2020, at the same time as the Black Lives Matter protests after the police killing of George Floyd in Minneapolis, Amazon and Microsoft issued moratoriums on the use of their facial recognition products by the police in the United States, saying clearer laws on its use were needed.
Since then, Washington and Massachusetts have passed legislation requiring, among other things, judicial oversight over police use of facial recognition tools.
Ms. Crampton said Microsoft had considered making its software available to the police in states with such laws on the books but had decided, for now, not to do so. She said that could change as the legal landscape changed.
Arvind Narayanan, a Princeton computer science professor and prominent AI expert, said companies might be stepping back from technologies that analyze the face because they were “more visceral, as opposed to various other kinds of AI that might be dubious but that we don’t necessarily feel in our bones.”
Companies also may realize that, at least for the moment, some of these systems are not that commercially valuable, he said. Microsoft could not say how many users it had for the facial analysis features it is getting rid of. Mr. Narayanan predicted that companies would be less likely to abandon other invasive technologies, such as targeted advertising, which profiles people to choose the best ads to show them, because they were a “cash cow.”