
Microsoft recently announced that it will cease selling facial analysis technology that purportedly identifies a person’s emotions from images and videos. The company is also limiting public access to parts of its facial recognition service, Azure Face, with some features available only to Microsoft-managed partners and customers, and only for select use cases. This might come as a surprise to some, given the tech industry’s generally unrestrictive approach to such technology.
The move to limit access to facial recognition is part of Microsoft’s drive to build and deploy AI responsibly. Through its updated policies, Microsoft stresses that principles and outcomes should be held in higher regard than tools and practices, so as to avoid the privacy issues and biases that jeopardize human welfare. Moreover, facial expressions do not always reflect a person’s true emotions and intent, and their interpretation can be skewed by racial and other biological attributes.
“Experts inside and outside the company have highlighted the lack of consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability,” Natasha Crampton, Chief Responsible AI Officer of Microsoft, said.
Accordingly, Microsoft has ceased providing open API access to technology that infers emotional states from movements and facial expressions. New customers cannot access the feature at all, while existing customers retain access only until June 30, 2023.
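For context, the retired capability was exposed through the Face API’s detect operation via an emotion face attribute. The sketch below, written against the pre-retirement v1.0 REST endpoint with a placeholder endpoint, key, and image URL, shows roughly what such a call looked like; it is illustrative only, and requests like this are now rejected under the updated policy.

```python
# A minimal sketch of the kind of emotion-inference request the Azure Face
# API supported before Microsoft retired the capability. The endpoint, key,
# and image URL below are placeholders, not real values.
import requests

ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # placeholder
API_KEY = "YOUR-API-KEY"                                        # placeholder
IMAGE_URL = "https://example.com/photo.jpg"                     # placeholder

response = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={"returnFaceAttributes": "emotion"},  # the retired attribute
    headers={
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/json",
    },
    json={"url": IMAGE_URL},
)
response.raise_for_status()

# Each detected face carried a confidence score per emotion category
# (anger, contempt, disgust, fear, happiness, neutral, sadness, surprise).
for i, face in enumerate(response.json()):
    emotions = face["faceAttributes"]["emotion"]
    dominant = max(emotions, key=emotions.get)
    print(f"Face {i}: dominant emotion = {dominant} ({emotions[dominant]:.2f})")
```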
The privacy concerns Crampton mentions and the racial biases stemming from haphazard use have previously embroiled facial recognition technology in controversy, which, in turn, has exposed the lack of digital legislation protecting citizens from its misuse.
Why Facial Recognition Technology Is Controversial
Although recent facial recognition statistics show that the technology can exceed 99% accuracy, its accuracy in recognizing emotions drops to around 85%. That is still a high mark by any measure, but within the 15% gap lies a myriad of concerns. Moreover, the 99%-plus accuracy rate holds only under ideal conditions, and racial and gender differences can play a role.
A 2019 study revealed that many facial recognition algorithms produce false positives 10 to 100 times more frequently when assessing African-American and Asian faces than Caucasian ones, with African-American women suffering the highest rate of misidentification. Despite these alarming numbers, police departments across the United States continued to leverage the technology, even using it to identify protesters at the George Floyd demonstrations, which themselves underscored police brutality and alleged racial bias in law enforcement.
Furthermore, the law has few teeth to forbid the indiscriminate use of facial recognition technology. In the US, the only law that affords people rights over facial recognition data is the California Consumer Privacy Act. Direct facial recognition regulations do exist, but only in Portland and Baltimore. Other US states have since passed laws of their own, set to take effect in 2023.
The United Kingdom isn’t doing any better: it has no specific legislation addressing the indiscriminate use of facial recognition technology. The European Union, by contrast, passed a resolution barring police from using facial recognition in public places and from creating private databases built on the technology.
As a result of the controversy, many big tech companies, including IBM, Amazon, and Microsoft, stopped selling and developing facial recognition software for police use. IBM and, most recently, Microsoft have ceased selling the technology altogether, while Google blocked 13 emotions from its own emotion-inferring tool, placed four under review, and has been selective about the facial recognition use cases it approves.