Microsoft announced today that it will limit customer access to its facial recognition technologies in the name of responsible AI. Furthermore, citing privacy concerns, it will retire facial recognition capabilities that infer emotional states and attributes such as gender, age, smile, facial hair, hair, and makeup.
“We have updated our approach to facial recognition including adding a new Limited Access policy, removing AI classifiers of sensitive attributes, and bolstering our investments in fairness and transparency,” Microsoft’s Sarah Bird writes in the announcement post. “We continue to provide consistent and clear guidance on the responsible deployment of facial recognition technology and advocate for laws to regulate it, but there is still more we must do.”
Among the changes Microsoft announced today are:
Limiting access to the Azure Face API, Computer Vision, and Video Indexer. Going forward, customers will need to apply for access to these services, and existing customers who are using these capabilities have one year to apply and receive approval for continued access. “Limited access adds an additional layer of scrutiny to the use and deployment of facial recognition to ensure use of these services aligns with Microsoft’s Responsible AI Standard and contributes to high-value end-user and societal benefit,” Bird says. Some existing facial capabilities, like detecting blur, exposure, glasses, head pose, landmarks, noise, occlusion, and facial bounding box, will remain generally available and do not require an application.
Retiring facial recognition capabilities. Facial analysis capabilities that try to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup will be retired. “Emotion classification specifically … raised important questions about privacy,” Bird notes. And so detection of these attributes will no longer be available to new customers beginning June 21, 2022, and existing customers have until June 30, 2023, to discontinue use of these attributes before they are retired.
Updating Microsoft’s Responsible AI Standard. Microsoft today also announced what it calls “meaningful updates to its Responsible AI Standard,” its framework for building AI systems that “respect enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.” You can learn more here.
New tools for customers. “Microsoft is providing customers with new tools and resources to help evaluate how well the models are performing against their own data and to use the technology to understand limitations in their own deployments,” Bird explains. “Azure Cognitive Services customers can now take advantage of the open-source Fairlearn package and Microsoft’s Fairness Dashboard to measure the fairness of Microsoft’s facial verification algorithms on their own data—allowing them to identify and address potential fairness issues that could affect different demographic groups before they deploy their technology.”
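To make the idea of measuring fairness on your own data concrete, here is a minimal, hypothetical sketch of the kind of per-group evaluation described above. It does not use Microsoft’s tooling or the Fairlearn package itself; it simply computes, in plain Python, how often a face verification model rejects genuine matches for each demographic group, using made-up record data:

```python
# Illustrative sketch (not Microsoft's tooling): measuring how a face
# verification model's error rates differ across demographic groups.
# `records` is hypothetical evaluation data as tuples of
# (group, ground_truth_match, predicted_match).

from collections import defaultdict

def false_non_match_rate_by_group(records):
    """For each group, the fraction of genuine pairs the model rejected."""
    genuine = defaultdict(int)   # genuine (true-match) pairs seen per group
    rejected = defaultdict(int)  # genuine pairs incorrectly rejected
    for group, truth, pred in records:
        if truth:
            genuine[group] += 1
            if not pred:
                rejected[group] += 1
    return {g: rejected[g] / genuine[g] for g in genuine}

records = [
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, True), ("group_b", True, False),
    ("group_b", True, False), ("group_b", False, False),
]
rates = false_non_match_rate_by_group(records)
# group_a rejects 1 of 3 genuine pairs; group_b rejects 2 of 3
gap = max(rates.values()) - min(rates.values())  # disparity worth investigating
```

A large gap between groups is the kind of fairness issue the Fairlearn package and Fairness Dashboard are meant to surface before a system is deployed.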
Updating transparency documentation. Microsoft has also updated its transparency documentation with guidance to help customers improve the accuracy and fairness of their systems by incorporating human review to detect and resolve cases of misidentification or other failures, by providing support to people who believe their results were incorrect, and by identifying and addressing fluctuations in accuracy due to variation in operational conditions.
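The human-review pattern in that guidance can be sketched in a few lines. This is a hypothetical illustration, not Microsoft’s implementation; the function name and thresholds are invented for the example. The idea is that ambiguous verification results are routed to a person rather than acted on automatically:

```python
# Hypothetical sketch of a human-in-the-loop review gate: verification
# results with ambiguous confidence scores are routed to a reviewer
# instead of being auto-accepted or auto-rejected. Thresholds are
# illustrative, not from any real system.

def route_verification(confidence, accept_threshold=0.90, reject_threshold=0.40):
    """Decide whether a face-verification result can be acted on automatically."""
    if confidence >= accept_threshold:
        return "auto_accept"
    if confidence < reject_threshold:
        return "auto_reject"
    return "human_review"  # ambiguous cases get a person in the loop

decision = route_verification(0.65)
# → "human_review": a reviewer can catch a misidentification here
```

Routing the middle band to review is also where support for people who contest their results naturally plugs in.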
Recognition Quality API. Finally, Microsoft is releasing a new Recognition Quality API that flags problems with lighting, blur, occlusions, or head angle in images that are submitted for facial verification. “We realized some errors that were originally attributed to fairness issues were caused by poor image quality,” Bird writes. “If the image someone submits is too dark or blurry, the model may not be able to match it correctly. We acknowledge that this poor image quality can be unfairly concentrated among demographic groups.” You can learn more here.
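The kind of pre-submission check such an API enables can be sketched as follows. This is not the actual Recognition Quality API; the function, inputs, and thresholds are all hypothetical, assuming a mean brightness (0–255) and a normalized sharpness score have already been computed from the image:

```python
# Illustrative pre-check (not the real Recognition Quality API): flag
# image-quality problems before submitting a photo for verification.
# `mean_brightness` (0-255) and `sharpness` (0-1) are assumed to be
# precomputed from the image; the thresholds are invented for this sketch.

def quality_flags(mean_brightness, sharpness, face_occluded=False):
    flags = []
    if mean_brightness < 40:
        flags.append("too_dark")
    elif mean_brightness > 220:
        flags.append("overexposed")
    if sharpness < 0.2:
        flags.append("too_blurry")
    if face_occluded:
        flags.append("occlusion")
    return flags

flags = quality_flags(mean_brightness=25, sharpness=0.1)
# → ["too_dark", "too_blurry"]: prompt the user to retake the photo
```

Catching a too-dark or blurry photo up front avoids the failed matches that, as Bird notes, can otherwise be mistaken for fairness problems.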