Artificial intelligence may hold the future of technology, but in the wrong hands it can cause real harm. That is why Microsoft has announced changes to its Responsible AI Standard, introducing a new Limited Access policy. In blog posts published on June 21, the company said it will retire or restrict access to some of its AI services. In particular, the policy changes affect Microsoft’s Azure Face facial recognition service and Custom Neural Voice.
Among the features most affected is the controversial facial analysis technology designed to infer an individual’s emotional state and detect human attributes such as gender, age, smile, facial hair, hair, and makeup. Microsoft says the move follows concerns about privacy and the lack of scientific consensus on a definition of “emotion.”
“We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs,” Azure AI Principal Group Product Manager Sarah Bird stated. “In the case of emotion classification specifically, these efforts raised important questions about privacy, the lack of consensus on a definition of “emotions,” and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics. API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused—including subjecting people to stereotyping, discrimination, or unfair denial of services.”
As such, the company made the attribute detection capability unavailable to new customers on June 21, and it will be retired for existing customers on June 30, 2023. Nonetheless, Microsoft says it “recognizes these capabilities can be valuable when used for a set of controlled accessibility scenarios.” It is therefore making an exception to continue offering these capabilities in applications designed for people with disabilities, such as Seeing AI.
Meanwhile, Microsoft will limit access to the facial recognition services in Azure Face API, Computer Vision, and Video Indexer to customers who apply for access. Under the new Limited Access policy, customers must meet use-case and customer eligibility requirements to use these operations. Existing customers have a year (until June 30, 2023) to apply and be approved for continued access to facial recognition services. Other facial detection capabilities, such as detecting blur, exposure, glasses, head pose, landmarks, noise, occlusion, and the facial bounding box, will remain available without an application. According to Bird, the new process will help Microsoft “add an additional layer of scrutiny to the use and deployment of facial recognition to ensure use of these services aligns with Microsoft’s Responsible AI Standard and contributes to high-value end-user and societal benefit.”
Lastly, Microsoft’s Custom Neural Voice feature will also be restricted to guard against possible misuse. “Building upon what we learned from Custom Neural Voice, we will apply similar controls to our facial recognition services,” Chief Responsible AI Officer Natasha Crampton writes in a separate blog post. “After a transition period for existing customers, we are limiting access to these services to managed customers and partners, narrowing the use cases to pre-defined acceptable ones, and leveraging technical controls engineered into the services.”