Microsoft restricts access to artificial intelligence technology

Microsoft will no longer allow companies to use its facial recognition technology to assess emotion, gender, or age. The corporation is also restricting access to other technologies that use artificial intelligence, including a speech synthesizer that can create recordings resembling the voice of a real person.

Microsoft’s decisions follow a review of the ethics of its use of artificial intelligence (AI), known as the “Responsible AI Standard”. As a result, some currently available features will be modified, while others will be withdrawn from sale entirely.

Azure Face limited

Access to the Azure Face facial recognition tool, among others, will be limited. Until now, many companies, such as Uber, have used it to verify the identity of their users. Under the new rules, any company wishing to use the facial recognition feature will have to submit an application, demonstrating that it meets Microsoft’s ethical standards for the use of AI and that the service will benefit the end user and society.

At the same time, some of Azure Face’s more controversial features will be removed entirely. Among them, Microsoft will withdraw the technology that assesses emotional states and infers characteristics such as gender and age.

“We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology, and to navigate the trade-offs. In the case of emotion recognition in particular, these efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotion’, and the inability to generalize the link between facial expressions and emotional state,” said Sarah Bird, a Microsoft product manager, quoted by the UK’s Guardian.


Deepfake

Microsoft will also limit access to Custom Neural Voice, a text-to-speech technology. The synthesizer can create an artificial voice that sounds nearly identical to that of a chosen real person, raising concerns that it could be used to impersonate others and mislead audiences. “With the advancement of technology that makes synthetic speech indistinguishable from human voices, there is a risk of damaging deepfakes,” explains Microsoft’s Qinying Liao, quoted by the Guardian.


However, the company is not abandoning emotion recognition entirely. It will continue to be used internally in accessibility tools such as Seeing AI, which describes the world in words for users with vision problems.


Main photo source: VDB Photos / Shutterstock
