
Microsoft Reveals New AI That Can Detect Human Emotions

If there’s anything that separates us humans from machines, it’s our feelings and emotions. It is often believed that machines can’t understand them, since feelings can’t be converted to binary code. Until now, that is. Microsoft seems to have found a way to prove most of us wrong with its Project Oxford. Thanks to Microsoft’s new Artificial Intelligence (AI) API, machines can sense how people are feeling by analyzing their photos.

How it Works

The team behind the project displayed the AI at Microsoft’s Future Decoded event on Wednesday. Similar to what Google did with TensorFlow, Microsoft has released the new AI API to the public. Developers can use the beta version of the API in their applications. The API will allow programs to recognize eight of the basic human emotions. The eight emotional states are anger, contempt, disgust, fear, happiness, neutral, sadness and surprise.

Microsoft’s Chief Executive, Satya Nadella, said at the event that he wants new kinds of inputs and outputs to emerge and pave the way for new types of personal computing. The emotion-sensing tool is an example of what these new inputs could look like. With the ability to understand a user’s emotions, devices can learn new ways to interact with their users.


Here’s how the emotion detection works

The tool is still in its beta stage, so there’s clearly room for improvement. Quite a few human emotions haven’t been added to the tool yet, and then there’s the problem of faked emotions. However, the basics are correct and there’s great potential here.

The Potential for Machine Learning AI

Chris Bishop, head of Microsoft Research Cambridge, showcased how the new AI is capable of detecting multiple faces and different emotions at the same time. The demonstration also showed how each emotion is registered on a scale of zero to one, with varying values assigned across the different emotion categories.
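As a rough sketch of what that looks like in practice (the field names here are assumptions based on the description above, not guaranteed to match the actual API schema), a response containing per-face emotion scores on a zero-to-one scale could be parsed like this:

```python
import json

# Hypothetical API response: one entry per detected face, each with a
# score between 0 and 1 for the eight emotion categories.
sample_response = """
[
  {
    "faceRectangle": {"left": 68, "top": 97, "width": 64, "height": 64},
    "scores": {
      "anger": 0.01, "contempt": 0.02, "disgust": 0.01, "fear": 0.00,
      "happiness": 0.90, "neutral": 0.04, "sadness": 0.01, "surprise": 0.01
    }
  }
]
"""

def dominant_emotions(response_json):
    """Return the highest-scoring emotion category for each detected face."""
    faces = json.loads(response_json)
    return [max(face["scores"], key=face["scores"].get) for face in faces]

print(dominant_emotions(sample_response))  # ['happiness']
```

Because every face gets a score for all eight categories, an app can act on the single dominant emotion, as above, or on the full distribution, for example treating a face as ambiguous when no category clearly dominates.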

Microsoft gave several examples of what this new project could bring to the tech world. When implemented in a real-world context, marketers can use it to gauge customer reactions. Messaging apps can also utilize the technology to automatically send emotions as images by using smartphone cameras.

Project Oxford is not limited to reading human emotions. Several other tools capable of understanding words, sounds or images will be released to developers in the coming months. These include Spell Check, Video Analysis and Editing, Speaker Recognition, and Custom Recognition Intelligent Services. All of the Face APIs, such as age detection, emotion detection, and facial hair detection, will also be updated and released under the Project Oxford set of tools.

Microsoft hopes that developers unable to develop their own machine learning or artificial intelligence systems will use these tools to bring new features into their apps.


Here are some of the other tools the company announced today:

Spell check: Microsoft’s spell check tool, which became available on Wednesday, is based on machine learning, so it’s always getting smarter and learning new slang, brand names and common grammatical errors.

Video: Microsoft is making much of the same technology used in its Hyperlapse video app available to the public by the end of the year. Tools that do things like detect faces, track motion and stabilize images will be part of the API.

Voice recognition: This tool, which will be out by the end of the year, learns a user’s voice and then recognizes whether or not that’s who’s speaking. Microsoft says it doesn’t recommend app makers use this to replace traditional passwords, but it could be used as an added layer of security.

Custom Recognition Intelligent Services: This tool, known as CRIS, is only available to hand-selected beta testers for now, but will be made more broadly available sometime in the future. It’s designed to enable speech recognition in challenging environments, such as a noisy shopping center or with a child who doesn’t enunciate.

