Kinect is back. Yes.
After Microsoft ended production of the original Kinect hardware for Xbox One last year and discontinued the Kinect adapter for Xbox One earlier this year, few expected Kinect to make a comeback. Microsoft carried much of Kinect's technology into HoloLens, and now it's bringing some of Kinect's industry-defining tech to hardware developers too.
At Build 2018 today, Microsoft CEO Satya Nadella unveiled Project Kinect for Azure. Microsoft’s latest stab at keeping Kinect alive involves a package of sensors with onboard AI. The system is powered by Microsoft’s next-gen depth camera and its “industry-defining” Time of Flight sensor in a small, power-efficient form factor.
The product is powered by Azure AI, and hardware developers can use it to bring AI to the intelligent edge in their products, with features such as real-time hand tracking, high-fidelity spatial mapping, and more. Beyond Project Kinect for Azure, Microsoft is also opening up its other edge offerings: it's open-sourcing the Azure IoT Edge runtime, and it's bringing Azure Cognitive Services to IoT Edge, starting with Custom Vision.
Elsewhere, Microsoft is bringing its speech recognition tech to developers with the new Speech Devices SDK. This software development kit lets developers build Microsoft's speech recognition into their products, with improved accuracy thanks to noise cancellation and far-field voice technology.
This year's Build 2018 introduces plenty of new Azure and AI products aimed at edge devices and the developers who build for edge computing. What I'm most interested to see, however, is how developers put Project Kinect for Azure to use in their products.