With the Kinect discontinued, developers have nowhere to go for the needs mentioned above. Is Intel looking at supporting these features in the upcoming SDK? I know OpenCV supports them, but almost all of its algorithms use 2D datasets for tracking purposes. What use would a depth camera be with such solutions?
For the foreseeable future, such features will be provided by third-party systems such as OpenCV and ROS rather than developed by Intel itself. SDK 2.0 is designed as an open-source ecosystem where developers can share their own solutions and flexibly integrate a range of different platforms into their projects.
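On the question of what depth adds to a 2D tracker: one common pattern is to take the pixel coordinate a 2D tracker reports and look up the depth value at that pixel, then deproject it into a 3D camera-space point with the standard pinhole model. A minimal sketch of the idea, using hypothetical intrinsics and a synthetic depth frame in place of a live camera stream (not tied to any specific SDK API):

```python
import numpy as np

def deproject_pixel(u, v, depth_m, fx, fy, cx, cy):
    """Convert a 2D pixel (u, v) plus its depth in metres into a
    3D camera-space point using the pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical intrinsics and a synthetic depth frame standing in
# for a real depth-camera stream.
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
depth = np.full((480, 640), 1.5)  # every pixel 1.5 m from the camera

# Suppose a 2D tracker (e.g. an OpenCV tracker) reports the target at (400, 300).
u, v = 400, 300
point_3d = deproject_pixel(u, v, depth[v, u], fx, fy, cx, cy)
print(point_3d)  # 3D position in metres relative to the camera
```

This is why a 2D-only algorithm still benefits from a depth camera: the tracker supplies *where* in the image the target is, and the depth frame upgrades that to *where in space* it is, which is what gesture and skeletal applications ultimately need.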