These scripts and sample scene demonstrate how to use the ZED SDK's Body Tracking module in Unity. Note that Body Tracking requires a ZED 2; it is not supported on the original ZED or the ZED Mini.
The ZED SDK Body Tracking module uses a highly optimized AI model to detect and track human body skeletons in real time. Using depth, it goes a step further than similar algorithms and calculates the object's 3D position in the world, not just its position within the 2D image.
When Object Detection is running, the ZED will regularly output a Frame object, received in Unity as an ObjectsFrameSDK object. It contains some metadata and a list of individual object detections, stored in an array of ObjectDataSDK.
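As a rough sketch of how that raw frame could be walked (the member names used below, such as numObject, objectData and id, are assumptions and should be checked against ZEDCommon.cs in your plugin version):

```csharp
using UnityEngine;

// Illustrative sketch only: the ObjectsFrameSDK/ObjectDataSDK member names used
// here (numObject, objectData, id) are assumptions, not confirmed API.
public class RawDetectionLogger : MonoBehaviour
{
    public void LogFrame(ObjectsFrameSDK frame)
    {
        for (int i = 0; i < frame.numObject; i++)
        {
            ObjectDataSDK detection = frame.objectData[i];
            Debug.Log("Raw detection with id " + detection.id);
        }
    }
}
```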
Each detection holds the following info:
This sample uses the joint orientations sent by the ZED SDK and applies them to the corresponding bones of a 3D model, so the model can be animated in real time.
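A minimal sketch of that idea, assuming you already have the per-joint rotations as a Quaternion array (the jointRotations parameter and the index-to-bone mapping below are illustrative, not the sample's actual code):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: jointRotations and the index->bone mapping are
// assumptions; the actual sample resolves joints through its own avatar logic.
public class SimpleSkeletonDriver : MonoBehaviour
{
    // Maps a ZED SDK joint index to the corresponding bone of the rigged model.
    public Dictionary<int, Transform> bonesByJointIndex = new Dictionary<int, Transform>();

    public void ApplyJointRotations(Quaternion[] jointRotations)
    {
        for (int i = 0; i < jointRotations.Length; i++)
        {
            Transform bone;
            if (bonesByJointIndex.TryGetValue(i, out bone))
            {
                // Drive the bone directly with the orientation reported for this joint.
                bone.rotation = jointRotations[i];
            }
        }
    }
}
```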
Hardware-wise, you'll want your ZED 2 pointed so that a significant part of the floor is visible. This is because the ZED SDK detects the floor when it starts and uses it to better understand the position and movement of objects in 3D.
To make your own scene without doing any scripting, do the following:
You can also use your own 3D model in the Body Tracking sample scene.
In ZEDManager's Inspector, there is a section dedicated to Object Detection/Skeleton tracking. In it are two categories of parameters: Init and Runtime. Init parameters must be set before Object Detection is started. Runtime parameters can be adjusted whenever you want, and will be applied instantly.
Note the "Start Object Detection" button at the end. Object Detection is demanding on performance, so ZEDManager does not offer an option to start it automatically. To start it, you must either press that button yourself or have a script start it. The ZEDSkeletonTrackingViewer script will do this for you, provided the corresponding option is checked in its Inspector.
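For example, a small helper script can start it from code, assuming ZEDManager exposes a StartObjectDetection() method (verify the exact call against the ZEDManager source in your plugin version):

```csharp
using UnityEngine;

// Sketch only: assumes ZEDManager exposes a StartObjectDetection() method;
// check the ZEDManager source in your plugin version for the real call.
public class StartBodyTrackingOnPlay : MonoBehaviour
{
    public ZEDManager zedManager;

    void Start()
    {
        if (zedManager != null)
        {
            // Equivalent to pressing the "Start Object Detection" button in the Inspector.
            zedManager.StartObjectDetection();
        }
    }
}
```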
If you're writing your own scripts to make use of Object Detection data, there are several ways of getting to it.
First, it's worth noting that the data directly from the SDK was not designed for use within Unity. This data is stored in the ObjectsFrameSDK and ObjectDataSDK structs. Many values in these structs require additional transforms to be useful in Unity. For example, many 3D coordinates are not in world space, and 2D coordinates have their Y values inverted from Unity convention. Being structs, they also lack helper functions. However, as these structs are very close to how they are presented in the C++ ZED SDK, they can be useful for understanding how the object detection works on the inside.
It is almost always better to use the DetectionFrame and DetectedObject classes. These are abstracted versions of ObjectsFrameSDK and ObjectDataSDK that provide access to the data in forms much more aligned with Unity standard practice, along with additional helper functions that give you options for presenting and transforming the data. Each one also holds a reference to the original ObjectsFrameSDK or ObjectDataSDK object it was created from, so you can always get the "original" data from these classes if you need it.
You get the Object Detection data from ZEDManager and can access it once Object Detection has been started. Your options are:
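Whichever route you pick, a minimal consumer might look like the following sketch; the OnObjectDetection event, the detectedObjects list, the id field and the Get3DWorldPosition() helper are assumptions about the plugin's API surface, so check them against ZEDManager.cs and the detection classes in your plugin version:

```csharp
using UnityEngine;

// Sketch only: OnObjectDetection, detectedObjects, id and Get3DWorldPosition()
// are assumed names; verify them against the plugin source before use.
public class BodyTrackingListener : MonoBehaviour
{
    public ZEDManager zedManager;

    void OnEnable()
    {
        if (zedManager != null) zedManager.OnObjectDetection += OnDetectionFrame;
    }

    void OnDisable()
    {
        if (zedManager != null) zedManager.OnObjectDetection -= OnDetectionFrame;
    }

    private void OnDetectionFrame(DetectionFrame frame)
    {
        foreach (DetectedObject obj in frame.detectedObjects)
        {
            // World-space position, already converted by the abstracted class.
            Debug.Log("Person " + obj.id + " at " + obj.Get3DWorldPosition());
        }
    }
}
```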