Understanding Kinect Coordinate Mapping (Translation)
By robot-v1.0
Permalink: https://www.kyfws.com/games/understanding-kinect-coordinate-mapping-zh/
License note: Unless otherwise stated, all articles on this blog are released under a BY-NC-SA license. Please credit the source when reposting.
Original article: https://www.codeproject.com/Articles/769608/Understanding-Kinect-Coordinate-Mapping
Original author: Vangos Pterneas
Translated by robot-v1.0 for this site.

Foreword
Understanding Kinect Coordinate Mapping
This is another post I am publishing after getting some good feedback from my blog subscribers. It seems that a lot of people creating Kinect projects share a common problem: how to properly project data on top of the color and depth streams. As you probably know, Kinect integrates a few sensors into a single device:
- An RGB color camera – 640×480 in version 1, 1920×1080 in version 2
- A depth sensor – 320×240 in v1, 512×424 in v2
- An infrared sensor – 512×424 in v2

These sensors have different resolutions and are not perfectly aligned, so their fields of view differ. It is obvious, for example, that the RGB camera covers a wider area than the depth and infrared cameras. Moreover, elements visible from one camera may not be visible from the others. Here is how the same area is seen by the different sensors (watch the video in the original article).
An Example
Suppose we want to project the human body joints on top of the color image. Body tracking is performed using the depth sensor, so the coordinates (X, Y, Z) of the body points are correctly aligned with the depth frame only. If you try to project the same body joint coordinates on top of the color frame, you will find that the skeleton is totally out of place:
CoordinateMapper
Of course, Microsoft is aware of this, so the SDK comes with a handy utility named CoordinateMapper. CoordinateMapper's job is to identify whether a point in 3D space corresponds to a point in the 2D color or depth space, and vice versa. CoordinateMapper is a property of the KinectSensor class, so it is tied to each Kinect sensor instance.
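In practice, you obtain the mapper once from the active sensor and reuse it for every conversion. A minimal sketch, assuming the v2 SDK and a `_sensor` field (the field name is an assumption, not part of the original sample):

```csharp
// Get the default (single) Kinect sensor and its coordinate mapper.
KinectSensor _sensor = KinectSensor.GetDefault();
CoordinateMapper mapper = _sensor.CoordinateMapper;

// The sensor must be opened before frames (and therefore mappings) are available.
_sensor.Open();
```

Because the mapper belongs to the sensor instance, it automatically uses that sensor's calibration data when converting between spaces.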
Using CoordinateMapper
Let's get back to our example. Here is the C# code that accesses the coordinates of the human joints:
foreach (Joint joint in body.Joints.Values)
{
    // 3D coordinates in meters. In the v2 SDK, body.Joints is a dictionary
    // keyed by JointType, so we iterate over its Values.
    CameraSpacePoint cameraPoint = joint.Position;

    float x = cameraPoint.X;
    float y = cameraPoint.Y;
    float z = cameraPoint.Z;
}
Note: Please refer to my previous article (Kinect version 2: Overview) about finding the body joints.
The coordinates are 3D points, packed into a CameraSpacePoint struct. Each CameraSpacePoint has X, Y and Z values, measured in meters.

The dimensions of the visual elements are measured in pixels, so we somehow need to convert the real-world 3D values into 2D screen pixels. The Kinect SDK provides two additional structs for 2D points: ColorSpacePoint and DepthSpacePoint.
Using CoordinateMapper, it is super-easy to convert a CameraSpacePoint into either a ColorSpacePoint or a DepthSpacePoint:
ColorSpacePoint colorPoint = _sensor.CoordinateMapper.MapCameraPointToColorSpace(cameraPoint);
DepthSpacePoint depthPoint = _sensor.CoordinateMapper.MapCameraPointToDepthSpace(cameraPoint);
This way, a 3D point has been mapped to a 2D point, so we can project it on top of the color (1920×1080) and depth (512×424) bitmaps.
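One caveat worth knowing: when a camera point cannot be mapped (for example, when the depth data for it is missing), the mapper can return infinite coordinate values, so it is prudent to validate the result before drawing. A minimal defensive sketch, assuming the same `_sensor` and `cameraPoint` variables as above:

```csharp
ColorSpacePoint colorPoint =
    _sensor.CoordinateMapper.MapCameraPointToColorSpace(cameraPoint);

// Skip points the mapper could not resolve; their components may come back
// as infinity, which would place the visual element far off-screen.
bool isValid = !float.IsInfinity(colorPoint.X) && !float.IsInfinity(colorPoint.Y);

if (isValid)
{
    // Safe to use colorPoint.X / colorPoint.Y as pixel coordinates here.
}
```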
How About Drawing the Joints?
You can draw the joints using a Canvas element, a DrawingImage object, or whatever you prefer. This is how you can draw a joint on a Canvas:
public void DrawPoint(ColorSpacePoint point)
{
    // Create an ellipse.
    Ellipse ellipse = new Ellipse
    {
        Width = 20,
        Height = 20,
        Fill = Brushes.Red
    };

    // Position the ellipse according to the point's coordinates,
    // centering it on the mapped pixel.
    Canvas.SetLeft(ellipse, point.X - ellipse.Width / 2);
    Canvas.SetTop(ellipse, point.Y - ellipse.Height / 2);

    // Add the ellipse to the canvas.
    canvas.Children.Add(ellipse);
}
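Putting the pieces together, the per-frame flow is: iterate the joints, map each 3D point to color space, and draw it. A sketch under the same assumptions as the earlier snippets (the DrawPoint method above, a `_sensor` field, and a tracked `body`):

```csharp
foreach (Joint joint in body.Joints.Values)
{
    // Only draw joints the sensor is confident about.
    if (joint.TrackingState == TrackingState.Tracked)
    {
        CameraSpacePoint cameraPoint = joint.Position;

        // Map the 3D camera-space point to 2D color-space pixels.
        ColorSpacePoint colorPoint =
            _sensor.CoordinateMapper.MapCameraPointToColorSpace(cameraPoint);

        DrawPoint(colorPoint);
    }
}
```

Filtering on the tracking state avoids drawing inferred or untracked joints, which would otherwise jitter noticeably on screen.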
Similarly, you can draw a DepthSpacePoint on top of the depth frame. You can also draw the bones (lines) between two points. This is the result of a perfect coordinate mapping on top of the color image:
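Drawing a bone follows the same pattern as DrawPoint, using a WPF Line between two mapped joints. A sketch; the DrawBone helper and the styling values are assumptions, not part of the original sample:

```csharp
public void DrawBone(ColorSpacePoint first, ColorSpacePoint second)
{
    // A straight line segment connecting the two mapped joint positions.
    Line bone = new Line
    {
        X1 = first.X,
        Y1 = first.Y,
        X2 = second.X,
        Y2 = second.Y,
        StrokeThickness = 8,
        Stroke = Brushes.LightBlue
    };

    canvas.Children.Add(bone);
}
```

You would call this once per bone, e.g. with the mapped positions of the elbow and wrist joints, after mapping each joint's CameraSpacePoint to a ColorSpacePoint as shown earlier.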
Note: Please refer to my previous article (Kinect v2 color, depth and infrared streams) to learn how you can create the camera bitmaps. Download the source code from GitHub and enjoy yourself:
- Kinect for Windows version 1, SDK 1.8
- Kinect for Windows version 2, SDK 2.0
In this tutorial I used Kinect for Windows version 2 code; however, everything applies to the older sensor and SDK 1.8 as well. Here are the corresponding class and struct names you should be aware of. As you can see, there are some minor naming-convention changes, but the core functionality is the same.
Version 1          Version 2
SkeletonPoint      CameraSpacePoint
ColorImagePoint    ColorSpacePoint
DepthImagePoint    DepthSpacePoint
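For reference, the v1 mapping call differs slightly in shape: in SDK 1.8 the mapper takes the target color-image format as a parameter. A hedged sketch from memory of the 1.8 API (the `_sensor` and `skeletonPoint` variables are assumptions):

```csharp
// SDK 1.8: map a skeleton-space point to color-image pixels,
// specifying which color stream format the result should target.
ColorImagePoint colorPoint = _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(
    skeletonPoint, ColorImageFormat.RgbResolution640x480Fps30);
```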
License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).