【Question Title】: How to align Kinect's depth image with color image
【Posted】: 2011-10-14 06:06:03
【Question】:

The images produced by the Kinect's color and depth sensors are slightly out of alignment. How can I transform them so that they line up?

【Question Comments】:

    Tags: c# kinect


    【Solution 1】:

    The key is to call 'Runtime.NuiCamera.GetColorPixelCoordinatesFromDepthPixel'.

    Below is an extension method for the Runtime class. It returns a WriteableBitmap object that updates itself automatically as new frames arrive, so using it is very simple:

        kinect = new Runtime();
        kinect.Initialize(RuntimeOptions.UseColor | RuntimeOptions.UseSkeletalTracking | RuntimeOptions.UseDepthAndPlayerIndex);
        kinect.DepthStream.Open(ImageStreamType.Depth, 2, ImageResolution.Resolution320x240, ImageType.DepthAndPlayerIndex);
        kinect.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
        myImageControl.Source = kinect.CreateLivePlayerRenderer(); 
    

    Here is the code itself:

    public static class RuntimeExtensions
    {
       public static WriteableBitmap CreateLivePlayerRenderer(this Runtime runtime)
       {
          if (runtime.DepthStream.Width == 0)
             throw new InvalidOperationException("Either open the depth stream before calling this method or use the overload which takes in the resolution that the depth stream will later be opened with.");
          return runtime.CreateLivePlayerRenderer(runtime.DepthStream.Width, runtime.DepthStream.Height);
       }
       public static WriteableBitmap CreateLivePlayerRenderer(this Runtime runtime, int depthWidth, int depthHeight)
       {
          PlanarImage depthImage = new PlanarImage();
          WriteableBitmap target = new WriteableBitmap(depthWidth, depthHeight, 96, 96, PixelFormats.Bgra32, null);
          var depthRect = new System.Windows.Int32Rect(0, 0, depthWidth, depthHeight);
    
          runtime.DepthFrameReady += (s, e) =>
                {
                    depthImage = e.ImageFrame.Image;
                    Debug.Assert(depthImage.Height == depthHeight && depthImage.Width == depthWidth);
                };
    
          runtime.VideoFrameReady += (s, e) =>
                {
                    // don't do anything if we don't yet have a depth image
                    if (depthImage.Bits == null) return;
    
                    byte[] color = e.ImageFrame.Image.Bits;
    
                    byte[] output = new byte[depthWidth * depthHeight * 4];
    
                    // loop over each pixel in the depth image
                    int outputIndex = 0;
                    for (int depthY = 0, depthIndex = 0; depthY < depthHeight; depthY++)
                    {
                        for (int depthX = 0; depthX < depthWidth; depthX++, depthIndex += 2)
                        {
                            // combine the 2 bytes of depth data representing this pixel
                            short depthValue = (short)(depthImage.Bits[depthIndex] | (depthImage.Bits[depthIndex + 1] << 8));
    
                            // extract the id of the tracked player from the lower three bits of the depth data for this pixel
                            int player = depthImage.Bits[depthIndex] & 7;
    
                            // find a pixel in the color image which matches this coordinate from the depth image
                            int colorX, colorY;
                            runtime.NuiCamera.GetColorPixelCoordinatesFromDepthPixel(
                                e.ImageFrame.Resolution,
                                e.ImageFrame.ViewArea,
                                depthX, depthY, // depth coordinate
                                depthValue,  // depth value
                                out colorX, out colorY);  // color coordinate
    
                            // ensure that the calculated color location is within the bounds of the image
                            colorX = Math.Max(0, Math.Min(colorX, e.ImageFrame.Image.Width - 1));
                            colorY = Math.Max(0, Math.Min(colorY, e.ImageFrame.Image.Height - 1));
    
                            output[outputIndex++] = color[(4 * (colorX + (colorY * e.ImageFrame.Image.Width))) + 0];
                            output[outputIndex++] = color[(4 * (colorX + (colorY * e.ImageFrame.Image.Width))) + 1];
                            output[outputIndex++] = color[(4 * (colorX + (colorY * e.ImageFrame.Image.Width))) + 2];
                            output[outputIndex++] = player > 0 ? (byte)255 : (byte)0;
                        }
                    }
                    target.WritePixels(depthRect, output, depthWidth * PixelFormats.Bgra32.BitsPerPixel / 8, 0);
                };
          return target;
       }
    }
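    A note on the bit twiddling above: with the DepthAndPlayerIndex format opened here, each depth pixel is two little-endian bytes, with the player index in the lower three bits and the depth in millimetres in the remaining thirteen. A minimal, self-contained sketch of that layout (the method name is illustrative, not an SDK call):

```csharp
using System;

// Sketch of the DepthAndPlayerIndex pixel layout assumed by the loop above:
// bits 0-2 hold the tracked-player index (0-7), bits 3-15 hold the depth
// in millimetres. The two bytes arrive in little-endian order.
static (int PlayerIndex, int DepthMm) UnpackDepthPixel(byte low, byte high)
{
    int raw = low | (high << 8); // combine the two little-endian bytes
    int player = raw & 0x7;      // lower three bits: player index
    int depth = raw >> 3;        // upper thirteen bits: depth in mm
    return (player, depth);
}

// raw value 0x2692 encodes player 2 at a depth of 1234 mm
Console.WriteLine(UnpackDepthPixel(146, 38));
```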
    

    【Discussion】:

    • Sadly, that link now throws me a yellow screen of death. But I'm looking into the method you mentioned.
    • @Mr-Bell - I've updated this post with the actual code instead of the link.
    • This looks like it works. It seems like calling GetColorPixelCoordinatesFromDepthPixel is killing my frame rate, though.
    • Could you call GetColorPixelCoordinatesFromDepthPixel for just a small number of calibration corners and then interpolate or extrapolate in your own code? Are the misalignments mostly affine?
    • @rwong, I don't know - that's a good question. If you post it as a separate question on this site, I'll upvote it.
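    The interpolation idea raised in the discussion could be sketched like this: call the exact mapping only for the four corners of the depth image, then bilinearly interpolate colour coordinates for every other depth pixel. Note this is an approximation I am sketching, not SDK behaviour: the true mapping depends on per-pixel depth, so it only holds if the misalignment is close to affine across the frame.

```csharp
using System;

// Bilinear interpolation of colour coordinates from four known corner
// correspondences (e.g. obtained via GetColorPixelCoordinatesFromDepthPixel).
// (u, v) are normalised depth-image coordinates in [0, 1].
static (double X, double Y) Bilerp(
    (double X, double Y) topLeft, (double X, double Y) topRight,
    (double X, double Y) bottomLeft, (double X, double Y) bottomRight,
    double u, double v)
{
    // interpolate along the top and bottom edges, then between them
    double topX = topLeft.X + u * (topRight.X - topLeft.X);
    double topY = topLeft.Y + u * (topRight.Y - topLeft.Y);
    double botX = bottomLeft.X + u * (bottomRight.X - bottomLeft.X);
    double botY = bottomLeft.Y + u * (bottomRight.Y - bottomLeft.Y);
    return (topX + v * (botX - topX), topY + v * (botY - topY));
}

// centre of a 640x480 identity mapping lands at (320, 240)
Console.WriteLine(Bilerp((0, 0), (640, 0), (0, 480), (640, 480), 0.5, 0.5));
```

    Replacing the per-pixel API call with this lookup trades accuracy for the frame rate lost to GetColorPixelCoordinatesFromDepthPixel, as discussed above.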
    【Solution 2】:

    One way to do this is to assume that the color and depth images have similar variations, and cross-correlate the two images (or smaller versions of them).
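    A brute-force sketch of that idea, searching integer shifts for the one that maximises the correlation between two small grayscale images (all names here are illustrative, not part of any Kinect API):

```csharp
using System;

// Exhaustively search shifts (dx, dy) in [-maxShift, maxShift] for the one
// that maximises the correlation between images a and b (b shifted
// relative to a). Intended for small, downsampled grayscale images.
static (int Dx, int Dy) BestShift(float[,] a, float[,] b, int maxShift)
{
    int h = a.GetLength(0), w = a.GetLength(1);
    double best = double.NegativeInfinity;
    (int Dx, int Dy) bestShift = (0, 0);

    for (int dy = -maxShift; dy <= maxShift; dy++)
    for (int dx = -maxShift; dx <= maxShift; dx++)
    {
        double sum = 0;
        for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            int y2 = y + dy, x2 = x + dx;
            if (y2 < 0 || y2 >= h || x2 < 0 || x2 >= w) continue;
            sum += a[y, x] * b[y2, x2]; // accumulate correlation over the overlap
        }
        if (sum > best) { best = sum; bestShift = (dx, dy); }
    }
    return bestShift;
}
```

    For real images the sum should be normalised by the overlap area to avoid biasing toward zero shift, and a sub-pixel refinement step could follow; this sketch only recovers a pure integer translation, which matches the "just an offset" intuition in the discussion below.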

    【Discussion】:

    • Peter, those articles are interesting. However, I think this solution might be more empirical. I think it might just be an offset or something like that.
    • :-) OK. I was probably overthinking it. I've just been reading this sort of stuff...
    • At the factory, every Kinect device is calibrated and the offsets between the cameras are burned into the device's memory. The trick is finding the right API to make use of that data. At the moment the official Kinect SDK provides only one such API, but others are being considered for future releases.
    • @Robert: Thanks for the information! Sounds interesting. :-)