【Title】: Calculating the distance between two homography planes that share a ground plane
【Posted】: 2015-07-04 00:02:31
【Question】:

I think the easiest way to explain the problem is with an image:

I have two cubes of the same size standing on a table. One face of each is marked green (to make tracking easy). I want to compute the relative position (x, y) of the left cube with respect to the right cube (the red line in the picture), measured in cube-size units.

Is it possible? I know the problem would be simple if the two green faces shared a common plane, e.g. the top faces of the cubes, but I cannot use those for tracking. I would just compute the homography for one square and multiply the other cube's corners with it.

Should I "rotate" the homography matrix by multiplying it with a 90-degree rotation matrix to obtain a "ground" homography? I plan to do the processing on a smartphone, so gyroscope data and the camera intrinsics may be of use.

【Comments】:

  • You have to calibrate the camera to the ground plane... and for that you need 4 known points on the ground plane. If you want to measure in cube-size units, it is easiest to know the pixel coordinates of 4 cube points that lie on the ground plane. Unfortunately only 3 of them are visible in your image. Maybe you can approximate the 4th, or just use the 4 points of the top plane instead...

Tags: opencv image-processing distance homography


【Solution 1】:

It is possible. Let's assume (or declare) that the table is the z = 0 plane and that your first box sits at the plane's origin. This means the green corners of the left box have (table) coordinates (0,0,0), (1,0,0), (0,0,1) and (1,0,1) (your box has size 1). You also have the pixel coordinates of these points. If you feed these 2D–3D correspondences (together with the camera intrinsics and distortion coefficients) to cv::solvePnP, you obtain the relative pose of the camera with respect to the box (and the plane).

In the next step, you intersect the table plane with the ray that goes from the camera center through the pixel of the second green box's bottom-right corner. The intersection point will look like (x, y, 0), and [x-1, y] is the translation between the right corners of your two boxes.

【Discussion】:

    【Solution 2】:

    If you have all the information (camera intrinsics), you can proceed the way FooBar answered.

    But you can use the information that the points lie on a plane more directly, via a homography (no need to compute rays etc.):

    Compute the homography between the image plane and the ground plane. Unfortunately you need 4 point correspondences, but only 3 cube points touching the ground plane are visible in the image. Instead you can use the top plane of the cubes, where the same distances can be measured.

    First the code:

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>
    
    int main()
    {
        // calibrate plane distance for boxes
        cv::Mat input = cv::imread("../inputData/BoxPlane.jpg");
    
    
        // if we had 4 known points on the ground plane, we could use the ground plane but here we instead use the top plane
        // points on real world plane: height = 1: // so it's not measured on the ground plane but on the "top plane" of the cube
        std::vector<cv::Point2f> objectPoints;  
        objectPoints.push_back(cv::Point2f(0,0)); // top front
        objectPoints.push_back(cv::Point2f(1,0)); // top right
        objectPoints.push_back(cv::Point2f(0,1)); // top left
        objectPoints.push_back(cv::Point2f(1,1)); // top back
    
        // image points:
        std::vector<cv::Point2f> imagePoints;
        imagePoints.push_back(cv::Point2f(141,302));// top front
        imagePoints.push_back(cv::Point2f(334,232));// top right
        imagePoints.push_back(cv::Point2f(42,231)); // top left
        imagePoints.push_back(cv::Point2f(223,177));// top back
    
        cv::Point2f pointToMeasureInImage(741,200); // bottom right of second box
    
    
        // for transform we need the point(s) to be in a vector
        std::vector<cv::Point2f> sourcePoints;
        sourcePoints.push_back(pointToMeasureInImage);
        //sourcePoints.push_back(pointToMeasureInImage);
        sourcePoints.push_back(cv::Point2f(718,141));
        sourcePoints.push_back(imagePoints[0]);
    
    
        // list with points that correspond to sourcePoints. This is not needed but is used to create some output
        std::vector<int> distMeasureIndices;
        distMeasureIndices.push_back(1);
        //distMeasureIndices.push_back(0);
        distMeasureIndices.push_back(3);
        distMeasureIndices.push_back(2);
    
    
        // draw points for visualization
        for(unsigned int i=0; i<imagePoints.size(); ++i)
        {
            cv::circle(input, imagePoints[i], 5, cv::Scalar(0,255,255));
        }
        //cv::circle(input, pointToMeasureInImage, 5, cv::Scalar(0,255,255));
        //cv::line(input, imagePoints[1], pointToMeasureInImage, cv::Scalar(0,255,255), 2);
    
        // compute the relation between the image plane and the real world top plane of the cubes
        cv::Mat homography = cv::findHomography(imagePoints, objectPoints);
    
    
    
        std::vector<cv::Point2f> destinationPoints;
        cv::perspectiveTransform(sourcePoints, destinationPoints, homography);
    
        // compute the distance between some defined points (here I use the input points but could be something else)
        for(unsigned int i=0; i<sourcePoints.size(); ++i)
        {
            std::cout << "distance: " << cv::norm(destinationPoints[i] - objectPoints[distMeasureIndices[i]]) << std::endl; 
    
            cv::circle(input, sourcePoints[i], 5, cv::Scalar(0,255,255));
            // draw the line which was measured
            cv::line(input, imagePoints[distMeasureIndices[i]], sourcePoints[i], cv::Scalar(0,255,255), 2);
        }
    
    
        // just for fun, measure distances on the 2nd box:
        float distOn2ndBox = cv::norm(destinationPoints[0]-destinationPoints[1]);
        std::cout << "distance on 2nd box: " << distOn2ndBox << " which should be near 1.0" << std::endl;
        cv::line(input, sourcePoints[0], sourcePoints[1], cv::Scalar(255,0,255), 2);
    
    
        cv::imshow("input", input);
        cv::waitKey(0);
        return 0;
    }
    

    Here is the output, which I'll explain:

    distance: 2.04674
    distance: 2.82184
    distance: 1
    distance on 2nd box: 0.882265 which should be near 1.0
    

    These distances are:

    1. the yellow bottom one from one box to the other
    2. the yellow top one
    3. the yellow one on the first box
    4. the pink one
    

    So the length of the red line (which is what you asked for) should be close to 2 x the cube's side length. But as you can see, there is some error.

    The better/more accurate your pixel positions are before the homography computation, the more accurate your results will be.

    You need a pinhole camera model, so undistort your camera images first (in a real-world application).

    Keep in mind that you can compute distances on the ground plane too, if you have 4 visible points there (not all on a single line)!

    【Discussion】:
