【Title】: Augmented Reality iOS application tracking issue
【Posted】: 2016-01-26 12:28:14
【Description】:

I am able to detect markers on screen, recognize them, and initialize OpenGL objects. The problem I am facing is overlaying them on top of the marker's position in the camera world. My camera is calibrated as well as I could manage, using this approach: Iphone 6 camera calibration for OpenCV. I suspect there is a problem with my camera projection matrix, which I create as follows:

-(void)buildProjectionMatrix:(Matrix33)cameraMatrix
                            :(int)screen_width
                            :(int)screen_height
                            :(Matrix44&)projectionMatrix
{
    float near = 0.01;  // Near clipping distance
    float far  = 100;   // Far clipping distance

    // Camera parameters
    float f_x = cameraMatrix.data[0]; // Focal length in x axis
    float f_y = cameraMatrix.data[4]; // Focal length in y axis
    float c_x = cameraMatrix.data[2]; // Camera principal point x
    float c_y = cameraMatrix.data[5]; // Camera principal point y

    std::cout << "fx " << f_x << " fy " << f_y << " cx " << c_x << " cy " << c_y << std::endl;
    std::cout << "width " << screen_width << " height " << screen_height << std::endl;

    projectionMatrix.data[0]  = -2.0 * f_x / screen_width;
    projectionMatrix.data[1]  = 0.0;
    projectionMatrix.data[2]  = 0.0;
    projectionMatrix.data[3]  = 0.0;

    projectionMatrix.data[4]  = 0.0;
    projectionMatrix.data[5]  = 2.0 * f_y / screen_height;
    projectionMatrix.data[6]  = 0.0;
    projectionMatrix.data[7]  = 0.0;

    projectionMatrix.data[8]  = 2.0 * c_x / screen_width - 1.0;
    projectionMatrix.data[9]  = 2.0 * c_y / screen_height - 1.0;
    projectionMatrix.data[10] = -(far + near) / (far - near);
    projectionMatrix.data[11] = -1.0;

    projectionMatrix.data[12] = 0.0;
    projectionMatrix.data[13] = 0.0;
    projectionMatrix.data[14] = -2.0 * far * near / (far - near);
    projectionMatrix.data[15] = 0.0;
}

Here is the method that estimates the marker position:

void MarkerDetector::estimatePosition(std::vector<Marker>& detectedMarkers)
{
    for (size_t i = 0; i < detectedMarkers.size(); i++)
    {
        Marker& m = detectedMarkers[i];

        cv::Mat Rvec;
        cv::Mat_<float> Tvec;
        cv::Mat raux, taux;
        cv::solvePnP(m_markerCorners3d, m.points, camMatrix, distCoeff, raux, taux);
        raux.convertTo(Rvec, CV_32F);
        taux.convertTo(Tvec, CV_32F);

        cv::Mat_<float> rotMat(3, 3);
        cv::Rodrigues(Rvec, rotMat);

        // Copy to transformation matrix
        for (int col = 0; col < 3; col++)
        {
            for (int row = 0; row < 3; row++)
            {
                m.transformation.r().mat[row][col] = rotMat(row, col); // Copy rotation component
            }
            m.transformation.t().data[col] = Tvec(col); // Copy translation component
        }

        // Since solvePnP finds the camera location w.r.t. the marker pose,
        // we invert it to get the marker pose w.r.t. the camera.
        m.transformation = m.transformation.getInverted();
    }
}

The OpenGL shapes track the marker and account for its size and rotation, but the translation is wrong. If the camera is rotated 90 degrees, the OpenGL shape swings 90 degrees around the marker's center. It is almost as if I were translating before rotating, but I am not.

See this video of the problem:

https://vid.me/fLvX

【Comments】:

  • afaik, "Since solvePnP finds camera location, w.r.t to marker pose, to get marker pose w.r.t to the camera we invert it" is wrong. Rather, solvePnP finds the object position for a fixed camera pose.

Tags: ios opencv opengl-es computer-vision augmented-reality


【Solution 1】:

I guess you may be having some problems when projecting the 3-D model points. Essentially, solvePnP gives you a transformation that brings points from the model coordinate system into the camera coordinate system, and this transformation is composed of a rotation and a translation vector (the outputs of solvePnP):

cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

At this point you can project the 3-D model points onto the image plane:

std::vector<cv::Vec2d> imagePointsRP; // Reprojected image points
cv::projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, imagePointsRP);

Now you only need to draw the points of imagePointsRP over the incoming image; if the pose estimation is correct, you will see the reprojected corners on top of the marker's corners.

In any case, the matrices for the model-to-camera and camera-to-model transformations can be composed as follows:

cv::Mat rmat;
cv::Rodrigues(rvec, rmat); // rmat is 3x3

cv::Mat modelToCam = cv::Mat::eye(4, 4, CV_64FC1);
modelToCam(cv::Range(0, 3), cv::Range(0, 3)) = rmat * 1.0;
modelToCam(cv::Range(0, 3), cv::Range(3, 4)) = tvec * 1.0;

cv::Mat camToModel = cv::Mat::eye(4, 4, CV_64FC1);
cv::Mat rmatinv = rmat.t();  // rotation of inverse
cv::Mat tvecinv = -rmatinv * tvec; // translation of inverse

camToModel(cv::Range(0, 3), cv::Range(0, 3)) = rmatinv * 1.0;
camToModel(cv::Range(0, 3), cv::Range(3, 4)) = tvecinv * 1.0;

In any case, it is also useful to estimate the reprojection error and discard poses with a large error (remember, the PnP problem has a unique solution only when n = 4 and the points are coplanar):

double totalErr = 0.0;
for (size_t i = 0; i < imagePoints.size(); i++)
{
    double err = cv::norm(cv::Mat(imagePoints[i]), cv::Mat(imagePointsRP[i]), cv::NORM_L2);
    totalErr += err*err;
}

totalErr = std::sqrt(totalErr / imagePoints.size());

【Discussion】:

  • I eventually found the problem: it was an initialization issue with my calibration data. Thanks for the informative reply; it taught me to log the reprojection error!