The question is not very clear. What do you mean by "converting" a point cloud into a 2D image?
I will assume that by "convert" you mean "project".
In OpenCV you can project a point cloud (or any set of 3D points) onto a 2D image with cv::projectPoints: http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#projectpoints
This is based on the pinhole camera model; see for example:
http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/FUSIELLO4/tutorial.html
You may also want to look at this question:
OpenCV's projectPoints function
Keep in mind that you will not be able to reconstruct the original 3D data from the projection, because the depth information is lost.
For simplicity, we can use a "perfect" projection model (no camera lens distortion) with an arbitrary focal length. If you want to display the result as an image, you will need to adjust the focal length to your data so that the projected coordinates do not become too large, e.g. no larger than 2048, which is the width of a 2K-resolution image.
Here is an example:
#include <string>
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
std::vector<cv::Point3d> Generate3DPoints();
int main(int argc, char* argv[])
{
    // Generate the input 3D points
    std::vector<cv::Point3d> objectPoints = Generate3DPoints();
    std::vector<cv::Point2d> imagePoints;

    double f = 5.0; // arbitrary focal length

    // Perfect pinhole projection (no distortion, no principal-point offset)
    for (unsigned int i = 0; i < objectPoints.size(); i++)
    {
        const cv::Point3d& orig_point = objectPoints[i];
        imagePoints.push_back(cv::Point2d(
            f * orig_point.x / orig_point.z,  // x' = f*x/z
            f * orig_point.y / orig_point.z)  // y' = f*y/z
        );
    }

    // Print the projected 2D points
    for (unsigned int i = 0; i < imagePoints.size(); i++)
    {
        std::cout << imagePoints[i] << std::endl;
    }

    return 0;
}
std::vector<cv::Point3d> Generate3DPoints()
{
    // Seven corners of a unit cube centered at the origin
    std::vector<cv::Point3d> points;
    points.push_back(cv::Point3d( .5,  .5, -.5));
    points.push_back(cv::Point3d( .5,  .5,  .5));
    points.push_back(cv::Point3d(-.5,  .5,  .5));
    points.push_back(cv::Point3d(-.5,  .5, -.5));
    points.push_back(cv::Point3d( .5, -.5, -.5));
    points.push_back(cv::Point3d(-.5, -.5, -.5));
    points.push_back(cv::Point3d(-.5, -.5,  .5));

    for (unsigned int i = 0; i < points.size(); ++i)
    {
        std::cout << points[i] << std::endl << std::endl;
    }

    return points;
}
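To actually place the projected points inside an image of a given resolution (e.g. no wider than 2048 pixels, as mentioned above), you also need to pick the focal length from your data and shift the coordinates by the principal point. Here is a minimal sketch of that idea in plain C++ (no OpenCV dependency); ProjectToImage is a hypothetical helper name, and it simply applies the same x' = f*x/z formula as the example above, so like that example it does not handle points behind the camera specially:

#include <algorithm>
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };
struct Point2 { double x, y; };

// Project 3D points to pixel coordinates of a width x height image.
// The focal length f is chosen from the data so that every projected
// point lands inside the image bounds (hypothetical helper, not OpenCV).
std::vector<Point2> ProjectToImage(const std::vector<Point3>& pts,
                                   double width, double height)
{
    // Largest normalized coordinate |x/z| or |y/z| over all points
    double maxNorm = 0.0;
    for (size_t i = 0; i < pts.size(); ++i) {
        maxNorm = std::max(maxNorm, std::fabs(pts[i].x / pts[i].z));
        maxNorm = std::max(maxNorm, std::fabs(pts[i].y / pts[i].z));
    }

    // Scale so the largest coordinate maps to half the smaller dimension
    double f = 0.5 * std::min(width, height) / maxNorm;

    std::vector<Point2> out;
    for (size_t i = 0; i < pts.size(); ++i) {
        Point2 p;
        p.x = f * pts[i].x / pts[i].z + width  / 2.0; // principal-point shift
        p.y = f * pts[i].y / pts[i].z + height / 2.0;
        out.push_back(p);
    }
    return out;
}

With this scaling, projecting the cube corners from the example into a 2048x1080 image yields coordinates that all fall within [0, 2048] x [0, 1080].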