
Distance Measurement with Line Structured Light

A line laser scans the object while a camera photographs it, producing images that contain the structured-light stripe. The 3D coordinates of the points on the stripe are extracted; as the laser sweeps over the whole object, the 3D coordinates of all surface points can be recovered, reconstructing the surface and allowing the distance between any two points on it to be measured.

Prerequisites:

Transformations among the four coordinate systems

[Figure: the four coordinate systems — world, camera, image (physical), and pixel]

World coordinate system to camera coordinate system

The camera's optical center is taken as the origin Oc of the camera coordinate system, the optical axis as the Zc axis, and the Xc and Yc axes are parallel to the x and y axes of the image coordinate system.

[Xc, Yc, Zc]^T = R [Xw, Yw, Zw]^T + t

R is a 3x3 orthonormal rotation matrix and t is the translation vector; both are camera extrinsic parameters.

Image (physical) coordinate system to pixel coordinate system

The image x and y axes are parallel to the pixel u and v axes.

u = x/dx + u0, v = y/dy + v0, where dx and dy are the physical size of one pixel and (u0, v0) is the principal point.

In matrix form: [u, v, 1]^T = [[1/dx, 0, u0], [0, 1/dy, v0], [0, 0, 1]] [x, y, 1]^T

Image (physical) coordinate system to camera coordinate system

Zc [x, y, 1]^T = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]] [Xc, Yc, Zc, 1]^T, i.e. x = f*Xc/Zc, y = f*Yc/Zc

Pixel coordinate system to world coordinate system

fx, fy, u0, v0 are the camera intrinsic parameters; R and t are the extrinsic parameters.

Zc [u, v, 1]^T = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]] [R | t] [Xw, Yw, Zw, 1]^T, with fx = f/dx and fy = f/dy

System measurement model

[Figure: measurement model — camera center Oc, image point P' of object point P, and the light plane generated by the laser projector]

Point P lies both on the line OcP' and on the light plane (the plane formed by the structured-light projector and the light stripe on the object surface). Since the relative position of the camera and the projector is fixed, the light-plane equation does not change. Let the light plane in the camera frame be

a*Xc + b*Yc + c*Zc = d

Let P' have normalized image coordinates (x, y, 1) in the camera frame and let P = (Xc, Yc, Zc). The line OcP is then

Xc / x = Yc / y = Zc / 1

Combining the two equations gives

Zc = d / (a*x + b*y + c),  Xc = x*Zc,  Yc = y*Zc

Computing the 3D camera-frame coordinates of a space point therefore requires the light-plane equation and the image coordinates of P'; obtaining the latter requires the camera intrinsics. Hence both camera calibration and light-plane calibration are needed.
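A minimal sketch of this back-projection, assuming the plane is written as a*Xc + b*Yc + c*Zc = d and the intrinsics fx, fy, u0, v0 are known (the function name pixelToLightPlane is illustrative, not part of the project code below):

#include "opencv2/core/core.hpp"

// Back-project pixel (u, v) onto the light plane a*Xc + b*Yc + c*Zc = d.
// fx, fy, u0, v0 are the camera intrinsics; the result is expressed in the camera frame.
cv::Point3d pixelToLightPlane(double u, double v,
                              double fx, double fy, double u0, double v0,
                              double a, double b, double c, double d)
{
    double x = (u - u0) / fx;            // normalized image coordinates of the viewing ray (x, y, 1)
    double y = (v - v0) / fy;
    double Zc = d / (a * x + b * y + c); // intersect the ray Zc*(x, y, 1) with the plane
    return cv::Point3d(x * Zc, y * Zc, Zc);
}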

System implementation

Fix the camera and the laser, move the chessboard, and capture images with and without the structured light from different angles.

Extract the pixel coordinates of the chessboard corners with the Harris corner detector.

Calibrate the camera using Zhang Zhengyou's calibration method.

Extract the equation of the structured-light stripe, combine it with the lines fitted through the image corners, and compute the intersection of each corner line with the stripe.

From the corner pixel coordinates and the corner-line/stripe intersections, use cross-ratio invariance to obtain the camera-frame coordinates of a series of points on the stripe, then fit the light-plane equation in the camera frame by least squares.

With the light-plane equation, the camera-frame coordinates of any point on the light stripe can be obtained.

Implementation of camera calibration and light-plane calibration

Qt + OpenCV

Prototype of calibrateCamera:

double calibrateCamera( InputArrayOfArrays objectPoints,
                        InputArrayOfArrays imagePoints,
                        Size imageSize,
                        InputOutputArray cameraMatrix,
                        InputOutputArray distCoeffs,
                        OutputArrayOfArrays rvecs,
                        OutputArrayOfArrays tvecs,
                        int flags = 0 );

Parameters:

objectPoints: the 3D coordinates of all corner points in the world coordinate system.

Pass a vector<vector<Point3f>> objectPoints.

imagePoints: the corresponding 2D pixel coordinates of all corner points.

Pass a vector<vector<Point2f>> imagePoints.

imageSize: the size of the images.

cameraMatrix: the camera intrinsic matrix.

Passing a cv::Mat cameraMatrix is sufficient.

distCoeffs: the distortion coefficients. Passing a cv::Mat distCoeffs is sufficient.

rvecs: the rotation vectors. Pass a vector<cv::Mat>.

tvecs: the translation vectors. Pass a vector<cv::Mat>.

flags: the options used during calibration. One or more of the following can be combined:

CV_CALIB_USE_INTRINSIC_GUESS: when set, cameraMatrix should already contain estimates of fx, fy, cx, cy. Otherwise (cx, cy) is initialized to the image center and fx, fy are estimated by least squares. If the intrinsic matrix and the distortion coefficients are already known, use solvePnP() from the calibration module to compute the extrinsic parameters instead.

CV_CALIB_FIX_PRINCIPAL_POINT: the principal point is not changed during the optimization. When CV_CALIB_USE_INTRINSIC_GUESS is also set, the principal point stays at the center or at the supplied value.

CV_CALIB_FIX_ASPECT_RATIO: the ratio fx/fy is kept fixed and only fy is treated as a free variable. When CV_CALIB_USE_INTRINSIC_GUESS is not set, the input values of fx and fy are ignored; only their ratio is used in the computation.

CV_CALIB_ZERO_TANGENT_DIST: the tangential distortion coefficients (p1, p2) are set to zero.

CV_CALIB_FIX_K1, …, CV_CALIB_FIX_K6: the corresponding radial distortion coefficient is kept unchanged during the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the value from the supplied distCoeffs is used; otherwise it is set to zero.

CV_CALIB_RATIONAL_MODEL: the coefficients k4, k5, k6 are also estimated. If not set, only the other five distortion coefficients are computed.

Camera calibration needs the world coordinates of the corners. The chessboard surface is taken as the x-y plane, so every corner has z = 0; knowing the real distance between adjacent corners, the world coordinates of all corners can be computed.

It also needs the corners' pixel coordinates, which can be extracted with the OpenCV function findChessboardCorners().

Passing these two inputs to calibrateCamera yields the camera intrinsics and the extrinsic parameters of each image.
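A minimal sketch of this step, assuming the board images are already on disk (the helper name calibrateFromImages and its parameters are illustrative, not part of the project code below):

#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <string>
#include <vector>
using namespace cv;
using namespace std;

// Detect the corners of each image, build the matching world coordinates
// (z = 0 on the board plane), then run calibrateCamera.
void calibrateFromImages(const vector<string>& files, Size board_size, float square_size)
{
    vector<vector<Point3f>> object_points;
    vector<vector<Point2f>> image_points;
    Size image_size;
    for (size_t k = 0; k < files.size(); ++k) {
        Mat img = imread(files[k], 0);                   // load as greyscale
        vector<Point2f> corners;
        if (!findChessboardCorners(img, board_size, corners))
            continue;                                    // skip images where detection fails
        find4QuadCornerSubpix(img, corners, Size(5, 5)); // sub-pixel refinement
        image_points.push_back(corners);
        vector<Point3f> obj;
        for (int i = 0; i < board_size.height; ++i)
            for (int j = 0; j < board_size.width; ++j)
                obj.push_back(Point3f(j * square_size, i * square_size, 0.f)); // z = 0 on the board
        object_points.push_back(obj);
        image_size = img.size();
    }
    Mat cameraMatrix, distCoeffs;
    vector<Mat> rvecs, tvecs;                            // per-image extrinsics
    calibrateCamera(object_points, image_points, image_size,
                    cameraMatrix, distCoeffs, rvecs, tvecs);
}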

Light-plane calibration

1. Extract the center line of the structured-light stripe. The stripe pixels have values close to 255, so thresholding (binarization) yields an image containing only the stripe; goodFeaturesToTrack() picks points on the stripe, and these points are passed to fitLine() to obtain the stripe's line equation.

2. Fit a line through each row of corners, combine it with the stripe's center line, and solve for the intersection of the two lines to get the pixel coordinates of a series of points on the light stripe.

3. From the corners' world coordinates and the per-image extrinsic parameters obtained during camera calibration, compute the corners' camera-frame coordinates for every image.

4. Using the camera intrinsics, convert the corners' pixel coordinates to physical image coordinates:

x = (u - u0)/fx,  y = (v - v0)/fy

5. Let A, B, C be corners on one row and D the intersection of that corner line with the light stripe, with A', B', C', D' their image points. Knowing the physical or pixel coordinates of A', B', C', D' and the camera-frame coordinates of A, B, C, the camera-frame coordinates of D follow from cross-ratio invariance (a minimal sketch is given after this list).

[Figure: cross-ratio invariance. The cross ratio CR = ((D-B)(A-C)) / ((A-B)(D-C)) measured from the image points A', B', C', D' equals the same expression for the space points A, B, C, D.]

6. Collect the stripe/corner-line intersection points from all images and fit the light-plane equation by least squares.
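A minimal sketch of step 5, assuming a, b, c, d are the image x coordinates of A', B', C', D' and A_, B_, C_ are the corner positions in the camera frame (the function name crossRatioPoint is illustrative; the same formula is used by calCornersInCamera in the source at the end of this post):

#include "opencv2/core/core.hpp"

// Solve for D in the camera frame from the cross ratio measured in the image.
cv::Point3d crossRatioPoint(double a, double b, double c, double d,
                            const cv::Point3d& A_, const cv::Point3d& B_, const cv::Point3d& C_)
{
    // The cross ratio of four collinear points is preserved under perspective projection.
    double CR = (d - b) * (a - c) / ((a - b) * (d - c));
    // In the camera frame the same value expresses D as an affine combination of B_ and C_.
    double m = CR * (A_.x - B_.x) / (A_.x - C_.x);
    return cv::Point3d((B_.x - m * C_.x) / (1 - m),
                       (B_.y - m * C_.y) / (1 - m),
                       (B_.z - m * C_.z) / (1 - m));
}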

Measuring distance with the light-plane equation

Click any two points on the light stripe with the mouse to get their pixel coordinates, then apply

x = (u - u0)/fx,  y = (v - v0)/fy

where fx and fy are camera intrinsic parameters; the physical coordinates here are taken on the normalized image plane, so 1/dx corresponds to fx and 1/dy to fy (fx = f/dx, fy = f/dy).

Then convert the physical (normalized) coordinates to camera coordinates:

Zc = d / (a*x + b*y + c),  Xc = x*Zc,  Yc = y*Zc

Finally, the real distance between the two stripe points follows from the distance formula:

dist = sqrt((Xc1 - Xc2)^2 + (Yc1 - Yc2)^2 + (Zc1 - Zc2)^2)
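Reusing the illustrative pixelToLightPlane helper sketched in the measurement-model section, the whole measurement reduces to the following minimal sketch:

#include <cmath>
#include "opencv2/core/core.hpp"

// Distance between two clicked stripe pixels (u1, v1) and (u2, v2),
// using the illustrative pixelToLightPlane helper sketched earlier.
double stripeDistance(double u1, double v1, double u2, double v2,
                      double fx, double fy, double u0, double v0,
                      double a, double b, double c, double d)
{
    cv::Point3d P1 = pixelToLightPlane(u1, v1, fx, fy, u0, v0, a, b, c, d);
    cv::Point3d P2 = pixelToLightPlane(u2, v2, fx, fy, u0, v0, a, b, c, d);
    cv::Point3d v = P1 - P2;
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); // Euclidean distance in the camera frame
}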

Appendix:

Least-squares plane fitting

CvMat* points_mat = cvCreateMat(X_vector.size(), 3, CV_32FC1); // N x 3 matrix holding the points to be fitted
for (int i = 0; i < X_vector.size(); ++i)
{
    points_mat->data.fl[i * 3 + 0] = X_vector[i]; // X coordinate
    points_mat->data.fl[i * 3 + 1] = Y_vector[i]; // Y coordinate
    points_mat->data.fl[i * 3 + 2] = Z_vector[i]; // Z coordinate
}
float plane12[4] = { 0 };        // array holding the plane parameters
cvFitPlane(points_mat, plane12); // fit the plane
// plane12 holds A, B, C, D of the plane Ax + By + Cz = D

void cvFitPlane(const CvMat* points, float* plane){
    // Estimate geometric centroid.
    int nrows = points->rows;
    int ncols = points->cols;
    int type = points->type;
    CvMat* centroid = cvCreateMat(1, ncols, type);
    cvSet(centroid, cvScalar(0));
    for (int c = 0; c < ncols; c++){
        for (int r = 0; r < nrows; r++)
        {
            centroid->data.fl[c] += points->data.fl[ncols * r + c];
        }
        centroid->data.fl[c] /= nrows;
    }
    // Subtract geometric centroid from each point.
    CvMat* points2 = cvCreateMat(nrows, ncols, type);
    for (int r = 0; r < nrows; r++)
        for (int c = 0; c < ncols; c++)
            points2->data.fl[ncols * r + c] = points->data.fl[ncols * r + c] - centroid->data.fl[c];
    // Evaluate SVD of covariance matrix.
    CvMat* A = cvCreateMat(ncols, ncols, type);
    CvMat* W = cvCreateMat(ncols, ncols, type);
    CvMat* V = cvCreateMat(ncols, ncols, type);
    cvGEMM(points2, points, 1, NULL, 0, A, CV_GEMM_A_T);
    cvSVD(A, W, NULL, V, CV_SVD_V_T);
    // Assign plane coefficients by singular vector corresponding to smallest singular value.
    plane[ncols] = 0;
    for (int c = 0; c < ncols; c++){
        plane[c] = V->data.fl[ncols * (ncols - 1) + c];
        plane[ncols] += plane[c] * centroid->data.fl[c];
    }
    // Release allocated resources.
    cvReleaseMat(&centroid);
    cvReleaseMat(&points2);
    cvReleaseMat(&A);
    cvReleaseMat(&W);
    cvReleaseMat(&V);
}

Least-squares line fitting:

The function prototype is:

void fitLine( InputArray points,
              OutputArray line,
              int distType,
              double param,
              double reps,
              double aeps );

distType specifies the distance type used by the fit; CV_DIST_L2 can be used.

param is the constant C in the CV_DIST_FAIR, CV_DIST_WELSCH, and CV_DIST_HUBER formulas. If 0, a suitable value is chosen automatically.

reps is the accuracy of the radial distance (distance of the line from the origin); 0.01 is recommended.
aeps is the accuracy of the line's angle; 0.01 is recommended.

The fitted line is returned in line, of type cv::Vec4f. line[0] and line[1] hold the line's direction vector; line[2] and line[3] hold a point on the line.

If the line is written as y = kx + b, then k = line[1]/line[0] and b = line[3] - k * line[2].
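For example, a minimal sketch (the helper name is illustrative):

#include <vector>
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"

// Fit a line to a set of 2D points and convert the result to y = k*x + b form.
void fitLineSlopeIntercept(const std::vector<cv::Point2f>& pts, double& k, double& b)
{
    cv::Vec4f l;                                     // (vx, vy, x0, y0)
    cv::fitLine(pts, l, cv::DIST_L2, 0, 0.01, 0.01);
    k = l[1] / l[0];                                 // slope from the direction vector
    b = l[3] - k * l[2];                             // intercept through the point (x0, y0)
}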

Partial source code

Parts of it are UI setup and a stripe-removal algorithm tuned to specific images, so do not copy it blindly!

#ifndef MAINWINDOW_H

#define MAINWINDOW_H

#include <QMainWindow>
#include <QMouseEvent>
#include <QMessageBox>
#include <QFileDialog>
#include <QString>
#include <QImage>
#include <QPixmap>
#include <iostream>
#include <fstream>
#include <vector>
#include <string>
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/imgproc/types_c.h>

using namespace cv;

using namespace std;

#define IMGCOUNT 14

namespace Ui {

class MainWindow;

}

class MainWindow : public QMainWindow

{

Q_OBJECT

public:

explicit MainWindow(QWidget *parent = 0);

~MainWindow();

private slots:

void cameraCalibrate();        // camera calibration without structured light
void squareCalibrate();        // camera calibration with structured light plus light-plane calibration
Point2f getcrosspoint(Vec4f lineA, Vec4f lineB); // intersection point of two lines
void GetCrossPointAll();       // stripe / corner-line intersections of all images
void calMatrix_M();            // matrix M relating the world and camera coordinate systems
void calCameraCornerPoints();  // corner coordinates in the camera coordinate system
void calCameraCrossPoints();   // stripe-intersection coordinates in the camera coordinate system
void calCornersInCamera(Point2f A, Point2f B, Point2f C, Point2f D, Mat A_, Mat B_, Mat C_);
void fitting_light_surface();
void cvFitPlane(const CvMat* points, float* plane);
void calDistance();
void pushbutton1();

public slots:

void open();

protected:

void mousePressEvent(QMouseEvent *m); // overridden mousePressEvent

private:

double zoom=1;

vector<Point2f> mousePoint;

QString path;

private:

Ui::MainWindow *ui;

int imageCount;

string file;

Size image_size;//图像的尺寸

Size board_size; //标定板上每列,行的角点数7 size.width 代表列数 size.height 代表 行数

vector image_points_buf; //缓存每幅图像上检测到的角点

vector<vector> object_points; //保存标定板上角点的三维坐标,为标定函数的第一个参数

vector<vector> image_points_seq; //保存检测到的所有角点

Size square_size;//实际测量得到的标定板上每个棋盘格的大小,这里其实没测,就假定了一个值,

Mat cameraMatrix;//摄像机内参数

Mat distCoeffs;//畸变系数

vector tvecsMat;//每幅图像的旋转向量

vector R_matrix;//每幅图的旋转矩阵

vector rvecsMat;//每幅图像的平移向量

vectorcorners;//结构光上的点

Vec4f line_para; //输出的直线

vector M;//摄像机坐标 世界坐标 转换系数

vector<vector>corners_in_camera;//角点在相机坐标系下的坐标

vector<vector>crossPointAll;//所有图像的结构光与角点直线的坐标

//拟合点的三维坐标

vectorX_vector;

vectorY_vector;

vectorZ_vector;

float plane12[4] = { 0 };//定义用来储存平面参数的数组

};

#endif // MAINWINDOW_H

#include “mainwindow.h”

#include “ui_mainwindow.h”

MainWindow::MainWindow(QWidget *parent) :

QMainWindow(parent),

ui(new Ui::MainWindow)

{

ui->setupUi(this);

connect(ui->actioncalibrateCamera,SIGNAL(triggered()),this,SLOT(cameraCalibrate()));

connect(ui->pushButton,SIGNAL(clicked()),this,SLOT(pushbutton1()));

connect(ui->pushButton_2,SIGNAL(clicked()),this,SLOT(open()));

ui->rows->setValue(7);ui->cols->setValue(7);

ui->filename->setText("D:\\QT projects\\PIC\\struct_image_copy");

ui->dis->setValue(10);

ui->pic->setValue(11);

}

MainWindow::~MainWindow()

{

delete ui;

}

void MainWindow::cameraCalibrate()

{
    board_size.width = ui->cols->value(); board_size.height = ui->rows->value();
    QString f = "D:\\QT projects\\PIC\\cal_image_copy";
    for (int image_num = 1; image_num <= IMGCOUNT; image_num++)
    {
        QString n("\\%1.bmp"); n = n.arg(image_num);
        QString m = f + n;
        string file = string((const char *)m.toLocal8Bit());
        Mat imageInput = imread(file);
        if (!findChessboardCorners(imageInput, board_size, image_points_buf))
        {
            cout << "can not find chessboard corners!\n"; // no corners found
            return;
        }
        else
        {
            Mat view_gray; // grey image
            cvtColor(imageInput, view_gray, CV_RGB2GRAY);
            /* sub-pixel refinement */
            find4QuadCornerSubpix(view_gray, image_points_buf, Size(5, 5)); // refine the coarse corners
            drawChessboardCorners(view_gray, board_size, image_points_buf, true); // mark the corners in the image
            image_points_seq.push_back(image_points_buf); // store the sub-pixel corners
            imshow("Camera Calibration", view_gray); // show the image
            waitKey(500); // pause for half a second
        }
        image_size.width = imageInput.cols;
        image_size.height = imageInput.rows;
        imageInput.release();
    }
    /* camera calibration */
    for (int t = 0; t < IMGCOUNT; t++)
    {
        vector<Point3f> tempPointSet; // world coordinates of the corners
        for (int i = 0; i < board_size.width; i++) // each column; size.width is the number of columns
        {
            for (int j = 0; j < board_size.height; j++) // each row
            {
                Point3f realPoint;
                // assume the board lies in the z = 0 plane of the world coordinate system
                realPoint.x = j * square_size.width;
                realPoint.y = i * square_size.height;
                realPoint.z = 0;
                tempPointSet.push_back(realPoint);
            }
        }
        object_points.push_back(tempPointSet);
    }
    // intrinsic and extrinsic parameter containers
    cameraMatrix = Mat(3, 3, CV_32FC1, Scalar::all(0)); // camera intrinsic matrix
    // vector<int> point_counts; // number of corners in each image
    distCoeffs = Mat(1, 5, CV_64F, Scalar::all(0)); // the 5 distortion coefficients: k1, k2, p1, p2, k3
    calibrateCamera(object_points, image_points_seq, image_size, cameraMatrix, distCoeffs, rvecsMat, tvecsMat, 0); // calibrate
    /* save the intrinsic and extrinsic parameters */
    ofstream fout("caliberation_result.txt"); // file holding the calibration results
    fout << "camera intrinsic matrix:" << endl;
    fout << cameraMatrix << endl << endl;
    fout << "distortion coefficients:\n";
    fout << distCoeffs << endl << endl << endl;
    waitKey(0); // wait for a key press

}

void MainWindow::squareCalibrate()

{

object_points.clear(); image_points_seq.clear();
for (int image_num = 1; image_num <= imageCount; image_num++)
{
    QString n("\\%1.bmp"); n = n.arg(image_num);
    QString m = path + n;
    file = string((const char *)m.toLocal8Bit());
    // sprintf(filenames, "D:\\QT projects\\PIC\\struct_image_copy\\%d.bmp", image_num);
    Mat origin_image = imread(file);
    Mat grey_image;
    cvtColor(origin_image, grey_image, CV_RGB2GRAY);
    CV_Assert(grey_image.depth() != sizeof(uchar));
    int row = grey_image.rows;
    int col = grey_image.cols;
    uchar* p;
    /* stepwise suppression of the bright (stripe) pixels */
    for (int i = 0; i < row; i++)
    {
        p = grey_image.ptr(i);
        for (int j = 0; j < col; j++)
        {
            if (p[j] > 250) p[j] -= 100;
            if (p[j] > 220) p[j] -= 70;
            if (p[j] > 190) p[j] -= 30;
            if (p[j] > 170) p[j] -= 10;
        }
    }
    /* remove the remaining bright spots on the black squares */
    for (int i = 0; i < row; i++)
    {
        p = grey_image.ptr(i);
        for (int j = 0; j < col; j++)
        {
            if (p[j] > 140) {
                if (j < 15 && p[j + 15] < 100)
                    p[j] = p[j + 15];
                if ((j > 15) && (j < col - 15))
                {
                    if (p[j + 15] < 100) p[j] = p[j + 15];
                    if (p[j - 15] < 100) p[j] = p[j - 15];
                }
                if (j > col - 15 && p[j - 15] < 100)
                    p[j] = p[j - 15];
            }
        }
    }
    /* erosion to clean up further */
    Mat ele = getStructuringElement(MORPH_RECT, Size(4, 4));
    Mat ele2 = getStructuringElement(MORPH_RECT, Size(2, 2));
    erode(grey_image, grey_image, ele);   // erosion removes the residual stripe
    dilate(grey_image, grey_image, ele2); // dilation to recover corner accuracy
    // imshow("after erode operation", grey_image);
    Mat imageInput = grey_image.clone();
    if (!findChessboardCorners(imageInput, board_size, image_points_buf, CV_CALIB_CB_FILTER_QUADS))
    {
        cout << "can not find chessboard corners!\n"; // no corners found
        return;
    }
    else
    {
        Mat view_gray = grey_image.clone();
        /* sub-pixel refinement */
        find4QuadCornerSubpix(view_gray, image_points_buf, Size(5, 5)); // refine the coarse corners
        drawChessboardCorners(view_gray, board_size, image_points_buf, true); // mark the corners in the image
        image_points_seq.push_back(image_points_buf); // store the sub-pixel corners
        // imshow("Camera Calibration", view_gray);
        waitKey(100);
    }
    image_size.width = imageInput.cols;
    image_size.height = imageInput.rows;
    imageInput.release();

}

/* camera calibration */
for (int t = 0; t < imageCount; t++)
{
    vector<Point3f> tempPointSet;
    for (int i = 0; i < board_size.width; i++) // each column
    {
        for (int j = 0; j < board_size.height; j++) // each row
        {
            Point3f realPoint;
            // assume the board lies in the z = 0 plane of the world coordinate system
            realPoint.x = j * square_size.width;  // width of one square
            realPoint.y = i * square_size.height;
            realPoint.z = 0;
            tempPointSet.push_back(realPoint);
        }
    }
    object_points.push_back(tempPointSet);
}
// intrinsic and extrinsic parameter containers
cameraMatrix = Mat(3, 3, CV_32FC1, Scalar::all(0)); // camera intrinsic matrix
// int point_counts; // number of corners in each image
distCoeffs = Mat(1, 5, CV_32FC1, Scalar::all(0)); // the 5 distortion coefficients: k1, k2, p1, p2, k3
calibrateCamera(object_points, image_points_seq, image_size, cameraMatrix, distCoeffs, rvecsMat, tvecsMat, 0); // calibrate

/* binarization */
Mat origin_image = imread(file); // re-read the last image of the sequence
Mat grey_image;
cvtColor(origin_image, grey_image, CV_RGB2GRAY);
// imshow("origin", origin_image);
// imshow("grey", grey_image);
// waitKey(100);
Mat binary_image;
threshold(grey_image, binary_image, 240, 255, CV_THRESH_BINARY);
// imshow("binary", binary_image);
// find and store points on the binarized image
goodFeaturesToTrack(binary_image, corners, 18, 0.01, 10, Mat());
for (unsigned int i = 0; i < corners.size(); i++)
{
    circle(origin_image, corners[i], 2, Scalar(255, 0, 0), 2);
}
// imshow("binary points", origin_image);
/* line fitting */
fitLine(corners, line_para, cv::DIST_L2, 0, 1e-2, 1e-2);
cv::Point point0;
point0.x = line_para[2];
point0.y = line_para[3];
double k = line_para[1] / line_para[0];
// compute the end points of the line (y = k(x - x0) + y0)
Point point1, point2;
point1.x = 0;
point1.y = k * (0 - point0.x) + point0.y;
point2.x = origin_image.cols;
point2.y = k * (origin_image.cols - point0.x) + point0.y;
cv::line(origin_image, point1, point2, cv::Scalar(255, 255, 0), 2, 8, 0);
// cv::imshow("image", origin_image);
/* intersections of the stripe with the corner lines */
GetCrossPointAll();
/* convert the rotation vectors to rotation matrices */
for (int i = 0; i < imageCount; i++)
{
    Mat temp;
    Rodrigues(rvecsMat[i], temp);
    R_matrix.push_back(temp);
}
/* compute the M matrices */
calMatrix_M();
/* corner coordinates in the camera frame */
calCameraCornerPoints();
/* camera coordinates of the stripe intersections via cross-ratio invariance */
calCameraCrossPoints();
/* fit the light plane */
fitting_light_surface();
/* save the intrinsic and extrinsic parameters */
ofstream fout("struct_caliberation_result.txt"); // file holding the calibration results
fout << "camera intrinsic matrix:" << endl;
fout << cameraMatrix << endl << endl;
fout << "distortion coefficients:\n";
fout << distCoeffs << endl;
fout << "corner pixel coordinates of the first image" << endl << image_points_seq[0] << endl;
fout << "corner world coordinates of the first image" << endl << object_points[0] << endl;
fout << "intersection pixel coordinates of the first image" << endl << crossPointAll[0] << endl;
fout << "a b c d" << image_points_seq[10][0] << endl << image_points_seq[10][3] << endl << image_points_seq[10][6] << endl << crossPointAll[1][0] << endl;
fout << "camera" << endl << corners_in_camera[6][0] << endl << corners_in_camera[6][3] << endl << corners_in_camera[6][6] << endl;
fout << plane12[0] << " " << plane12[1] << " " << plane12[2] << " " << plane12[3] << endl;
for (int i = 0; i < 50; i++)
    fout << "X_vector" << " " << X_vector[i] << " " << "Y_vector" << " " << Y_vector[i] << " " << "Z_vector" << " " << Z_vector[i] << endl;
waitKey(0); // wait for a key press

}

Point2f MainWindow:: getcrosspoint(Vec4f lineA,Vec4f lineB)

{ // intersection point of two lines
    double ka = lineA[1] / lineA[0];
    double kb = lineB[1] / lineB[0];
    Point2f cross_point;
    cross_point.x = (lineB[3] - lineA[3] + ka * lineA[2] - kb * lineB[2]) / (ka - kb);
    cross_point.y = ka * (cross_point.x - lineA[2]) + lineA[3];

​ return cross_point;

}

void MainWindow:: GetCrossPointAll()

{

for (int i = 0; i < imageCount; i++) // every image
{
    vector<Point2f> crossPointPerPic;
    for (int j = 0; j < board_size.height; j++) // every corner row
    {
        vector<Point2f> temp; // corners of this row
        for (int m = 0; m < board_size.width; m++)
            temp.push_back(image_points_seq[i][j * 7 + m]);
        Vec4f para;
        fitLine(temp, para, DIST_L2, 0, 1e-2, 1e-2);          // fit the line through this row of corners
        Point2f temp_point = getcrosspoint(para, line_para);  // intersection of the corner line and the stripe
        crossPointPerPic.push_back(temp_point);
    }
    crossPointAll.push_back(crossPointPerPic);

}

}

void MainWindow::calMatrix_M()

{

// build, from the rotation matrix and the translation vector, the 4x4 matrix M
// relating the camera coordinate system and the world coordinate system
for (int k = 0; k < imageCount; k++)
{
    Mat temp(4, 4, CV_32F);
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            temp.at<float>(i, j) = (float)R_matrix[k].at<double>(i, j);
    temp.at<float>(0, 3) = (float)tvecsMat[k].at<double>(0);
    temp.at<float>(1, 3) = (float)tvecsMat[k].at<double>(1);
    temp.at<float>(2, 3) = (float)tvecsMat[k].at<double>(2);
    temp.at<float>(3, 0) = 0; temp.at<float>(3, 1) = 0; temp.at<float>(3, 2) = 0; temp.at<float>(3, 3) = 1;
    M.push_back(temp);

}

}

void MainWindow::calCameraCornerPoints()

{ // compute the corner coordinates in the camera coordinate system
    for (int i = 0; i < imageCount; i++)
    {
        vector<Mat> temp;
        for (int j = 0; j < board_size.height; j++)
            for (int k = 0; k < board_size.width; k++)
            {
                Mat temp2(4, 1, CV_32F);
                Mat world(4, 1, CV_32F);
                world.at<float>(0) = object_points[i][j * 7 + k].x; // corner in world coordinates
                world.at<float>(1) = object_points[i][j * 7 + k].y;
                world.at<float>(2) = object_points[i][j * 7 + k].z;
                world.at<float>(3) = 1;
                temp2 = M[i] * world; // transform into the camera coordinate system
                temp.push_back(temp2);
            }
        corners_in_camera.push_back(temp);

}

}

void MainWindow::calCameraCrossPoints()

{

for (int i = 0; i < imageCount; i++)
{
    for (int j = 0; j < board_size.height; j++)
    {   // take three corners from each row
        Point2f A = image_points_seq[i][j * 7];      // the 1st, 4th and 7th corners of the row
        Point2f B = image_points_seq[i][j * 7 + 3];
        Point2f C = image_points_seq[i][j * 7 + 6];
        Point2f D = crossPointAll[i][j];             // the stripe intersection of this row
        Mat A_ = corners_in_camera[i][j * 7];        // the same corners in the camera frame
        Mat B_ = corners_in_camera[i][j * 7 + 3];
        Mat C_ = corners_in_camera[i][j * 7 + 6];
        calCornersInCamera(A, B, C, D, A_, B_, C_);  // convert D to camera coordinates

​ }

}

}

void MainWindow::calCornersInCamera(Point2f A,Point2f B,Point2f C,Point2f D,Mat A_,Mat B_,Mat C_)

{
    double CR = (D.x - B.x) * (A.x - C.x) / ((A.x - B.x) * (D.x - C.x)); // cross ratio from the pixel coordinates
    // By cross-ratio invariance the same value holds in the camera frame, so with the pixel coordinates of
    // A, B, C, D and the camera coordinates A_, B_, C_ the camera coordinates of D can be solved for.
    // Pixel and physical image coordinates are related linearly, so the pixel-to-image conversion can be
    // skipped: the cross ratio is identical in both.
    double m = CR * (A_.at<float>(0) - B_.at<float>(0)) / (A_.at<float>(0) - C_.at<float>(0));
    double x = (B_.at<float>(0) - m * C_.at<float>(0)) / (1 - m);
    double y = (B_.at<float>(1) - m * C_.at<float>(1)) / (1 - m);
    double z = (B_.at<float>(2) - m * C_.at<float>(2)) / (1 - m);
    X_vector.push_back(x); Y_vector.push_back(y); Z_vector.push_back(z);

}

void MainWindow::fitting_light_surface()

{

CvMat* points_mat = cvCreateMat(X_vector.size(), 3, CV_32FC1); // N x 3 matrix holding the points to be fitted
for (unsigned int i = 0; i < X_vector.size(); ++i)
{
    points_mat->data.fl[i * 3 + 0] = X_vector[i]; // X coordinate
    points_mat->data.fl[i * 3 + 1] = Y_vector[i]; // Y coordinate
    points_mat->data.fl[i * 3 + 2] = Z_vector[i]; // Z coordinate
}
cvFitPlane(points_mat, plane12); // fit the plane

}

void MainWindow::cvFitPlane(const CvMat* points, float* plane)

{

int nrows = points->rows;
int ncols = points->cols;
int type = points->type;
// Estimate geometric centroid.
CvMat* centroid = cvCreateMat(1, ncols, type);
cvSet(centroid, cvScalar(0));
for (int c = 0; c < ncols; c++)
{
    for (int r = 0; r < nrows; r++)
    {
        centroid->data.fl[c] += points->data.fl[ncols * r + c];
    }
    centroid->data.fl[c] /= nrows;
}
// Subtract geometric centroid from each point.
CvMat* points2 = cvCreateMat(nrows, ncols, type);
for (int r = 0; r < nrows; r++)
    for (int c = 0; c < ncols; c++)
        points2->data.fl[ncols * r + c] = points->data.fl[ncols * r + c] - centroid->data.fl[c];
// Evaluate SVD of covariance matrix.
CvMat* A = cvCreateMat(ncols, ncols, type);
CvMat* W = cvCreateMat(ncols, ncols, type);
CvMat* V = cvCreateMat(ncols, ncols, type);
cvGEMM(points2, points, 1, NULL, 0, A, CV_GEMM_A_T);
cvSVD(A, W, NULL, V, CV_SVD_V_T);
// Assign plane coefficients by singular vector corresponding to smallest singular value.
plane[ncols] = 0;
for (int c = 0; c < ncols; c++){
    plane[c] = V->data.fl[ncols * (ncols - 1) + c];
    plane[ncols] += plane[c] * centroid->data.fl[c];
}
// Release allocated resources.
cvReleaseMat(&centroid);
cvReleaseMat(&points2);
cvReleaseMat(&A);
cvReleaseMat(&W);
cvReleaseMat(&V);

}

void MainWindow::mousePressEvent(QMouseEvent *e)

{

​ static int i=0;

if (i < 2)
{
    if (i == 0) {
        ui->x1->setText(QString::number(0)); ui->y1->setText(QString::number(0));
        ui->x2->setText(QString::number(0)); ui->y2->setText(QString::number(0));
        ui->realdis->setText(QString::number(0));
    }
    Point2f temp;
    temp.x = e->x();
    temp.y = e->y() - 34; // subtract the height of the blank strip at the top of the MainWindow
    if (i == 0) { ui->x1->setText(QString::number(temp.x)); ui->y1->setText(QString::number(temp.y)); }
    if (i == 1) { ui->x2->setText(QString::number(temp.x)); ui->y2->setText(QString::number(temp.y)); }
    mousePoint.push_back(temp);
    ++i;
}
else
{   // after two points have been collected, the next click computes the distance and clears mousePoint
    if (image_points_seq.size() != 0)
    {
        calDistance();
        i -= 2; mousePoint.pop_back(); mousePoint.pop_back();
    }
    else
        QMessageBox::warning(this, tr("warning"),
                             tr("the camera and the light plane have not been calibrated"));

}

}

void MainWindow::open()

{

QString path = QFileDialog::getOpenFileName(
        this,
        "Open image",
        "../",                               // parent directory
        "Image(*.bmp *.jpg *.png)");
QImage* image = new QImage(path);
if (image->width() > 1500) zoom = (double)image->width() / 1500; // scale down images wider than 1500 px; zoom is the scale factor
int width = image->width() / zoom;   // scaled width
int height = image->height() / zoom; // scaled height
// QPixmap fitpixmap = pixmap.scaled(width, height, Qt::KeepAspectRatio, Qt::SmoothTransformation); // fill keeping aspect ratio
QPixmap pixmap = QPixmap::fromImage(*image);
QPixmap fitpixmap = pixmap.scaled(width, height, Qt::IgnoreAspectRatio, Qt::SmoothTransformation); // scale proportionally
// resize the label so the picture fills it completely
ui->label->resize(width, height);
ui->label->setPixmap(fitpixmap);

}

void MainWindow::calDistance()

{

// fetch the needed data
// double fx=7079.108034043226; double fy=7138.477799905151;
// double u0=1385.469717666468; double v0=1009.67646851548;
// double a=0.999773; double b=-0.0105343; double c=0.0185257; double d=-19.9609;
double fx = cameraMatrix.at<double>(0, 0);
double fy = cameraMatrix.at<double>(1, 1);
double u0 = cameraMatrix.at<double>(0, 2);
double v0 = cameraMatrix.at<double>(1, 2);
double a = (double)plane12[0]; double b = (double)plane12[1]; double c = (double)plane12[2]; double d = (double)plane12[3];
Mat physic_to_pixel(3, 3, CV_32F); // matrix relating normalized (physical) and pixel coordinates
physic_to_pixel.at<float>(0, 0) = (float)fx; physic_to_pixel.at<float>(0, 1) = 0; physic_to_pixel.at<float>(0, 2) = (float)u0;
physic_to_pixel.at<float>(1, 0) = 0; physic_to_pixel.at<float>(1, 1) = (float)fy; physic_to_pixel.at<float>(1, 2) = (float)v0;
physic_to_pixel.at<float>(2, 0) = 0; physic_to_pixel.at<float>(2, 1) = 0; physic_to_pixel.at<float>(2, 2) = 1;
// pixel, camera, and normalized coordinates of the two points
Mat pixel1(3, 1, CV_32F); Mat pixel2(3, 1, CV_32F);
Mat camera1(3, 1, CV_32F); Mat camera2(3, 1, CV_32F);
Mat physic1(3, 1, CV_32F); Mat physic2(3, 1, CV_32F);
// fill in and compute
pixel1.at<float>(0) = mousePoint[0].x * zoom; pixel1.at<float>(1) = mousePoint[0].y * zoom; pixel1.at<float>(2) = 1;
pixel2.at<float>(0) = mousePoint[1].x * zoom; pixel2.at<float>(1) = mousePoint[1].y * zoom; pixel2.at<float>(2) = 1;
physic1 = physic_to_pixel.inv() * pixel1; physic2 = physic_to_pixel.inv() * pixel2;
camera1.at<float>(0) = (d / (a * physic1.at<float>(0) + b * physic1.at<float>(1) + c)) * physic1.at<float>(0);
camera1.at<float>(1) = (d / (a * physic1.at<float>(0) + b * physic1.at<float>(1) + c)) * physic1.at<float>(1);
camera1.at<float>(2) = (d / (a * physic1.at<float>(0) + b * physic1.at<float>(1) + c)) * physic1.at<float>(2);
camera2.at<float>(0) = (d / (a * physic2.at<float>(0) + b * physic2.at<float>(1) + c)) * physic2.at<float>(0);
camera2.at<float>(1) = (d / (a * physic2.at<float>(0) + b * physic2.at<float>(1) + c)) * physic2.at<float>(1);
camera2.at<float>(2) = (d / (a * physic2.at<float>(0) + b * physic2.at<float>(1) + c)) * physic2.at<float>(2);
// distance between the two points in the camera frame
double dis = sqrt((camera1.at<float>(0) - camera2.at<float>(0)) * (camera1.at<float>(0) - camera2.at<float>(0))
                + (camera1.at<float>(1) - camera2.at<float>(1)) * (camera1.at<float>(1) - camera2.at<float>(1))
                + (camera1.at<float>(2) - camera2.at<float>(2)) * (camera1.at<float>(2) - camera2.at<float>(2)));
ui->realdis->setText(QString::number(dis));

}

void MainWindow::pushbutton1()

{

path=ui->filename->toPlainText();

board_size.width=ui->cols->value();board_size.height=ui->rows->value();

square_size.width=square_size.height=ui->dis->value();

imageCount=ui->pic->value();

squareCalibrate();

QString text("%1x+%2y+%3z\n=%4"); text = text.arg(plane12[0]).arg(plane12[1]).arg(plane12[2]).arg(plane12[3]);

ui->surface->setText(text);

}