[Question Title]: Glasses detection
[Posted]: 2014-12-12 15:44:14
[Question Description]:

What I want to do is measure the thickness of an eyeglass frame. I had the idea of measuring the thickness of the frame's contour (maybe there is a better way?). So far I have outlined the frame of the glasses, but there are gaps where the lines do not meet. I thought about using HoughLinesP, but I am not sure whether that is what I need.

So far I have performed the following steps:

  • Convert the image to grayscale
  • Create an ROI around the eye/glasses area
  • Blur the image
  • Dilate the image (done to remove any thin-framed glasses)
  • Perform Canny edge detection
  • Find the contours

These are the results:

This is my code so far:

//convert to grayscale
cv::Mat grayscaleImg;
cv::cvtColor( img, grayscaleImg, CV_BGR2GRAY );

//create ROI
cv::Mat eyeAreaROI(grayscaleImg, centreEyesRect);
cv::imshow("roi", eyeAreaROI);

//blur
cv::Mat blurredROI;
cv::blur(eyeAreaROI, blurredROI, cv::Size(3,3));
cv::imshow("blurred", blurredROI);

//dilate thin lines
cv::Mat dilated_dst;
int dilate_elem = 0;
int dilate_size = 1;
int dilate_type = cv::MORPH_RECT;

cv::Mat element = cv::getStructuringElement(dilate_type, 
    cv::Size(2*dilate_size + 1, 2*dilate_size+1), 
    cv::Point(dilate_size, dilate_size));

cv::dilate(blurredROI, dilated_dst, element);
cv::imshow("dilate", dilated_dst);

//edge detection
int lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;    

cv::Canny(dilated_dst, dilated_dst, lowThreshold, lowThreshold*ratio, kernel_size);

//create matrix of the same type and size as ROI
cv::Mat dst;
dst.create(eyeAreaROI.size(), dilated_dst.type());
dst = cv::Scalar::all(0);

dilated_dst.copyTo(dst, dilated_dst);
cv::imshow("edges", dst);

//join the lines and fill in
std::vector<cv::Vec4i> hierarchy;
std::vector<std::vector<cv::Point>> contours;

cv::findContours(dilated_dst, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
cv::imshow("contours", dilated_dst);

I am not entirely sure what the next steps should be or, as mentioned above, whether I should use HoughLinesP and how to implement it. Any help is much appreciated!

[Question Discussion]:

  • Have you considered segmentation? By whatever means necessary, classify your pixels into two sets: (1) pixels that belong to the glasses and (2) pixels that do not. Use the superpixel concept: each pixel should have various features: color, location, whether it belongs to any contour you have already found, whether it lies on an edge, and so on.
  • I think your contours are not very good because of the gaps. Try dilating your Canny result before the contour extraction, and verify your contours by drawing them onto a new image. If the contours are extracted correctly, you can compute the distance transform of the inverted filled contours. The frame thickness can then be approximated as the maximum distance found * 2.
  • Hi @William, thanks for the reply! I did consider skin detection and segmentation from there. I have also been looking into possible localization and so on. I am not sure how to detect which pixels belong to what, but I will look into it.
  • Hi @Micka, thanks for your reply too! Very helpful suggestions!! I will add another dilation after the Canny and go from there. Cheers!
  • Great, thanks. If I can find the time I will have a go at this :)

Tags: c++ opencv image-processing hough-transform canny-operator


[Solution 1]:

I think there are two main issues:

  1. Segmenting the glasses frame

  2. Finding the thickness of the segmented frame

I will now post a method to segment the glasses of your sample image. Maybe this method will work for different images too, but you will probably have to adjust parameters, or you might be able to use the main ideas.

The main idea is: First, find the biggest contour in the image, which should be the glasses. Second, find the two biggest contours within that biggest contour, which should be the lenses inside the frame!

I used this image as input (it should be your blurred but not dilated image):

// this functions finds the biggest X contours. Probably there are faster ways, but it should work...
std::vector<std::vector<cv::Point>> findBiggestContours(std::vector<std::vector<cv::Point>> contours, int amount)
{
    std::vector<std::vector<cv::Point>> sortedContours;

    if(amount <= 0) amount = contours.size();
    if(amount > contours.size()) amount = contours.size();

    for(int chosen = 0; chosen < amount; )
    {
        double biggestContourArea = 0;
        int biggestContourID = -1;
        for(unsigned int i=0; i<contours.size(); ++i)
        {
            double tmpArea = cv::contourArea(contours[i]);
            if(tmpArea > biggestContourArea)
            {
                biggestContourArea = tmpArea;
                biggestContourID = i;
            }
        }

        if(biggestContourID >= 0)
        {
            //std::cout << "found area: " << biggestContourArea << std::endl;
            // found biggest contour
            // add contour to sorted contours vector:
            sortedContours.push_back(contours[biggestContourID]);
            chosen++;
            // remove biggest contour from original vector:
            contours[biggestContourID] = contours.back();
            contours.pop_back();
        }
        else
        {
            // should never happen except for broken contours with size 0?!?
            return sortedContours;
        }

    }

    return sortedContours;
}

int main()
{
    cv::Mat input = cv::imread("../Data/glass2.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat inputColors = cv::imread("../Data/glass2.png"); // used for displaying later
    cv::imshow("input", input);

    //edge detection
    int lowThreshold = 100;
    int ratio = 3;
    int kernel_size = 3;    

    cv::Mat canny;
    cv::Canny(input, canny, lowThreshold, lowThreshold*ratio, kernel_size);
    cv::imshow("canny", canny);

    // close gaps with "close operator"
    cv::Mat mask = canny.clone();
    cv::dilate(mask,mask,cv::Mat());
    cv::dilate(mask,mask,cv::Mat());
    cv::dilate(mask,mask,cv::Mat());
    cv::erode(mask,mask,cv::Mat());
    cv::erode(mask,mask,cv::Mat());
    cv::erode(mask,mask,cv::Mat());

    cv::imshow("closed mask",mask);

    // extract outermost contour
    std::vector<cv::Vec4i> hierarchy;
    std::vector<std::vector<cv::Point>> contours;
    //cv::findContours(mask, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    cv::findContours(mask, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);


    // find biggest contour which should be the outer contour of the frame
    std::vector<std::vector<cv::Point>> biggestContour;
    biggestContour = findBiggestContours(contours,1); // find the one biggest contour
    if(biggestContour.size() < 1)
    {
        std::cout << "Error: no outer frame of glasses found" << std::endl;
        return 1;
    }

    // draw contour on an empty image
    cv::Mat outerFrame = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    cv::drawContours(outerFrame,biggestContour,0,cv::Scalar(255),-1);
    cv::imshow("outer frame border", outerFrame);

    // now find the glasses which should be the outer contours within the frame. therefore erode the outer border ;)
    cv::Mat glassesMask = outerFrame.clone();
    cv::erode(glassesMask,glassesMask, cv::Mat());
    cv::imshow("eroded outer",glassesMask);

    // after erosion if we dilate, it's an Open-Operator which can be used to clean the image.
    cv::Mat cleanedOuter;
    cv::dilate(glassesMask,cleanedOuter, cv::Mat());
    cv::imshow("cleaned outer",cleanedOuter);


    // use the outer frame mask as a mask for copying canny edges. The result should be the inner edges inside the frame only
    cv::Mat glassesInner;
    canny.copyTo(glassesInner, glassesMask);

    // there is small gap in the contour which unfortunately cant be closed with a closing operator...
    cv::dilate(glassesInner, glassesInner, cv::Mat());
    //cv::erode(glassesInner, glassesInner, cv::Mat());
    // this part was cheated... in fact we would like to erode directly after dilation to not modify the thickness but just close small gaps.
    cv::imshow("innerCanny", glassesInner);


    // extract contours from within the frame
    std::vector<cv::Vec4i> hierarchyInner;
    std::vector<std::vector<cv::Point>> contoursInner;
    //cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
    cv::findContours(glassesInner, contoursInner, hierarchyInner, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // find the two biggest contours which should be the glasses within the frame
    std::vector<std::vector<cv::Point>> biggestInnerContours;
    biggestInnerContours = findBiggestContours(contoursInner,2); // find the one biggest contour
    if(biggestInnerContours.size() < 1)
    {
        std::cout << "Error: no inner frames of glasses found" << std::endl;
        return 1;
    }

    // draw the 2 biggest contours which should be the inner glasses
    cv::Mat innerGlasses = cv::Mat::zeros(mask.rows, mask.cols, CV_8UC1);
    for(unsigned int i=0; i<biggestInnerContours.size(); ++i)
        cv::drawContours(innerGlasses,biggestInnerContours,i,cv::Scalar(255),-1);

    cv::imshow("inner frame border", innerGlasses);

    // since we dilated earlier and didnt erode quite afterwards, we have to erode here... this is a bit of cheating :-(
    cv::erode(innerGlasses,innerGlasses,cv::Mat() );

    // remove the inner glasses from the frame mask
    cv::Mat fullGlassesMask = cleanedOuter - innerGlasses;
    cv::imshow("complete glasses mask", fullGlassesMask);

    // color code the result to get an impression of segmentation quality
    cv::Mat outputColors1 = inputColors.clone();
    cv::Mat outputColors2 = inputColors.clone();
    for(int y=0; y<fullGlassesMask.rows; ++y)
        for(int x=0; x<fullGlassesMask.cols; ++x)
        {
            if(!fullGlassesMask.at<unsigned char>(y,x))
                outputColors1.at<cv::Vec3b>(y,x)[1] = 255;
            else
                outputColors2.at<cv::Vec3b>(y,x)[1] = 255;

        }

    cv::imshow("output", outputColors1);

    /*
    cv::imwrite("../Data/Output/face_colored.png", outputColors1);
    cv::imwrite("../Data/Output/glasses_colored.png", outputColors2);
    cv::imwrite("../Data/Output/glasses_fullMask.png", fullGlassesMask);
    */

    cv::waitKey(-1);
    return 0;
}

This is the segmentation result I get:

The overlay on the original image gives you an impression of the segmentation quality:

And the inverse:

There are some tricky parts in the code that have not been tidied up yet. I hope it is understandable.

The next step would be to compute the thickness of the segmented frame. My suggestion is to compute the distance transform of the inverted mask. From that you will want to compute a ridge detection, or skeletonize the mask, to find the ridge. After that, use the median of the ridge distances.

Anyway, I hope this post helps you a little, although it is not a full solution yet.

[Discussion]:

  • Hi Micka, thank you so much for taking the time to help me. I ran your code and got the following output: i.imgur.com/aNnXOlq.png which differs a bit from yours (how could that happen?), i.e. one of the inner lens contours is not closed. Any idea how I could close it? In the meantime I will browse the web and play with the code to see if I can fix it.
  • Oops, I forgot to blur the image first. :)
  • Keep in mind that similar problems can occur for different images!
  • I will write a second answer about extracting the thickness given the segmentation, when I find the time!
  • Keep in mind that you could add some heuristics to test whether the segmentation is correct: the outer contour should cover most of the upper-face image, the inner lenses should cover most of the frame contour, and both lenses should be very similar in size!
[Solution 2]:

Depending on lighting, frame color and so on, this may or may not work, but how about simple color detection to separate out the frame? Frame colors are usually much darker than human skin. You would end up with a binary image (black and white only), and by counting the number of black pixels you get the area of the frame.

Another possible approach is to get better edge detection by adjusting/dilating/eroding/both until you get better contours. You would also need to distinguish the contours from the lenses and then apply cvContourArea.

[Discussion]:

  • Thanks for the reply, Sunny! I am not so sure about color detection, but I can give it a shot! I think your later suggestion about fine-tuning the contour detection might work better, so I will also see how that goes.