【Title】: classification of gravel/aggregate
【Posted】: 2022-01-16 23:23:17
【Question】:

I am trying to measure the width of individual gravel particles. I need this to decide whether a particle is fine or coarse gravel. Can you help me find the two extreme points of a gravel particle's contour? So far I have only managed to extract a mask from the picture (photos below the code). My current code:

import cv2
import numpy as np

def empty(a):
    pass

path = "materials/gr2.jpeg"
path2 = "materials/gr1.jpeg"

# Trackbars for tuning the HSV threshold interactively
cv2.namedWindow("TrackBars")
#cv2.resizeWindow("TrackBars",740,280)

cv2.createTrackbar("Hue Min", "TrackBars", 0, 179, empty)
cv2.createTrackbar("Hue Max", "TrackBars", 179, 179, empty)
cv2.createTrackbar("Sat Min", "TrackBars", 0, 255, empty)
cv2.createTrackbar("Sat Max", "TrackBars", 255, 255, empty)
cv2.createTrackbar("Val Min", "TrackBars", 147, 255, empty)
cv2.createTrackbar("Val Max", "TrackBars", 255, 255, empty)

img = cv2.imread(path)
img2 = cv2.imread(path2)
imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
imgHSV2 = cv2.cvtColor(img2, cv2.COLOR_BGR2HSV)

while True:
    # Read the current slider positions
    h_min = cv2.getTrackbarPos("Hue Min", "TrackBars")
    h_max = cv2.getTrackbarPos("Hue Max", "TrackBars")
    s_min = cv2.getTrackbarPos("Sat Min", "TrackBars")
    s_max = cv2.getTrackbarPos("Sat Max", "TrackBars")
    v_min = cv2.getTrackbarPos("Val Min", "TrackBars")
    v_max = cv2.getTrackbarPos("Val Max", "TrackBars")
    print(h_min, h_max, s_min, s_max, v_min, v_max)

    # Keep only the pixels inside the chosen HSV range
    lower = np.array([h_min, s_min, v_min])
    upper = np.array([h_max, s_max, v_max])
    mask = cv2.inRange(imgHSV, lower, upper)
    mask2 = cv2.inRange(imgHSV2, lower, upper)

    cv2.imshow("Mask2", mask2)
    cv2.imshow("Mask", mask)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits the loop
        break

【Discussion】:

  • You need "texture analysis". You might get something out of a Fourier transform. You also need to know the camera's field of view and the distance to the gravel, or keep both of those constant. -- The code you posted does nothing toward estimating the grain size of the gravel in the picture.
  • I don't see anything immediately useful in the Fourier transform, but a difference of Gaussians (a band-pass) could help classify it. In any case, texture analysis. A broad field with a large body of publications.
  • I don't think these pictures are suitable for measuring the size of individual particles.
  • Wouldn't measuring the average grain size in one picture be enough?
  • You could try "superpixels" as an approximation of that. It is a texture-sensitive segmentation, except that it breaks things down one step further than "objects". For these gravel photos it might just work. It would be interesting to see the result. docs.opencv.org/3.4/df/d6c/group__ximgproc__superpixel.html

Tags: python opencv object-detection


【Solution 1】:

Here are some statistics computed with a Laplacian pyramid.

Ignore the top few levels; those are dominated by the uneven lighting and by the mix of wet and dry gravel.

You can see that in the lower/finer levels (towards 10) you get more response from the fine gravel, while the coarse gravel's response peaks in the levels above that.

coarse vs fine
[ 0]   351399 :   385660 # ignore that, that's the DC component
[ 1]       75 :       95
[ 2]      177 :      184
[ 3]      130 :       78
[ 4]      408 :       94
[ 5]     1352 :      215
[ 6]     4051 :      706
[ 7]     7784 :     2123
[ 8]     8521 :     4814
[ 9]     6838 :     8108
[10]     8207 :    12775
#!/usr/bin/env python3

import numpy as np
import cv2 as cv

np.set_printoptions(suppress=True, linewidth=120)

im1 = cv.imread("coarse dCrrR.jpg", cv.IMREAD_GRAYSCALE)
im2 = cv.imread("fine xvmKD.jpg", cv.IMREAD_GRAYSCALE)

levels = 10
sw = sh = 2**levels

def take_sample(im):
    h,w = im.shape[:2]
    return im[(h-sh) // 2 : (h+sh) // 2, (w-sw) // 2 : (w+sw) // 2]

def gaussian_pyramid(sample):
    gp = [sample]
    for k in range(levels):
        sample = cv.pyrDown(sample)
        gp.append(sample)
    return gp

def laplacian_pyramid(gp):
    lp = [gp[-1]] # "base" gaussian

    for k in reversed(range(levels)):
        diff = gp[k] - cv.pyrUp(gp[k+1])
        lp.append(diff)

    return lp

sample1 = take_sample(im1) * np.float32(1/255)
sample2 = take_sample(im2) * np.float32(1/255)

gp1 = gaussian_pyramid(sample1)
gp2 = gaussian_pyramid(sample2)

lp1 = laplacian_pyramid(gp1)
lp2 = laplacian_pyramid(gp2)

print("coarse vs fine")

for i,(level1,level2) in enumerate(zip(lp1, lp2)):
    area = 2**(2*i)

    sse1 = (level1**2).sum() / area
    sse2 = (level2**2).sum() / area
    print(f"[{i:2d}] {sse1*1e6:8.0f} : {sse2*1e6:8.0f}")
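To turn those per-level energies into a coarse/fine decision, one option is to compare where each sample's response peaks. A sketch using the numbers from the table above (DC level dropped; `dominant_level` is a hypothetical helper, not part of the answer's code):

```python
import numpy as np

# Per-level band-pass energies copied from the table above,
# with level 0 (the DC component) already dropped.
coarse_energy = [75, 177, 130, 408, 1352, 4051, 7784, 8521, 6838, 8207]
fine_energy   = [95, 184,  78,  94,  215,  706, 2123, 4814, 8108, 12775]

def dominant_level(energies):
    # Index of the pyramid level with the strongest response;
    # a higher index means finer detail in this layout.
    return int(np.argmax(energies))

# Coarse gravel peaks at a lower level than fine gravel.
print(dominant_level(coarse_energy), dominant_level(fine_energy))
```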

【Discussion】:

    【Solution 2】:

    If a rough estimate of the average gravel size in the image is enough, you can try this algorithm. It really is very crude and may be inaccurate (you would have to test it on more images to see whether it is statistically sound):

    1. threshold for bright gravel stones
    2. subtract the edges between stones
    3. extract external contours and discard too small ones (magic number)
    4. choose any comparison point in the sorted contour area list (e.g. median = 50%)
    

    This gives these results:

    fine image => median contour area: 31

    coarse image => median contour area: 89.5

    using these mask images, on which the contours were computed:

    from this source code:

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main()
    {
        try
        {
            cv::Mat img = cv::imread("C:/data/StackOverflow/gravel/coarse.jpg", cv::IMREAD_GRAYSCALE);

            // 1. Otsu threshold for the bright gravel stones
            cv::Mat thresh;
            double t = cv::threshold(img, thresh, 255, 255, cv::THRESH_OTSU | cv::THRESH_BINARY);
            //thresh = img > t * 1.5; // doesn't work as well as removing the edges

            // 2. Sobel gradient magnitude marks the edges between stones
            cv::Mat sobelX, sobelY;
            cv::Sobel(img, sobelX, CV_32FC1, 1, 0, 3, 1.0, 0);
            cv::Sobel(img, sobelY, CV_32FC1, 0, 1, 3, 1.0, 0);

            cv::Mat sobelMag_FC;
            cv::Mat sobelMag_8U;
            cv::Mat sobelBin;
            cv::magnitude(sobelX, sobelY, sobelMag_FC);
            sobelMag_FC.convertTo(sobelMag_8U, CV_8U);
            double t2 = cv::threshold(sobelMag_8U, sobelBin, 255, 255, cv::THRESH_OTSU | cv::THRESH_BINARY);
            //sobelBin = sobelMag_FC > 150; // doesn't work as well

            cv::Mat gravel = thresh - sobelBin;

            // 3. external contours, discarding tiny ones
            std::vector<std::vector<cv::Point> > contours;
            cv::findContours(gravel, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
            std::vector<double> contourAreas;
            for (size_t i = 0; i < contours.size(); ++i)
            {
                double area = cv::contourArea(contours[i]);
                if (area > 10) contourAreas.push_back(area); // magic number for minimum contour area...
            }
            std::sort(contourAreas.begin(), contourAreas.end());

            // 4. choose a single comparison point (e.g. 50% position in the list => median)
            std::cout << contourAreas.at(contourAreas.size() / 2) << std::endl;

            cv::imwrite("C:/data/StackOverflow/gravel/coarse_mask.png", gravel);
        }
        catch (std::exception& e)
        {
            std::cout << e.what() << std::endl;
            std::cin.get();
        }
    }
    

    You can see that there are many tiny contours, stones merged into one blob, etc., so the whole algorithm may well turn out to be garbage in the end, but it produced more or less plausible relative size results for the two images.

    Adding a max contour size of 1000 and testing more comparison points:
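    With hypothetical area lists shaped like the outputs above (the values below are illustrative, not measured), capping the contour area and reading several percentiles could look like:

```python
import numpy as np

# Hypothetical sorted contour areas for the two images (assumption:
# values like those produced by the program above, including a few
# oversized blobs from merged stones).
fine_areas = np.array([12, 18, 25, 31, 40, 55, 80, 120, 400, 2500])
coarse_areas = np.array([15, 30, 60, 89.5, 110, 160, 300, 700, 1800, 4000])

def robust_percentiles(areas, max_area=1000, qs=(25, 50, 75)):
    # Drop merged-stone blobs above the cap, then read several
    # comparison points instead of only the median.
    kept = areas[areas <= max_area]
    return np.percentile(kept, qs)

print(robust_percentiles(fine_areas))
print(robust_percentiles(coarse_areas))
```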

    【Discussion】:
