【Title】: How to obtain and store centroid coordinates of rooms of a floor plan image?
【Posted】: 2020-03-25 16:32:12
【Description】:

I have a floor plan containing several rooms. Using Python, I want to find the center of each room and store the coordinates as (x, y) points so that I can use them for further mathematical calculations. The existing drawContours and findContours functions help determine the contours, but how do I store the values they produce in a list?

This image represents a sample floor plan containing several rooms.

I tried using moments, but that function doesn't work correctly. As you can see, this image was obtained from the drawContours function. But then how do I store the x and y coordinates?

Here is my code:

import cv2
from google.colab.patches import cv2_imshow  # Colab's display helper

# `img` and `contours` are assumed to be defined earlier,
# e.g. contours from cv2.findContours on a thresholded copy of the image.
font = cv2.FONT_HERSHEY_SIMPLEX

k = []
# Go through every contour found in the image.
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.009 * cv2.arcLength(cnt, True), True)

    # Draw the boundary of the contour.
    cv2.drawContours(img, [approx], -1, (0, 0, 255), 3)

    # Flatten the array containing the co-ordinates of the vertices.
    n = approx.ravel()

    # Walk the flattened array two values at a time: (x, y) pairs.
    for i in range(0, len(n), 2):
        x = n[i]
        y = n[i + 1]

        # String containing the co-ordinates.
        string = str(x) + " ," + str(y)

        if i == 0:
            # Text on the topmost co-ordinate.
            cv2.putText(img, string, (x, y), font, 0.5, (255, 0, 0))
        else:
            # Text on the remaining co-ordinates.
            cv2.putText(img, string, (x, y), font, 0.5, (0, 255, 0))
        k.append(str((x, y)))

# Show the final image.
cv2_imshow(img)
# Exit the window if 'q' is pressed on the keyboard.
if cv2.waitKey(0) & 0xFF == ord('q'):
    cv2.destroyAllWindows()
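As an aside, the vertex coordinates can be collected as numeric (x, y) tuples instead of strings by reshaping the approximation array directly. A small sketch, using a hypothetical hand-written array in the (N, 1, 2) shape that cv2.approxPolyDP returns:

```python
import numpy as np

# A hypothetical result of cv2.approxPolyDP: OpenCV returns shape (N, 1, 2).
approx = np.array([[[10, 20]], [[30, 40]], [[50, 60]]], dtype=np.int32)

# Reshape to (N, 2) and convert each row to a plain (x, y) tuple of Python ints.
points = [tuple(int(v) for v in pt) for pt in approx.reshape(-1, 2)]
print(points)  # [(10, 20), (30, 40), (50, 60)]
```

Storing tuples of ints rather than strings keeps the coordinates usable for arithmetic later.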

【Comments】:

  • You say moments doesn't work, but you don't show that code. Please always provide your code so that others can see whether you have an error. Once you have a contour, you can get its bounding box, from which you can easily compute the center.
  • Not for now; it gives a ZeroDivisionError. That's why I didn't include it.
  • You still haven't shown your centroid code. Given a valid contour, you should be able to get the centroid from: `M = cv2.moments(cntr); cx = int(M["m10"] / M["m00"]); cy = int(M["m01"] / M["m00"])`

【Tags】: python image opencv image-processing computer-vision


【Solution 1】:

Here's a simple approach:

  1. Obtain a binary image. Load the image, convert it to grayscale, and apply Otsu's threshold.

  2. Remove the text. We find contours and then filter using the contour area to remove contours smaller than a certain threshold. We effectively erase them by filling them in with cv2.drawContours.

  3. Find the rectangular boxes and obtain the centroid coordinates. We find contours again, then filter using the contour area and contour approximation. We then compute the moments of each contour, which give us the centroid.


Here's a visualization:

Removed text

Result

Coordinates

[(93, 241), (621, 202), (368, 202), (571, 80), (317, 79), (93, 118)]

Code

import cv2
import numpy as np

# Load image, grayscale, Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Remove text
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    if area < 1000:
        cv2.drawContours(thresh, [c], -1, 0, -1)

thresh = 255 - thresh
result = cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR)
coordinates = []

# Find rectangular boxes and obtain centroid coordinates
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.05 * peri, True)
    if len(approx) == 4 and area < 100000:
        # cv2.drawContours(result, [c], -1, (36,255,12), 1)
        M = cv2.moments(c)
        cx = int(M['m10']/M['m00'])
        cy = int(M['m01']/M['m00'])
        coordinates.append((cx, cy))
        cv2.circle(result, (cx, cy), 3, (36,255,12), -1)
        cv2.putText(result, '({}, {})'.format(int(cx), int(cy)), (int(cx) -40, int(cy) -10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (36,255,12), 2)

print(coordinates)
cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.imshow('result', result)
cv2.waitKey()
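Once the centroids are stored as (x, y) tuples in `coordinates`, they can be fed straight into further math, as the asker wanted. A small sketch (using the coordinate list printed above) that computes the pixel distance between two room centers:

```python
import math

# Centroid coordinates as printed by the solution above.
coordinates = [(93, 241), (621, 202), (368, 202), (571, 80), (317, 79), (93, 118)]

def distance(p, q):
    """Euclidean distance between two centroid points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Distance in pixels between the first two room centers.
d = distance(coordinates[0], coordinates[1])
print(round(d, 1))  # 529.4
```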

【Discussion】:
