【Question Title】: Making an Automatic Annotation Tool
【Posted】: 2021-01-10 20:16:42
【Question】:

I'm trying to build an automatic annotation tool for YOLO object detection that uses a pre-trained model to generate the detections. I've pieced together some code, but I'm a bit stuck. As far as I know, YOLO expects annotations in this format:

18 0.154167 0.431250 0.091667 0.612500

but my code gives me:

0.5576068858305613, 0.5410404056310654, -0.7516528169314066, 0.33822181820869446

I don't understand why the third number comes out negative, or whether I need to truncate my floats. If anyone can help, I'm posting my code below. Once this project is finished I'll publish the whole thing in case anyone wants to use it.

def convert(size, box):
    dw = 1./size[0]
    dh = 1./size[1]
    x = (box[0] + box[1])/2.0
    y = (box[2] + box[3])/2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x, y, w, h)

The function above converts coordinates to YOLO format; `size` takes `(w, h)` and `box` takes `(x, x + w, y, y + h)`.
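The sign problem can be reproduced directly: `convert` expects `box = (x, x + w, y, y + h)`, but the raw YOLO `detection[0:4]` is `(center_x, center_y, width, height)`, so passing it straight through yields a negative third value whenever `center_x > center_y`. A minimal sketch with hypothetical numbers:

```python
def convert(size, box):
    # expects box = (x_min, x_max, y_min, y_max) and size = (img_w, img_h)
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0
    y = (box[2] + box[3]) / 2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    return (x * dw, y * dh, w * dw, h * dh)

size = (416, 416)              # hypothetical image size
x, y, w, h = 100, 50, 80, 60   # hypothetical top-left corner plus width/height

# correct argument order: (x_min, x_max, y_min, y_max)
good = convert(size, (x, x + w, y, y + h))

# feeding YOLO's (cx, cy, w, h) directly reproduces the negative values:
# here the "width" slot becomes (cy - cx)/img_w, negative since cx > cy
bad = convert(size, (x + w / 2, y + h / 2, w, h))
print(good)  # all four values positive
print(bad)   # third and fourth values negative
```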

net = cv2.dnn.readNetFromDarknet(config_path, weights_path)
# path_name = "images/city_scene.jpg"
path_name = image
image = cv2.imread(path_name)
file_name = os.path.basename(path_name)
filename, ext = file_name.split(".")

h, w = image.shape[:2]
# create 4D blob
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)

# sets the blob as the input of the network
net.setInput(blob)

# get all the layer names
ln = net.getLayerNames()
ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]
# feed forward (inference) and get the network output
# measure how much it took in seconds
start = time.perf_counter()
layer_outputs = net.forward(ln)
time_took = time.perf_counter() - start
print(f"Time took: {time_took:.2f}s")

boxes, confidences, class_ids = [], [], []
b = []
a = []
# loop over each of the layer outputs
for output in layer_outputs:
    # loop over each of the object detections
    for detection in output:
        # extract the class id (label) and confidence (as a probability) of
        # the current object detection
        scores = detection[5:]
        class_id = np.argmax(scores)
        confidence = scores[class_id]
        # discard weak predictions by ensuring the detected
        # probability is greater than the minimum probability
        if confidence > CONFIDENCE:
            # scale the bounding box coordinates back relative to the
            # size of the image, keeping in mind that YOLO actually
            # returns the center (x, y)-coordinates of the bounding
            # box followed by the boxes' width and height
            box = detection[0:4] * np.array([w, h, w, h])
            (centerX, centerY, width, height) = box.astype("float")

            # use the center (x, y)-coordinates to derive the top
            # and left corner of the bounding box
            x = int(centerX - (width / 2))
            y = int(centerY - (height / 2))
            a = w, h
            convert(a, box)
            boxes.append([x, y, int(width), int(height)])

            confidences.append(float(confidence))
            class_ids.append(class_id)

idxs = cv2.dnn.NMSBoxes(boxes, confidences, SCORE_THRESHOLD, IOU_THRESHOLD)

font_scale = 1
thickness = 1

# ensure at least one detection exists
if len(idxs) > 0:
    # loop over the indexes we are keeping
    for i in idxs.flatten():
        # extract the bounding box coordinates
        x, y = boxes[i][0], boxes[i][1]
        w, h = boxes[i][2], boxes[i][3]
        # draw a bounding box rectangle and label on the image
        color = [int(c) for c in colors[class_ids[i]]]
        ba = w, h
        print(w, h)

        cv2.rectangle(image, (x, y), (x + w, y + h), color=color, thickness=thickness)
        text = "{}".format(labels[class_ids[i]])
        conf = "{:.3f}".format(confidences[i], x, y)
        int1, int2 = (x, y)
        print(text)
        # print(convert(ba, box))

        # b = w, h
        # print(convert(b, boxes))
        # print(convert(a, box))  # coordinates
        ivan = str(int1)

        b.append([text, ivan])
        # a.append(float(conf))
        # print(a)

        # calculate text width & height to draw the transparent boxes as background of the text
        (text_width, text_height) = \
            cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, fontScale=font_scale, thickness=thickness)[0]
        text_offset_x = x
        text_offset_y = y - 5
        box_coords = ((text_offset_x, text_offset_y), (text_offset_x + text_width + 2, text_offset_y - text_height))
        overlay = image.copy()
        cv2.rectangle(overlay, box_coords[0], box_coords[1], color=color, thickness=cv2.FILLED)
        # add opacity (transparency to the box)
        image = cv2.addWeighted(overlay, 0.6, image, 0.4, 0)
        # now put the text (label: confidence %)
        cv2.putText(image, text, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX,
                    fontScale=font_scale, color=(0, 0, 0), thickness=thickness)

    text = "{}".format(labels[class_ids[i]], x, y)
    conf = "{:.3f}".format(confidences[i])
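For reference, the last step the tool still needs — writing each kept box back out as a YOLO label line — could be sketched like this (a standalone sketch; `yolo_lines` is a hypothetical helper mirroring the pixel-space `boxes`/`class_ids` lists built above):

```python
# Hypothetical sketch: turn kept pixel boxes [x, y, w, h] plus class ids
# into normalized YOLO label lines "class x_center y_center width height".
def yolo_lines(boxes, class_ids, img_w, img_h):
    lines = []
    for (x, y, bw, bh), cid in zip(boxes, class_ids):
        xc = (x + bw / 2.0) / img_w   # center x, normalized to [0, 1]
        yc = (y + bh / 2.0) / img_h   # center y, normalized to [0, 1]
        lines.append("{} {:.6f} {:.6f} {:.6f} {:.6f}".format(
            cid, xc, yc, bw / img_w, bh / img_h))
    return lines

# example: one hypothetical detection on a 416x416 image
out = yolo_lines([[100, 50, 80, 60]], [18], 416, 416)
print(out)
```

One such line per detection, written to `<filename>.txt` next to the image, is exactly the format shown at the top of the question.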

【Comments】:

Tags: python yolo


【Solution 1】:

The problem is the indexing in your function:

    box[0]=>center x
    box[1]=>center y
    box[2]=>width of your bbox
    box[3]=>height of your bbox
    

According to the documentation, a YOLO label looks like this:

    <object-class> <x> <y> <width> <height>
    

where x and y are the center of the bounding box. So your code should look like this:

    def convert(size, box):
        dw = 1./size[0]
        dh = 1./size[1]
        x = box[0]*dw
        y = box[1]*dh
        w = box[2]*dw
        h = box[3]*dh
        return (x, y, w, h)
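As a quick sanity check of the corrected function, with hypothetical numbers on a 416x416 image:

```python
def convert(size, box):
    # box = (center_x, center_y, width, height) in pixels, size = (img_w, img_h)
    dw = 1. / size[0]
    dh = 1. / size[1]
    return (box[0] * dw, box[1] * dh, box[2] * dw, box[3] * dh)

# hypothetical detection: center (140, 80), 80 wide, 60 tall
result = convert((416, 416), (140, 80, 80, 60))
print(result)  # all four values stay in [0, 1]; none can go negative
```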
    

    【Discussion】:

    • Hi Amir, your answer works, but only when I take the output straight from the layers, where it shows all detections. Now I only need the ones I've filtered, and when I try the conversion in my code the numbers come out too large (7.724965194175983, 0.7653914836415073, 1.0260124042116363, 0.9967035996286492). I've posted my full code; could you suggest where I should call your function? Thanks.
    【Solution 2】:

    Maybe this will help you:

    def bounding_box_2_yolo(obj_detections, frame, index):
        yolo_info = []
        for object_det in obj_detections:
            left_x, top_y, right_x, bottom_y = object_det.boxes
            xmin = left_x
            xmax = right_x
            ymin = top_y
            ymax = bottom_y

            xcen = float((xmin + xmax)) / 2 / frame.shape[1]
            ycen = float((ymin + ymax)) / 2 / frame.shape[0]

            w = float((xmax - xmin)) / frame.shape[1]
            h = float((ymax - ymin)) / frame.shape[0]

            yolo_info.append((index, xcen, ycen, w, h))

        return yolo_info
    

    labelimg has a lot of things you can reuse: https://github.com/tzutalin/labelImg/blob/master/libs/yolo_io.py
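The helper above can be exercised with stub objects (the `Det` and `Frame` namedtuples here are hypothetical stand-ins; all that is assumed is a `.boxes` tuple of `(left, top, right, bottom)` pixel corners and a numpy-style `.shape` of `(height, width, channels)`):

```python
from collections import namedtuple

def bounding_box_2_yolo(obj_detections, frame, index):
    # same logic as the answer: corner box -> normalized (index, xc, yc, w, h)
    yolo_info = []
    for object_det in obj_detections:
        left_x, top_y, right_x, bottom_y = object_det.boxes
        xcen = float(left_x + right_x) / 2 / frame.shape[1]
        ycen = float(top_y + bottom_y) / 2 / frame.shape[0]
        w = float(right_x - left_x) / frame.shape[1]
        h = float(bottom_y - top_y) / frame.shape[0]
        yolo_info.append((index, xcen, ycen, w, h))
    return yolo_info

Det = namedtuple("Det", "boxes")      # hypothetical detection stub
Frame = namedtuple("Frame", "shape")  # stand-in for a numpy image

frame = Frame(shape=(416, 416, 3))    # (height, width, channels)
info = bounding_box_2_yolo([Det(boxes=(100, 50, 180, 110))], frame, 18)
print(info)  # one tuple: (18, xcen, ycen, w, h), all coords normalized
```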

    【Discussion】:

    • Hi Rafael, I've updated my question; can you suggest how to use this with my code?