【Question Title】: TypeError: 'NoneType' object is not subscriptable (opencv-python / python face_recognition)
【Posted】: 2020-06-27 07:34:57
【Question】:

I have a face_recognition Python script. When I run it, it usually works fine, but sometimes it randomly shows this error:

  Traceback (most recent call last):
    File "faces_test.py", line 38, in <module>
      rgb_frame = frame[:, :, ::-1]
  TypeError: 'NoneType' object is not subscriptable

The error appears at random: sometimes the script runs fine, sometimes it fails. I don't understand what is happening, but I did earlier change the face_recognition tolerance in api.py from 0.6 to 0.4. I'm not sure whether that change could cause a random OpenCV error.

I want to run my script without getting any random error like this. Is there a solution?

Environment versions:

  • python = 3.8.2
  • opencv-contrib-python 4.2.0.32
  • opencv-python 4.2.0.32
  • face_recognition 1.3.0
  • face-recognition-models 0.3.0

My Python script:

import face_recognition
import cv2
import numpy as np

# This is a super simple (but slow) example of running face recognition on live video from your webcam.
# There's a second example that's a little more complicated but runs faster.

# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.

# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0 + cv2.CAP_DSHOW)

# Load a sample picture and learn how to recognize it.
obama_image = face_recognition.load_image_file("./training/obama.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]

# Load a second sample picture and learn how to recognize it.
biden_image = face_recognition.load_image_file("./training/biden.jpg")
biden_face_encoding = face_recognition.face_encodings(biden_image)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
    obama_face_encoding,
    biden_face_encoding
]
known_face_names = [
    "obama",
    "biden"
]

while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()

    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_frame = frame[:, :, ::-1]

    # Find all the faces and face encodings in the frame of video
    face_locations = face_recognition.face_locations(rgb_frame)
    face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)

    # Loop through each face in this frame of video
    for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
        # See if the face is a match for the known face(s)
        matches = face_recognition.compare_faces(known_face_encodings, face_encoding)

        name = "Unknown"

        # If a match was found in known_face_encodings, just use the first one.
        # if True in matches:
        #     first_match_index = matches.index(True)
        #     name = known_face_names[first_match_index]

        # Or instead, use the known face with the smallest distance to the new face
        face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
        best_match_index = np.argmin(face_distances)
        #print(face_distances)
        #print(best_match_index)
        if matches[best_match_index]:
            name = known_face_names[best_match_index]
            print(name)

        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    # Display the resulting image
    cv2.imshow('Video', frame)

    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()

【Comments】:

  • The error message shows that video_capture.read() returned None. You have to find out why.

标签: python python-3.x opencv webcam face-recognition


【Solution 1】:

Sorry for the late reply, but if you are still coding with OpenCV, maybe this will help.

First comment

You may not be getting any video from your webcam.

For example, look at this code:

# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0 + cv2.CAP_DSHOW)

Sometimes the camera initializes in slot "1" instead of "0", and it's important to keep in mind that these assignments can bounce around. If you want to be sure the camera is identified correctly, you can always define the camera location explicitly in a variable and add some error handling.

Example for the first point:

def setup_camera():
    """
    Initialize the camera by calling cv2.VideoCapture.
    Returns a capture object you can read frames from with capture.read().
    If the camera doesn't connect, prints an error and returns None.
    """
    device = '/dev/v4l/by-id/usb-HD_Camera_Manufacturer_USB_2.0_Camera-video-index0'

    # Note: cv2.VideoCapture does not raise on failure; check isOpened() instead.
    capture = cv2.VideoCapture(device)
    if capture.isOpened():
        print("Camera Connection Successful")
        return capture
    print("Camera Not Connected")
    capture.release()
    return None

This should at least get you started and tell you whether the camera is recording at all.

If you want an easy way to find the camera on your system (and you are on Linux - I'm an Ubuntu guy, apologies if you're on Windows), you can run a simple package like v4l-utils to track the camera down. Something like the following:

sudo apt-get install v4l-utils
v4l2-ctl --list-devices

should give you a response like:

USB 2.0 Camera: HD USB Camera (usb-0000:00:1a.0-1.5):
    /dev/video0

You can then run a command with udevadm to get more information about /dev/video0:

udevadm info --query=all --name=/dev/video0

This will return something like:

P: /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.5/1-1.5:1.0/video4linux/video0
N: video0
S: v4l/by-id/usb-HD_Camera_Manufacturer_USB_2.0_Camera-video-index0
S: v4l/by-path/pci-0000:00:1a.0-usb-0:1.5:1.0-video-index0
E: COLORD_DEVICE=1
E: COLORD_KIND=camera
E: DEVLINKS=/dev/v4l/by-path/pci-0000:00:1a.0-usb-0:1.5:1.0-video-index0 /dev/v4l/by-id/usb-HD_Camera_Manufacturer_USB_2.0_Camera-video-index0
E: DEVNAME=/dev/video0
E: DEVPATH=/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.5/1-1.5:1.0/video4linux/video0
E: ID_BUS=usb
E: ID_FOR_SEAT=video4linux-pci-0000_00_1a_0-usb-0_1_5_1_0
E: ID_MODEL=USB_2.0_Camera
E: ID_MODEL_ENC=USB\x202.0\x20Camera
E: ID_MODEL_ID=9230
E: ID_PATH=pci-0000:00:1a.0-usb-0:1.5:1.0
E: ID_PATH_TAG=pci-0000_00_1a_0-usb-0_1_5_1_0
E: ID_REVISION=0100
E: ID_SERIAL=HD_Camera_Manufacturer_USB_2.0_Camera
E: ID_TYPE=video
E: ID_USB_DRIVER=uvcvideo
E: ID_USB_INTERFACES=:0e0100:0e0200:
E: ID_USB_INTERFACE_NUM=00
E: ID_V4L_CAPABILITIES=:capture:
E: ID_V4L_PRODUCT=USB 2.0 Camera: HD USB Camera
E: ID_V4L_VERSION=2
E: ID_VENDOR=HD_Camera_Manufacturer
E: ID_VENDOR_ENC=HD\x20Camera\x20Manufacturer
E: ID_VENDOR_ID=05a3
E: MAJOR=81
E: MINOR=0
E: SUBSYSTEM=video4linux
E: TAGS=:uaccess:seat:
E: USEC_INITIALIZED=23692957

As you can see, the second value in DEVLINKS gives you the hardware location you can use to open the camera.
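Putting that together, here is a minimal sketch of choosing a camera source: prefer the stable by-id symlink from the output above when it exists, and otherwise fall back to index 0. The helper name `pick_camera_source` is my own, not part of OpenCV:

```python
import os

def pick_camera_source(by_id_path, fallback_index=0):
    """Prefer the stable /dev/v4l/by-id symlink when it exists (Linux);
    otherwise fall back to a numeric index for cv2.VideoCapture."""
    return by_id_path if os.path.exists(by_id_path) else fallback_index

# The by-id path below matches the DEVLINKS entry from the udevadm output.
source = pick_camera_source(
    '/dev/v4l/by-id/usb-HD_Camera_Manufacturer_USB_2.0_Camera-video-index0')
# capture = cv2.VideoCapture(source)  # then check capture.isOpened()
```

Because the by-id symlink is tied to the device, not to probe order, it does not bounce between slots the way numeric indices can.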

Second comment

OpenCV has built-in functions for converting frames. Both the approach you used and the example I'll use are covered in this post. To understand things a little more deeply, let's try it.

For example -

The approach you used in your code is fine:

# Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)

rgb_frame = frame[:, :, ::-1]

If that doesn't work, try this example for the second point:

rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
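Whichever conversion you use, it helps to guard against a failed read first, since frame is None exactly when video_capture.read() fails, and that is what triggers the error in the question. A minimal sketch of the conversion with that guard (pure NumPy, no camera needed; the helper name is my own):

```python
import numpy as np

def bgr_to_rgb(frame):
    """Reverse the channel axis (BGR -> RGB), guarding against a failed read."""
    if frame is None:          # video_capture.read() returned no frame
        return None
    return frame[:, :, ::-1]   # same result as cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

# A single blue-ish BGR pixel becomes the same pixel in RGB order.
pixel = np.array([[[255, 20, 10]]], dtype=np.uint8)
rgb = bgr_to_rgb(pixel)
```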

Conclusion

I hope this reply helps. I don't think this is a conversion problem; it is most likely a camera initialization problem. Try getting a more precise location for the camera feed, which will give you a more robust way to open the camera. Have a nice day!

【Comments】:

  • I had almost forgotten about my question because the project is on hold - on hold doesn't mean cancelled - so thank you for answering it. Now I understand the problem, and don't worry, I'm a Linux user too. I'll try your answer. Thanks, hopefully this fixes it.
【Solution 2】:

The main part is to test whether the object is None, as noted in the comments: "The error message shows that video_capture.read() returned None."

video_capture.read()

If the frame is None, break out of the for loop, as shown below.

for i in range(1, length - 1):
    ret, frame = video_capture.read()
    print(type(frame))

    if frame is None:
        print('Noneeeee')
        break
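
The same guard can also be wrapped in a small helper. In the sketch below, FakeCapture is a hypothetical stand-in for cv2.VideoCapture so the example can run without a webcam or video file; with a real capture you would pass the cv2.VideoCapture object instead:

```python
class FakeCapture:
    """Hypothetical stand-in for cv2.VideoCapture (no webcam assumed)."""
    def __init__(self, frames):
        self._frames = list(frames)

    def read(self):
        # Mimic VideoCapture.read(): (True, frame) while frames remain,
        # then (False, None) once the stream is exhausted.
        if self._frames:
            return True, self._frames.pop(0)
        return False, None

def safe_read(capture):
    """Return a frame only when the grab succeeded; otherwise None."""
    ret, frame = capture.read()
    if not ret or frame is None:
        return None
    return frame

cap = FakeCapture(['frame1'])
first = safe_read(cap)    # 'frame1'
second = safe_read(cap)   # None - the caller should break out of the loop here
```

Calling safe_read in the loop keeps the None check in one place, so `frame[:, :, ::-1]` is never reached with a missing frame.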

The complete code example is shown below. The reference for the whole code is https://expoleet.medium.com/flask-opencv-face-recognition-b9cbc3d1d280

import os 
import sys
from flask import Flask, request, redirect, url_for, render_template, flash, jsonify
from werkzeug.utils import secure_filename
import face_recognition
import cv2
import numpy as np
from flask_cors import CORS, cross_origin

UPLOAD_FOLDER = '/root/backendOne/pyflaskone/upload'

ALLOWED_EXTENSIONS = set(['txt','mp4'])
app = Flask(__name__)

CORS(app, support_credentials=True)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.secret_key = "secret key"
app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024

def allowed_file(filename):
    return '.' in filename and \
           filename.rsplit('.', 1)[1] in ALLOWED_EXTENSIONS
NEWPATH=""
@app.route("/", methods=['GET', 'POST'])
def index():
    
    if request.method == 'POST':
        print('request received....')
        
        image_file = request.files['video']
        print(image_file)
       
        filename = "aaa.mp4"
        
        image_file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
       
        print('file saved.....', filename)
        NEWPATH=videotest(os.path.join(app.config['UPLOAD_FOLDER'], filename))
        if(NEWPATH == "Unknown"):
            flash('No matches for attendance')
        else:    
            flash('File successfully uploaded', NEWPATH)
            print(NEWPATH,'NEWPATH....')
            
           
        return {"status": "Success", "userName": NEWPATH}
        
    
    #else 
    if request.method == 'GET':
        return """
        <!doctype html>
        <title>Upload new File</title>
        <h1>Upload new File</h1>
        <form action="" method=post enctype=multipart/form-data>
        <p><input type=file name=file>
            <input type=submit value=Upload>
        </form>
        <p>%s</p>
        """ % "<br>".join(os.listdir(app.config['UPLOAD_FOLDER'],))


def videotest(filename):
    video_capture = cv2.VideoCapture(filename)
    length = int(video_capture.get(cv2.CAP_PROP_FRAME_COUNT))
    fps = int(video_capture.get(cv2.CAP_PROP_FPS))

    harmesh = face_recognition.load_image_file("harmesh.jpg")
    hfencoding = face_recognition.face_encodings(harmesh)[0]

    prateek = face_recognition.load_image_file("prateek.jpeg")
    pfencoding = face_recognition.face_encodings(prateek)[0]

    krishna = face_recognition.load_image_file("BusinessCards.jpg")
    kfencoding = face_recognition.face_encodings(krishna)[0]
    
    balaji = face_recognition.load_image_file("Balaji-Photo.jpg")
    bfencoding = face_recognition.face_encodings(balaji)[0]

    known_face_encodings = [
        hfencoding,
        pfencoding,
        kfencoding,
        bfencoding
    ]
    known_face_names = [
        "Harmesh",
        "Prateek",
        "Krrrr",
        "Balaji"
    ]
    name = "Unknown"
    width  = int(video_capture.get(3))  # CAP_PROP_FRAME_WIDTH
    height = int(video_capture.get(4))  # CAP_PROP_FRAME_HEIGHT

    for i in range(1,length-1):
        
        ret, frame = video_capture.read()
        print(type(frame))
        
        if frame is None:
            print('Noneeeee')
            break
        rgb_frame = frame[:, :, ::-1]    
        
        face_locations = face_recognition.face_locations(rgb_frame)
        face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
        for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
        
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            print(best_match_index)
            if matches[best_match_index]:
                name = known_face_names[best_match_index]
                print('if matchesss', name)
    
    video_capture.release()
    #out.release()
    cv2.destroyAllWindows()
    print('final name', name)
    return name
    

    
if __name__ == "__main__":
    app.run(host ='0.0.0.0')
    

【Comments】:
