[Question title]: iOS camera face tracking (Swift 3, Xcode 8)
[Posted]: 2017-09-09 09:49:37
[Question description]:

I'm trying to make a simple camera app in which the front camera can detect faces. This should be simple:

  • Create a CameraView class that inherits from UIImageView and place it in the UI. Make sure it implements AVCaptureVideoDataOutputSampleBufferDelegate so it can process frames from the camera in real time.

    class CameraView: UIImageView, AVCaptureVideoDataOutputSampleBufferDelegate 
    
  • In handleCamera, a function called when the CameraView is instantiated, set up the AVCapture session. Add the input from the camera.

    override init(frame: CGRect) {
        super.init(frame:frame)
    
        handleCamera()
    }
    
    func handleCamera () {
        camera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera,
                                               mediaType: AVMediaTypeVideo, position: .front)
        session = AVCaptureSession()
    
        // Set recovered camera as an input device for the capture session
        do {
            try input = AVCaptureDeviceInput(device: camera);
        } catch _ as NSError {
            print ("ERROR: Front camera can't be used as input")
            input = nil
        }
    
        // Add the input from the camera to the capture session
        if (session?.canAddInput(input) == true) {
            session?.addInput(input)
        }
    
  • Create the output. Create a serial output queue to pass the data to, where it is then handled by the AVCaptureVideoDataOutputSampleBufferDelegate (in this case, the class itself). Add the output to the session.

        output = AVCaptureVideoDataOutput()
    
        output?.alwaysDiscardsLateVideoFrames = true    
        outputQueue = DispatchQueue(label: "outputQueue")
        output?.setSampleBufferDelegate(self, queue: outputQueue)
    
        // add front camera output to the session for use and modification
        if(session?.canAddOutput(output) == true){
            session?.addOutput(output)
        } // front camera can't be used as output, not working: handle error
        else {
            print("ERROR: Output not viable")
        }
    
  • Set up the camera preview view and run the session

        // Setup camera preview with the session input
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
        previewLayer?.frame = self.bounds
        self.layer.addSublayer(previewLayer!)
    
        // Process the camera and run it onto the preview
        session?.startRunning()
    
  • In the captureOutput function run by the delegate, convert the received sample buffer to a CIImage in order to detect faces. Give feedback if a face is found.

    func captureOutput(_ captureOutput: AVCaptureOutput!, didDrop sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)
    
        let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
        let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
        let faces = faceDetector?.features(in: cameraImage)
    
        for face in faces as! [CIFaceFeature] {
    
            print("Found bounds are \(face.bounds)")
    
            let faceBox = UIView(frame: face.bounds)
    
            faceBox.layer.borderWidth = 3
            faceBox.layer.borderColor = UIColor.red.cgColor
            faceBox.backgroundColor = UIColor.clear
            self.addSubview(faceBox)
    
            if face.hasLeftEyePosition {
                print("Left eye bounds are \(face.leftEyePosition)")
            }
    
            if face.hasRightEyePosition {
                print("Right eye bounds are \(face.rightEyePosition)")
            }
        }
    }
    

My problem: I can get the camera running, but despite the many different pieces of code I have tried from the internet, I have never been able to get captureOutput to detect a face. Either the app never enters the function, or it crashes because of a variable that doesn't work, most commonly the sampleBuffer variable being nil. What am I doing wrong?

[Question comments]:

Tags: ios swift camera face-detection core-image


    [Solution 1]:

    You need to change your captureOutput function parameters to the following: func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)

    Your captureOutput function is being called when a buffer is dropped, not when one is received from the camera.
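
    A minimal sketch of the corrected delegate method (Swift 3 signature; Swift 4 later renamed it to captureOutput(_:didOutput:from:)):

        // Fires for every frame the session *delivers*.
        // (The didDrop variant only fires for frames the session had to discard.)
        func captureOutput(_ captureOutput: AVCaptureOutput!,
                           didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                           from connection: AVCaptureConnection!) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)
            // ... run the CIDetector on cameraImage exactly as in the question ...
        }

    Note that this callback arrives on the queue passed to setSampleBufferDelegate, so any UIKit work (such as adding the faceBox subviews in the question's loop) should be dispatched back to the main queue.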

    [Comments]:

    • I actually found this out with the help of an iOS developer during my internship, but forgot to update the question. This was indeed all that was missing; thanks for looking through it, and hopefully this helps someone else.
    • Were you able to run the detection smoothly? I have even tried CIDetectorAccuracyLow, but the view still looks somewhat laggy when I turn on face detection in real time.
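
    One common mitigation for that lag (an assumption on my part, not something confirmed in this thread) is to throttle detection so the CIDetector only runs on every Nth frame while the preview layer keeps rendering at full rate:

        // Hypothetical throttle: analyse only every 5th delivered frame.
        private var frameCounter = 0

        func captureOutput(_ captureOutput: AVCaptureOutput!,
                           didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                           from connection: AVCaptureConnection!) {
            frameCounter += 1
            guard frameCounter % 5 == 0 else { return }  // skip 4 of every 5 frames
            // ... face detection as above ...
        }

    Reusing a single CIDetector instance instead of constructing one per frame should also help, since detector creation is relatively expensive.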