【Posted】: 2017-09-09 09:49:37
【Problem description】:
I am trying to build a simple camera app in which the front camera can detect faces. This should be fairly straightforward:
- Create a CameraView class that inherits from UIImageView and place it in the UI. Make sure it implements AVCaptureVideoDataOutputSampleBufferDelegate so it can process frames from the camera in real time.
class CameraView: UIImageView, AVCaptureVideoDataOutputSampleBufferDelegate
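Since handleCamera is tied to instantiation, note that a UIImageView subclass only receives init(frame:) when it is built in code; a view loaded from a storyboard goes through init?(coder:) instead. Here is a minimal skeleton sketch covering both paths (the init?(coder:) override is an assumption of mine; the code below only shows init(frame:)):

    import UIKit
    import AVFoundation

    class CameraView: UIImageView, AVCaptureVideoDataOutputSampleBufferDelegate {
        // Created in code:
        override init(frame: CGRect) {
            super.init(frame: frame)
            handleCamera()
        }

        // Created from a storyboard/xib. This override is an assumption added
        // for illustration; the original post only overrides init(frame:).
        required init?(coder aDecoder: NSCoder) {
            super.init(coder: aDecoder)
            handleCamera()
        }

        func handleCamera() { /* session setup as in the following steps */ }
    }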
- In handleCamera, a function called when the CameraView is instantiated, set up the AVCapture session and add the input coming from the camera.
override init(frame: CGRect) {
    super.init(frame: frame)
    handleCamera()
}

func handleCamera() {
    camera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera,
                                           mediaType: AVMediaTypeVideo,
                                           position: .front)
    session = AVCaptureSession()

    // Set recovered camera as an input device for the capture session
    do {
        try input = AVCaptureDeviceInput(device: camera)
    } catch _ as NSError {
        print("ERROR: Front camera can't be used as input")
        input = nil
    }

    // Add the input from the camera to the capture session
    if (session?.canAddInput(input) == true) {
        session?.addInput(input)
    }
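One prerequisite the snippet above leaves implicit: on iOS 10 the app must declare NSCameraUsageDescription in Info.plist, and camera access can be requested explicitly before the session starts. A minimal sketch using the Swift 3 API (the explicit request is my addition, not part of the original post):

    // Ask for camera permission up front; without NSCameraUsageDescription in
    // Info.plist the process is killed on first camera access. (Suggested addition.)
    AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeVideo) { granted in
        if !granted {
            print("ERROR: Camera permission denied")
        }
    }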
- Create the output. Create a serial output queue to pass the data to, where it is then processed by the AVCaptureVideoDataOutputSampleBufferDelegate (in this case, the class itself). Add the output to the session.
    output = AVCaptureVideoDataOutput()
    output?.alwaysDiscardsLateVideoFrames = true
    outputQueue = DispatchQueue(label: "outputQueue")
    output?.setSampleBufferDelegate(self, queue: outputQueue)

    // add front camera output to the session for use and modification
    if (session?.canAddOutput(output) == true) {
        session?.addOutput(output)
    }
    // front camera can't be used as output, not working: handle error
    else {
        print("ERROR: Output not viable")
    }
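The snippet does not pin down the pixel format the output will deliver, while the delegate later wraps the buffer in a CIImage. A one-line sketch of requesting BGRA explicitly, a format that conversion handles directly (this setting is my suggestion, not in the original code):

    // Request BGRA frames so CMSampleBufferGetImageBuffer yields a pixel
    // buffer that CIImage(cvPixelBuffer:) accepts directly. (Suggested addition.)
    output?.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable: kCVPixelFormatType_32BGRA]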
- Set up the camera preview view and run the session.
    // Setup camera preview with the session input
    previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
    previewLayer?.frame = self.bounds
    self.layer.addSublayer(previewLayer!)

    // Process the camera and run it onto the preview
    session?.startRunning()
}
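previewLayer is given its frame exactly once, at init time, so if the view is resized later (rotation, Auto Layout) the preview can end up misaligned. A small sketch of keeping it in sync, assuming the same previewLayer property (this override is my addition):

    // Keep the preview layer glued to the view's bounds as layout changes.
    // (Illustrative addition; `previewLayer` is the property used above.)
    override func layoutSubviews() {
        super.layoutSubviews()
        previewLayer?.frame = self.bounds
    }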
- In the captureOutput function run by the delegate, convert the received sample buffer into a CIImage in order to detect faces. Give feedback if a face is found.
func captureOutput(_ captureOutput: AVCaptureOutput!, didDrop sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)

    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector?.features(in: cameraImage)

    for face in faces as! [CIFaceFeature] {
        print("Found bounds are \(face.bounds)")

        let faceBox = UIView(frame: face.bounds)
        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.red.cgColor
        faceBox.backgroundColor = UIColor.clear
        self.addSubview(faceBox)

        if face.hasLeftEyePosition {
            print("Left eye bounds are \(face.leftEyePosition)")
        }
        if face.hasRightEyePosition {
            print("Right eye bounds are \(face.rightEyePosition)")
        }
    }
}
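One detail worth flagging: setSampleBufferDelegate delivers its callbacks on the serial outputQueue created earlier, while addSubview is a UIKit call that must run on the main thread. A sketch of marshalling the UI work back inside the for loop, reusing the names above (the dispatch is my addition):

    // The delegate fires on the background outputQueue; UIKit work has to be
    // hopped back to the main thread. (Illustrative addition to the loop body.)
    DispatchQueue.main.async {
        let faceBox = UIView(frame: face.bounds)
        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.red.cgColor
        faceBox.backgroundColor = UIColor.clear
        self.addSubview(faceBox)
    }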
My problem: I can get the camera running, but despite trying many different pieces of code from around the internet, I have never been able to get captureOutput to detect a face. Either the app never enters the function, or it crashes on a variable that doesn't work, most commonly with the sampleBuffer variable being nil. What am I doing wrong?
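For reference, in the Swift 3 SDK this code targets, the delegate method invoked for frames that are actually delivered is captureOutput(_:didOutputSampleBuffer:from:); the didDrop variant implemented above only fires when frames are discarded. A minimal sketch of the delivered-frame callback (the guard is my phrasing; the detection body would be the one from the post):

    // Called for every delivered frame (Swift 3 signature); the didDrop
    // variant above is only invoked for frames that were dropped.
    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)
        // ... run the CIDetector face detection shown above on cameraImage ...
    }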
【Discussion】:
Tags: ios swift camera face-detection core-image