[Posted]: 2020-01-18 04:25:01
[Question]:
I am using the Vision framework to detect faces in an image, but I cannot find the input-image requirements in Apple's documentation. Usually when working with machine-learning models, and especially with .mlmodel files in Core ML, the required input is described explicitly, for example Image (Color 112 x 112).
let image: UIImage = someUIImage()
// image is non-optional, so unwrap its cgImage instead of force-unwrapping an optional chain
guard let cgImage = image.cgImage else { return }
let handler = VNImageRequestHandler(ciImage: CIImage(cgImage: cgImage))
let faceRequest = VNDetectFaceLandmarksRequest(completionHandler: { (request: VNRequest, error: Error?) in
    guard let observations = request.results as? [VNFaceObservation] else {
        print("unexpected result type from VNFaceObservation")
        return
    }
    self.doSomething(with: observations)
})
do {
    try handler.perform([faceRequest])
} catch {
    print("Face detection failed: \(error)")
}
[Discussion]:
Tags: swift deep-learning face-detection coreml apple-vision