[Posted]: 2021-04-26 07:36:11
[Question]:
I am trying to determine whether an iOS image captured from the camera is blurry. I already check the camera focus before taking the picture, but the photo can still come out blurry.
I have done this on Android using OpenCV: OpenCV with Laplacian formula to detect image is blur or not in Android
The result there is:
int soglia = -6118750;
if (maxLap <= soglia) { // blurry
I played around with this and lowered the threshold to -6718750.
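For reference, the idea behind that Android code is the Laplacian response of the image: a sharp image has strong second-derivative edges, a blurry one does not. The same idea can be expressed directly with OpenCV's iOS framework in an Objective-C++ (.mm) file. This is a sketch, not the linked answer's code; it uses the variance of the Laplacian rather than its maximum, and assumes a grayscale cv::Mat as input:

```objc
// Objective-C++ (.mm) sketch: Laplacian-variance blur score with OpenCV for iOS.
#import <opencv2/opencv.hpp>

static double laplacianVariance(const cv::Mat &gray) {
    cv::Mat lap;
    cv::Laplacian(gray, lap, CV_64F);   // second-derivative (edge) response
    cv::Scalar mean, stddev;
    cv::meanStdDev(lap, mean, stddev);  // mean and std dev of the Laplacian
    return stddev[0] * stddev[0];       // variance; low => likely blurry
}
```

The threshold to compare this score against is empirical and needs tuning on your own captures, just like the -6118750 value in the Android answer.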
For iOS there seems to be much less information on doing this. I have seen a few people try to solve it with OpenCV on iOS, but they did not seem to succeed.
I found this article that does it with Metal on iOS: https://medium.com/better-programming/blur-detection-via-metal-on-ios-16dd02cb1558
It is written in Swift, so I manually translated it line by line to Objective-C. I believe the translation is correct, but I am not sure whether the original code is correct, or generally applicable to camera-captured images.
In my testing it always gives me a result of 2 for both the mean and the variance. How can this be used to detect blurry images, or does anyone have other ideas?
- (BOOL) detectBlur: (CGImageRef)image {
    NSLog(@"detectBlur: %zux%zu", CGImageGetWidth(image), CGImageGetHeight(image));
    // Initialize Metal
    id<MTLDevice> device = MTLCreateSystemDefaultDevice();
    id<MTLCommandQueue> queue = [device newCommandQueue];
    // Create a command buffer for the transformation pipeline
    id<MTLCommandBuffer> commandBuffer = [queue commandBuffer];
    // These are the two built-in MPS kernels we will use
    MPSImageLaplacian *laplacian = [[MPSImageLaplacian alloc] initWithDevice: device];
    MPSImageStatisticsMeanAndVariance *meanAndVariance = [[MPSImageStatisticsMeanAndVariance alloc] initWithDevice: device];
    // Load the captured image as a texture
    MTKTextureLoader *textureLoader = [[MTKTextureLoader alloc] initWithDevice: device];
    id<MTLTexture> sourceTexture = [textureLoader newTextureWithCGImage: image options: nil error: nil];
    // Create the destination texture for the Laplacian transformation
    MTLTextureDescriptor *lapDesc = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat: sourceTexture.pixelFormat width: sourceTexture.width height: sourceTexture.height mipmapped: NO];
    lapDesc.usage = MTLTextureUsageShaderWrite | MTLTextureUsageShaderRead;
    id<MTLTexture> lapTex = [device newTextureWithDescriptor: lapDesc];
    // Encode this as the first transformation to perform
    [laplacian encodeToCommandBuffer: commandBuffer sourceTexture: sourceTexture destinationTexture: lapTex];
    // Create the destination texture for storing the mean and variance
    MTLTextureDescriptor *varianceTextureDescriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat: sourceTexture.pixelFormat width: 2 height: 1 mipmapped: NO];
    varianceTextureDescriptor.usage = MTLTextureUsageShaderWrite | MTLTextureUsageShaderRead;
    id<MTLTexture> varianceTexture = [device newTextureWithDescriptor: varianceTextureDescriptor];
    // Encode this as the second transformation
    [meanAndVariance encodeToCommandBuffer: commandBuffer sourceTexture: lapTex destinationTexture: varianceTexture];
    // Run the command buffer on the GPU and wait for the results
    [commandBuffer commit];
    [commandBuffer waitUntilCompleted];
    // The output is just 2 pixels: one holds the mean, the other the variance.
    // That is 2 pixels x 4 bytes each for an 8-bit RGBA/BGRA source format; the
    // original Swift code only allocated 2 bytes, which under-sizes the buffer
    // for the read below.
    NSMutableData *result = [NSMutableData dataWithLength: 2 * 4];
    char *bytes = result.mutableBytes;
    MTLRegion region = MTLRegionMake2D(0, 0, 2, 1);
    [varianceTexture getBytes: bytes bytesPerRow: 2 * 4 fromRegion: region mipmapLevel: 0];
    NSLog(@"resultBytes: %d", bytes[0]);
    NSLog(@"resultBytes: %d", bytes[1]);
    // Interpreting raw texture bytes as signed 8-bit integers throws away most
    // of the precision, which may be why the value hovers around 2.
    int variance = (int)bytes[1];
    return variance < 2;
}
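One likely reason the result is always a small integer is that the variance destination texture reuses the source's 8-bit pixel format, and the result bytes are then reinterpreted as signed chars. A sketch of an alternative is to give the statistics kernel a 32-bit float destination and read the two values back as floats; `MTLPixelFormatR32Float` here is my assumption based on the MPS mean-and-variance kernel's floating-point output, not something from the original article, and the surrounding variable names (`device`, `commandBuffer`, `lapTex`, `meanAndVariance`) are taken from the code above:

```objc
// Sketch: read mean and variance back as floats from a float-format destination.
MTLTextureDescriptor *varDesc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat: MTLPixelFormatR32Float
                                                       width: 2
                                                      height: 1
                                                   mipmapped: NO];
varDesc.usage = MTLTextureUsageShaderWrite | MTLTextureUsageShaderRead;
id<MTLTexture> varianceTexture = [device newTextureWithDescriptor: varDesc];

[meanAndVariance encodeToCommandBuffer: commandBuffer
                         sourceTexture: lapTex
                    destinationTexture: varianceTexture];
[commandBuffer commit];
[commandBuffer waitUntilCompleted];

// results[0] = mean, results[1] = variance, both in normalized [0, 1] units.
float results[2] = {0.0f, 0.0f};
[varianceTexture getBytes: results
              bytesPerRow: 2 * sizeof(float)
               fromRegion: MTLRegionMake2D(0, 0, 2, 1)
              mipmapLevel: 0];
// BOOL blurry = results[1] < someTunedThreshold; // threshold must be tuned empirically
```

If this works, the variance should vary continuously instead of snapping to small integers, which would give the finer granularity asked about below.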
[Comments]:
-
Here is one blur-detection approach: stackoverflow.com/questions/60587428/… The answer there is implemented in C++, but you may find it useful.
-
Thanks, but any input on the iOS Metal code above? It seems to give me values of 1-5 — usually 2 if blurry, 2-5 if not — but I think I need finer granularity. Why is the variance from the code above so low? Your link seems to give variances of 0-500.
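The low numbers compared to the C++ answer may partly be a scale difference: Metal's normalized texture formats report channel values in [0, 1], while OpenCV computes variance on 0-255 pixel values, and variance scales with the square of the value range. A hypothetical rescaling for comparison, assuming `normalizedVariance` holds a variance read back from a float-format MPS destination texture:

```objc
// Hypothetical: normalizedVariance was read from a float-format destination
// texture, so it is in normalized [0, 1] units.
float normalizedVariance = 0.002f; // illustrative value, not a measured result
// Scale by 255^2 to compare with variances computed on 8-bit pixel values,
// as in the OpenCV-based answers.
float opencvScaleVariance = normalizedVariance * 255.0f * 255.0f;
```

The useful comparison is still relative — tune the threshold on known-sharp and known-blurry captures from your own camera pipeline.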
-
Could a Swift version be written that gives that finer granularity?
Tags: ios objective-c swift opencv metal