【Posted】: 2016-02-24 13:30:49
【Question】:
Sorry to ask — I know there is a similar question, but I couldn't get an answer from it. It may well be some silly mistake on my side ;-)
I want to overlay two images with alpha on iOS. The images come from two videos, read by an AssetReader and stored in two CVPixelBuffers. Since the alpha channel is not stored in the video, I take it from a third file. All the data looks fine. The problem is the overlay: if I draw to the screen with [CIContext drawImage], everything works! But I need to do it off-screen, because the video format differs from the screen format, and there I can't get it to work:
1. drawImage does work, but only on screen
2. render:toCVPixelBuffer works, but ignores the alpha channel
3. CGContextDrawImage appears to do nothing at all (not even an error message)
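For clarity, the per-pixel result I'm after is ordinary premultiplied source-over compositing — the blend that render:toCVPixelBuffer would need to perform if it honored the alpha channel. A minimal standalone C sketch (assuming 32-bit premultiplied BGRA pixels with alpha in the fourth byte; the function names are mine, not from any framework):

```c
#include <stdint.h>
#include <stddef.h>

/* One channel of premultiplied source-over: dst = src + dst * (1 - src_alpha).
   The "+127" rounds the fixed-point division by 255. */
static uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t src_alpha) {
    return (uint8_t)(src + (dst * (255 - src_alpha) + 127) / 255);
}

/* Composite a premultiplied BGRA source buffer over a destination buffer. */
void composite_over(uint8_t *dst, const uint8_t *src, size_t pixel_count) {
    for (size_t i = 0; i < pixel_count; i++) {
        const uint8_t *s = src + 4 * i;
        uint8_t *d = dst + 4 * i;
        uint8_t a = s[3];               /* alpha is byte 3 in B,G,R,A memory order */
        for (int c = 0; c < 4; c++)
            d[c] = blend_channel(s[c], d[c], a);
    }
}
```

With a fully opaque source pixel the destination becomes the source pixel; with a fully transparent one the destination is left unchanged.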
So can someone tell me where this goes wrong?
Initialization: ... (lots of code before this) set up the color space and bitmap context:
    if (outputContext)
    {
        CGContextRelease(outputContext);
        CGColorSpaceRelease(outputColorSpace);
    }
    outputColorSpace = CGColorSpaceCreateDeviceRGB();
    outputContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                          videoFormatSize.width,
                                          videoFormatSize.height,
                                          8,
                                          CVPixelBufferGetBytesPerRow(pixelBuffer),
                                          outputColorSpace,
                                          (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
... (lots of code after this)
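One detail I tried to be careful about in the setup above: CVPixelBufferGetBytesPerRow can be larger than width * 4 because of row-alignment padding, so any manual pixel addressing has to use the stride, never the width. The addressing rule can be sketched in plain C (the helper name is illustrative, assuming 4-byte BGRA pixels):

```c
#include <stdint.h>
#include <stddef.h>

/* Address pixel (x, y) in a buffer whose rows are bytes_per_row apart.
   bytes_per_row may exceed width * 4 due to padding, so it — not the
   width — determines the row offset. */
uint8_t *pixel_at(uint8_t *base, size_t bytes_per_row, size_t x, size_t y) {
    return base + y * bytes_per_row + x * 4;   /* 4 bytes per BGRA pixel */
}
```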
Drawing:
    CIImage *backImageFromSample;
    CGImageRef frontImageFromSample;
    CVImageBufferRef nextImageBuffer = myPixelBufferArray[0];
    CMSampleBufferRef sampleBuffer = NULL;
    CMSampleTimingInfo timingInfo;

    //draw the frame
    CGRect toRect;
    toRect.origin.x = 0;
    toRect.origin.y = 0;
    toRect.size = videoFormatSize;

    //background image is always full size; this part seems to work
    if (drawBack)
    {
        CVPixelBufferLockBaseAddress(backImageBuffer, kCVPixelBufferLock_ReadOnly);
        backImageFromSample = [CIImage imageWithCVPixelBuffer:backImageBuffer];
        [coreImageContext render:backImageFromSample
                 toCVPixelBuffer:nextImageBuffer
                          bounds:toRect
                      colorSpace:rgbSpace];
        CVPixelBufferUnlockBaseAddress(backImageBuffer, kCVPixelBufferLock_ReadOnly);
    }
    else
    {
        [self clearBuffer:nextImageBuffer];
    }

    //front image doesn't seem to do anything
    if (drawFront)
    {
        unsigned long int numBytes = CVPixelBufferGetBytesPerRow(frontImageBuffer) * CVPixelBufferGetHeight(frontImageBuffer);
        CVPixelBufferLockBaseAddress(frontImageBuffer, kCVPixelBufferLock_ReadOnly);
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, CVPixelBufferGetBaseAddress(frontImageBuffer), numBytes, NULL);
        frontImageFromSample = CGImageCreate(CVPixelBufferGetWidth(frontImageBuffer),
                                             CVPixelBufferGetHeight(frontImageBuffer),
                                             8, 32,
                                             CVPixelBufferGetBytesPerRow(frontImageBuffer),
                                             outputColorSpace,
                                             (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst,
                                             provider, NULL, NO, kCGRenderingIntentDefault);
        CGDataProviderRelease(provider);  //the image retains the provider
        CGContextDrawImage(outputContext, inrect, frontImageFromSample);
        CVPixelBufferUnlockBaseAddress(frontImageBuffer, kCVPixelBufferLock_ReadOnly);
        CGImageRelease(frontImageFromSample);
    }
Any ideas?
【Comments】:
-
If you are looking for a possibly better way to store alpha video in your app bundle, have a look at: stackoverflow.com/a/21079559/763355
-
Yes, I've seen the examples that split a video with alpha into two videos, one color + one grayscale, but I decided against that. The idea behind my implementation is to decode the video on the GPU and process the mask on the CPU. That even lets me do this live from the camera...
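To sketch what I mean by processing the mask on the CPU: the grayscale mask is folded into the BGRA frame as a premultiplied alpha channel, i.e. each color channel is scaled by the mask value and the mask becomes the alpha byte (a standalone C sketch; the names are mine, assuming an 8-bit mask and B,G,R,A memory layout):

```c
#include <stdint.h>
#include <stddef.h>

/* Fold an 8-bit grayscale mask into a BGRA buffer as premultiplied alpha:
   scale B, G, R by the mask value (rounded fixed-point divide by 255)
   and store the mask as the alpha byte. */
void apply_mask_premultiplied(uint8_t *bgra, const uint8_t *mask,
                              size_t pixel_count) {
    for (size_t i = 0; i < pixel_count; i++) {
        uint8_t a = mask[i];
        uint8_t *p = bgra + 4 * i;
        for (int c = 0; c < 3; c++)                 /* scale B, G, R */
            p[c] = (uint8_t)((p[c] * a + 127) / 255);
        p[3] = a;                                   /* store alpha */
    }
}
```

A pixel with mask value 0 becomes fully transparent black; a pixel with mask value 255 keeps its colors and becomes fully opaque.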