Title: iOS: Overlay two images with Alpha offscreen
Posted: 2016-02-24 13:30:49
Question:

Sorry for this question; I know there is a similar one, but I couldn't get its answer to work. Probably some silly mistake on my side ;-)

I want to overlay two images with alpha on iOS. The images come from two videos, read by an AssetReader and stored in two CVPixelBuffers. I know the alpha channel is not stored in video, so I get it from a third file. All the data looks fine. The problem is the overlay: if I draw on screen with [CIContext drawImage] everything is fine! But if I do it offscreen (the video format differs from the screen format), I can't get it to work:

1. drawImage does work, but only on screen
2. render:toCVPixelBuffer works, but ignores alpha
3. CGContextDrawImage seems to do nothing at all (not even an error message)

So can someone tell me where this goes wrong?
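(For reference, points 2 and 3 can sometimes be sidestepped by compositing in Core Image first and only then rendering offscreen, since `CISourceOverCompositing` respects premultiplied alpha. A minimal sketch; the names `frontImage`, `backImage`, `coreImageContext`, `nextImageBuffer`, `toRect` and `rgbSpace` are illustrative stand-ins for the question's objects, and the front image would need to be a CIImage that already carries its alpha channel:)

```objectivec
// Composite front over back in Core Image, then render the result
// into the destination CVPixelBuffer offscreen.
CIFilter *overFilter = [CIFilter filterWithName:@"CISourceOverCompositing"];
[overFilter setValue:frontImage forKey:kCIInputImageKey];           // image with alpha
[overFilter setValue:backImage  forKey:kCIInputBackgroundImageKey]; // opaque background
CIImage *composited = overFilter.outputImage;

// render:toCVPixelBuffer:bounds:colorSpace: writes the composited pixels
// into the buffer; lock it for the duration of the render.
CVPixelBufferLockBaseAddress(nextImageBuffer, 0);
[coreImageContext render:composited
         toCVPixelBuffer:nextImageBuffer
                  bounds:toRect
              colorSpace:rgbSpace];
CVPixelBufferUnlockBaseAddress(nextImageBuffer, 0);
```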

Initialization: ... (lots of code before this) Setting up the color space and bitmap context:

    if(outputContext)
    {
        CGContextRelease(outputContext);
        CGColorSpaceRelease(outputColorSpace);
    }
    outputColorSpace = CGColorSpaceCreateDeviceRGB();
    outputContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                          videoFormatSize.width, videoFormatSize.height,
                                          8, CVPixelBufferGetBytesPerRow(pixelBuffer),
                                          outputColorSpace,
                                          (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);

... (lots of code after this)

Drawing:

CIImage *backImageFromSample;
CGImageRef frontImageFromSample;
CVImageBufferRef nextImageBuffer = myPixelBufferArray[0];
CMSampleBufferRef sampleBuffer = NULL;
CMSampleTimingInfo timingInfo;

//draw the frame
CGRect toRect;
toRect.origin.x = 0;
toRect.origin.y = 0;
toRect.size = videoFormatSize;

//background image always full size, this part seems to work
if(drawBack)
{
    CVPixelBufferLockBaseAddress( backImageBuffer,  kCVPixelBufferLock_ReadOnly );
    backImageFromSample = [CIImage imageWithCVPixelBuffer:backImageBuffer];
    [coreImageContext render:backImageFromSample toCVPixelBuffer:nextImageBuffer bounds:toRect colorSpace:rgbSpace];
    CVPixelBufferUnlockBaseAddress( backImageBuffer,  kCVPixelBufferLock_ReadOnly );
}
else
    [self clearBuffer:nextImageBuffer];
//Front image doesn't seem to do anything
if(drawFront)
{
    unsigned long int numBytes = CVPixelBufferGetBytesPerRow(frontImageBuffer)*CVPixelBufferGetHeight(frontImageBuffer);
    CVPixelBufferLockBaseAddress( frontImageBuffer,  kCVPixelBufferLock_ReadOnly );

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, CVPixelBufferGetBaseAddress(frontImageBuffer), numBytes, NULL);
    frontImageFromSample = CGImageCreate (CVPixelBufferGetWidth(frontImageBuffer) , CVPixelBufferGetHeight(frontImageBuffer), 8, 32, CVPixelBufferGetBytesPerRow(frontImageBuffer), outputColorSpace, (CGBitmapInfo) kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst, provider, NULL, NO, kCGRenderingIntentDefault);
    CGContextDrawImage ( outputContext, toRect, frontImageFromSample);
    CVPixelBufferUnlockBaseAddress( frontImageBuffer, kCVPixelBufferLock_ReadOnly );
    CGImageRelease(frontImageFromSample);
    CGDataProviderRelease(provider);
}

Any ideas?

Comments:

  • If you are looking for a possibly better way to store alpha video in your app bundle, have a look at: stackoverflow.com/a/21079559/763355
  • Yes, I have seen the examples that split a video with alpha into two videos, one color + one grayscale, but I decided against that. The idea behind my implementation is to decode the video on the GPU and the mask on the CPU. That even lets me do this in real time with the camera...

Tags: ios drawing alpha


Solution 1:

So apparently I should just stop asking questions on Stack Overflow. Every time, after hours of debugging, I find the answer myself shortly afterwards. Sorry about that. The problem was in the initialization: you cannot call CVPixelBufferGetBaseAddress without locking the base address first O_o. The address comes back NULL, which apparently is allowed, and the subsequent operations then silently do nothing. So the correct code is:

    if(outputContext)
    {
        CGContextRelease(outputContext);
        CGColorSpaceRelease(outputColorSpace);
    }
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    outputColorSpace = CGColorSpaceCreateDeviceRGB();
    outputContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                          videoFormatSize.width, videoFormatSize.height,
                                          8, CVPixelBufferGetBytesPerRow(pixelBuffer),
                                          outputColorSpace,
                                          (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
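(Since both CVPixelBufferGetBaseAddress and CGBitmapContextCreate fail silently by returning NULL, a small defensive check makes this class of bug surface immediately instead of producing blank output. A sketch of the same initialization with explicit checks; the logging style is just an illustration:)

```objectivec
CVPixelBufferLockBaseAddress(pixelBuffer, 0);   // 0 = default read/write access
void *base = CVPixelBufferGetBaseAddress(pixelBuffer);
if (base == NULL) {
    // Without a prior lock this is exactly the silent failure from the question.
    NSLog(@"CVPixelBufferGetBaseAddress returned NULL - buffer not locked?");
} else {
    outputColorSpace = CGColorSpaceCreateDeviceRGB();
    outputContext = CGBitmapContextCreate(base,
                                          videoFormatSize.width, videoFormatSize.height,
                                          8, CVPixelBufferGetBytesPerRow(pixelBuffer),
                                          outputColorSpace,
                                          (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
    if (outputContext == NULL) {
        // CGBitmapContextCreate also returns NULL on bad parameters.
        NSLog(@"CGBitmapContextCreate failed - check width/bytesPerRow/pixel format");
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
```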

Discussion:
