【Question Title】: glReadPixels returns zeroes with multi-sampling
【Posted】: 2013-06-06 00:31:52
【Question】:

I'm writing an OpenGL application for iOS, and I need to take an in-app screenshot of the rendered scene. Everything works fine when I don't use multisampling. But when I turn multisampling on, glReadPixels does not return the correct data (the scene itself is drawn correctly; the rendering quality is much better with multisampling).

I've already checked a bunch of similar questions on SO and elsewhere, but none of them solve my problem, because I'm already doing what they suggest:

  1. I take the screenshot after resolving the buffers but before presenting the renderbuffer.
  2. glReadPixels does not return an error.
  3. I even tried setting kEAGLDrawablePropertyRetainedBacking to YES and taking the screenshot after presenting the buffer; that doesn't work either.
  4. I'm using the OpenGL ES 1.x rendering API (context initialized with kEAGLRenderingAPIOpenGLES1).

Basically, I have no idea what's wrong. Posting a question on SO is my last resort.

Here is the relevant source code:

Creating the framebuffer

- (BOOL)createFramebuffer
{

    glGenFramebuffersOES(1, &viewFramebuffer);
    glGenRenderbuffersOES(1, &viewRenderbuffer);

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(CAEAGLLayer*)self.layer];
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);

    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    // Multisample support

    glGenFramebuffersOES(1, &sampleFramebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, sampleFramebuffer);

    glGenRenderbuffersOES(1, &sampleColorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, sampleColorRenderbuffer);
    glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES, backingWidth, backingHeight);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, sampleColorRenderbuffer);

    glGenRenderbuffersOES(1, &sampleDepthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, sampleDepthRenderbuffer);
    glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, sampleDepthRenderbuffer);

    // End of multisample support

    if(glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
        NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
        return NO;
    }

    return YES;
}

Resolving the buffers and taking the snapshot

    glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer);
    glResolveMultisampleFramebufferAPPLE();
    [self checkGlError];

    //glFinish();

    if (capture)
        captureImage = [self snapshot:self];    

    const GLenum discards[]  = {GL_COLOR_ATTACHMENT0_OES,GL_DEPTH_ATTACHMENT_OES};
    glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE,2,discards);

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);    

    [context presentRenderbuffer:GL_RENDERBUFFER_OES];    

Snapshot method (basically copied from Apple's documentation)

- (UIImage*)snapshot:(UIView*)eaglview
{

    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.    
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);


    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    [self checkGlError];
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
    [self checkGlError];

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}

【Question Discussion】:

    Tags: ios opengl-es


    【Solution 1】:

    After binding viewFramebuffer as the draw framebuffer and sampleFramebuffer as the read framebuffer, you resolve the multisampled buffer as usual by calling glResolveMultisampleFramebufferAPPLE. But did you remember to bind viewFramebuffer as the read framebuffer (glBindFramebuffer(GL_READ_FRAMEBUFFER, viewFramebuffer)) before calling glReadPixels? glReadPixels always reads from the currently bound read framebuffer, and if you don't change that binding after the multisample resolve, it will still be the multisampled framebuffer rather than the default one.
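    Concretely, the resolve-and-snapshot section from the question could be reordered along these lines (a sketch only, using the variable names from the question's code):

    ```objc
    // Resolve the multisampled framebuffer into viewFramebuffer, as before
    glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer);
    glResolveMultisampleFramebufferAPPLE();

    // Rebind the resolved framebuffer as the READ framebuffer before reading:
    // glReadPixels always reads from whatever is bound to GL_READ_FRAMEBUFFER
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, viewFramebuffer);

    if (capture)
        captureImage = [self snapshot:self];
    ```

    The glDiscardFramebufferEXT and presentRenderbuffer calls can then follow unchanged; the only functional change is the extra GL_READ_FRAMEBUFFER_APPLE bind before the snapshot.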

    I also find your glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer) calls a bit puzzling, since they don't really accomplish anything meaningful here: the currently bound renderbuffer only matters for functions that operate on renderbuffers (in practice, just glRenderbufferStorage). (It may be, though, that ES does something meaningful with it and that the binding is required for [context presentRenderbuffer:GL_RENDERBUFFER_OES] to work.) Nevertheless, perhaps you assumed that this binding also controls which buffer glReadPixels reads from, but that is not the case; it always reads from the framebuffer currently bound to GL_READ_FRAMEBUFFER.

    【Discussion】:

    • Thanks for your answer. I'll be at a computer in about 12 hours, and if it solves my problem I'll accept your answer. I wasn't binding a framebuffer after the multisample resolve, so your answer makes sense. For some reason I thought glResolveMultisampleFramebufferAPPLE did this automatically. (I've been looking for documentation on that method, without luck.) You're probably right about glBindRenderbufferOES, but all the code above is more or less copy-pasted from Apple's samples, so I was just playing it safe :)