【Question Title】: Render off-screen SCNScene into UIImage
【Posted】: 2014-08-26 16:04:38
【Question】:

How can I render an off-screen SCNScene into a UIImage?

I know SCNView provides a -snapshot method, but unfortunately that doesn't work for off-screen views. A similar question has been asked before, and one answer suggested using glReadPixels to read the bitmap data out of OpenGL, but that approach doesn't work for an off-screen scene.

I tried rendering into a GLKView's context with an SCNRenderer, without success.

【Comments】:

  • The duplicate question has answers both for pre-Yosemite OS X and for iOS 8 / OS X 10.10.
  • @DavidRönnqvist That's not true. Those answers are about on-screen SCNViews, and as I said, I need to render the scene off-screen on iOS. The -snapshot method doesn't work for off-screen views.
  • As far as I know, glReadPixels should still be a valid approach for off-screen OpenGL.
  • glReadPixels doesn't work with SCNView either.
  • I've reopened your question and edited it to highlight what you've already tried. I still think it would be valuable to see some of the setup you're doing for the scene, as well as some of the approaches that failed.

Tags: ios opengl-es scenekit


【Solution 1】:

Swift 4 with SCNRenderer:

You can easily render an off-screen SCNScene into a UIImage using SCNRenderer's snapshot method.

A few caveats: this uses Metal. I don't know exactly where the device/iOS-version cutoff is, but you'll need a newer device. You also won't be able to run it in the Simulator.

Step 1 - set up your scene as usual:

// Set up your scene which won't be displayed
let hiddenScene = SCNScene()
// insert code to set up your nodes, cameras, and lights here

Step 2 - set up the SCNRenderer -- the renderer will be nil on the Simulator:

// Set up the renderer -- this returns nil on simulator
let renderer = SCNRenderer(device: MTLCreateSystemDefaultDevice(), options: nil)
renderer!.scene = hiddenScene

Step 3 - render the scene into a UIImage:

// You can use zero for renderTime unless you are using animations,
// in which case, renderTime should be the current scene time.
let renderTime = TimeInterval(0)

// Output size
let size = CGSize(width:300, height: 150)

// Render the image
let image = renderer!.snapshot(atTime: renderTime, with: size,
                antialiasingMode: SCNAntialiasingMode.multisampling4X)

If you're running animations, you'll need to increment renderTime, or set it to the time index you want to render. For example, to render the frame 4 seconds into the scene, set it to 4. This only affects animations -- it won't go back in time and show you a historical view of the scene.

For example, if you're running animations with SCNNode.runAction, you may want to increment renderTime every 1/60th of a second (0.016667 seconds), so that whenever you decide to render, you get an up-to-date render time:

var timer: Timer?
var renderTime = TimeInterval(0)

timer = Timer.scheduledTimer(withTimeInterval: 0.016667, repeats: true) { [weak self] _ in
    self?.renderTime += 0.016667
}

Using CADisplayLink is probably a better timing solution, though.
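A minimal sketch of that approach (`RenderClock` is a hypothetical helper name, not from the original answer; the accumulation is factored into `advance(by:)` purely for illustration):

```swift
import UIKit

final class RenderClock {
    private var displayLink: CADisplayLink?
    private(set) var renderTime = TimeInterval(0)

    // Factored out so the accumulation logic is independent of the display timer.
    func advance(by delta: TimeInterval) {
        renderTime += delta
    }

    func start() {
        // CADisplayLink fires in sync with the display refresh, so the
        // accumulated time tracks real frame boundaries instead of Timer drift.
        displayLink = CADisplayLink(target: self, selector: #selector(tick(_:)))
        displayLink?.add(to: .main, forMode: .common)
    }

    @objc private func tick(_ link: CADisplayLink) {
        // targetTimestamp - timestamp is the current frame interval (~1/60 s at 60 Hz).
        advance(by: link.targetTimestamp - link.timestamp)
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }
}
```

You would then pass the clock's `renderTime` as the time argument to `snapshot(atTime:with:antialiasingMode:)` whenever you decide to render.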

Here's a very quick and dirty implementation example.

【Discussion】:

    【Solution 2】:

    Here's a little piece of code I wrote. You can use it as a reference.

    public extension SCNRenderer {
    
        public func renderToImageSize(size: CGSize, floatComponents: Bool, atTime time: NSTimeInterval) -> CGImage? {
    
            var thumbnailCGImage: CGImage?
    
            let width = GLsizei(size.width), height = GLsizei(size.height)
            let samplesPerPixel = 4
    
            #if os(iOS)
                let oldGLContext = EAGLContext.currentContext()
                let glContext = unsafeBitCast(context, EAGLContext.self)
    
                EAGLContext.setCurrentContext(glContext)
                objc_sync_enter(glContext)
            #elseif os(OSX)
                let oldGLContext = CGLGetCurrentContext()
                let glContext = unsafeBitCast(context, CGLContextObj.self)
    
                CGLSetCurrentContext(glContext)
                CGLLockContext(glContext)
            #endif
    
            // set up the OpenGL buffers
            var thumbnailFramebuffer: GLuint = 0
            glGenFramebuffers(1, &thumbnailFramebuffer)
            glBindFramebuffer(GLenum(GL_FRAMEBUFFER), thumbnailFramebuffer); checkGLErrors()
    
            var colorRenderbuffer: GLuint = 0
            glGenRenderbuffers(1, &colorRenderbuffer)
            glBindRenderbuffer(GLenum(GL_RENDERBUFFER), colorRenderbuffer)
            if floatComponents {
                glRenderbufferStorage(GLenum(GL_RENDERBUFFER), GLenum(GL_RGBA16F), width, height)
            } else {
                glRenderbufferStorage(GLenum(GL_RENDERBUFFER), GLenum(GL_RGBA8), width, height)
            }
            glFramebufferRenderbuffer(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_RENDERBUFFER), colorRenderbuffer); checkGLErrors()
    
            var depthRenderbuffer: GLuint = 0
            glGenRenderbuffers(1, &depthRenderbuffer)
            glBindRenderbuffer(GLenum(GL_RENDERBUFFER), depthRenderbuffer)
            glRenderbufferStorage(GLenum(GL_RENDERBUFFER), GLenum(GL_DEPTH_COMPONENT24), width, height)
            glFramebufferRenderbuffer(GLenum(GL_FRAMEBUFFER), GLenum(GL_DEPTH_ATTACHMENT), GLenum(GL_RENDERBUFFER), depthRenderbuffer); checkGLErrors()
    
            let framebufferStatus = Int32(glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER)))
            assert(framebufferStatus == GL_FRAMEBUFFER_COMPLETE)
            if framebufferStatus != GL_FRAMEBUFFER_COMPLETE {
                return nil
            }
    
            // clear buffer
            glViewport(0, 0, width, height)
            glClear(GLbitfield(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)); checkGLErrors()
    
            // render
            renderAtTime(time); checkGLErrors()
    
            // create the image
            if floatComponents { // float components (16-bits of actual precision)
    
                // slurp bytes out of OpenGL
                typealias ComponentType = Float
    
                var imageRawBuffer = [ComponentType](count: Int(width * height) * samplesPerPixel * sizeof(ComponentType), repeatedValue: 0)
                glReadPixels(GLint(0), GLint(0), width, height, GLenum(GL_RGBA), GLenum(GL_FLOAT), &imageRawBuffer)
    
                // flip image vertically — OpenGL has a different 'up' than CoreGraphics
                let rowLength = Int(width) * samplesPerPixel
                for rowIndex in 0..<(Int(height) / 2) {
                    let baseIndex = rowIndex * rowLength
                    let destinationIndex = (Int(height) - 1 - rowIndex) * rowLength
    
                    swap(&imageRawBuffer[baseIndex..<(baseIndex + rowLength)], &imageRawBuffer[destinationIndex..<(destinationIndex + rowLength)])
                }
    
                // make the CGImage
                var imageBuffer = vImage_Buffer(
                    data: UnsafeMutablePointer<Float>(imageRawBuffer),
                    height: vImagePixelCount(height),
                    width: vImagePixelCount(width),
                    rowBytes: Int(width) * sizeof(ComponentType) * samplesPerPixel)
    
                var format = vImage_CGImageFormat(
                    bitsPerComponent: UInt32(sizeof(ComponentType) * 8),
                    bitsPerPixel: UInt32(sizeof(ComponentType) * samplesPerPixel * 8),
                    colorSpace: nil, // defaults to sRGB
                    bitmapInfo: CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.rawValue | CGBitmapInfo.ByteOrder32Little.rawValue | CGBitmapInfo.FloatComponents.rawValue),
                    version: UInt32(0),
                    decode: nil,
                    renderingIntent: kCGRenderingIntentDefault)
    
                var error: vImage_Error = 0
                thumbnailCGImage = vImageCreateCGImageFromBuffer(&imageBuffer, &format, nil, nil, vImage_Flags(kvImagePrintDiagnosticsToConsole), &error)!.takeRetainedValue()
    
            } else { // byte components
    
                // slurp bytes out of OpenGL
                typealias ComponentType = UInt8
    
                var imageRawBuffer = [ComponentType](count: Int(width * height) * samplesPerPixel * sizeof(ComponentType), repeatedValue: 0)
                glReadPixels(GLint(0), GLint(0), width, height, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), &imageRawBuffer)
    
                // flip image vertically — OpenGL has a different 'up' than CoreGraphics
                let rowLength = Int(width) * samplesPerPixel
                for rowIndex in 0..<(Int(height) / 2) {
                    let baseIndex = rowIndex * rowLength
                    let destinationIndex = (Int(height) - 1 - rowIndex) * rowLength
    
                    swap(&imageRawBuffer[baseIndex..<(baseIndex + rowLength)], &imageRawBuffer[destinationIndex..<(destinationIndex + rowLength)])
                }
    
                // make the CGImage
                var imageBuffer = vImage_Buffer(
                data: UnsafeMutablePointer<ComponentType>(imageRawBuffer), // note: ComponentType (UInt8) here, not Float
                    height: vImagePixelCount(height),
                    width: vImagePixelCount(width),
                    rowBytes: Int(width) * sizeof(ComponentType) * samplesPerPixel)
    
                var format = vImage_CGImageFormat(
                    bitsPerComponent: UInt32(sizeof(ComponentType) * 8),
                    bitsPerPixel: UInt32(sizeof(ComponentType) * samplesPerPixel * 8),
                    colorSpace: nil, // defaults to sRGB
                    bitmapInfo: CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.rawValue | CGBitmapInfo.ByteOrder32Big.rawValue),
                    version: UInt32(0),
                    decode: nil,
                    renderingIntent: kCGRenderingIntentDefault)
    
                var error: vImage_Error = 0
                thumbnailCGImage = vImageCreateCGImageFromBuffer(&imageBuffer, &format, nil, nil, vImage_Flags(kvImagePrintDiagnosticsToConsole), &error)!.takeRetainedValue()
            }
    
            #if os(iOS)
                objc_sync_exit(glContext)
                if oldGLContext != nil {
                    EAGLContext.setCurrentContext(oldGLContext)
                }
            #elseif os(OSX)
                CGLUnlockContext(glContext)
                if oldGLContext != nil {
                    CGLSetCurrentContext(oldGLContext)
                }
            #endif
    
            return thumbnailCGImage
        }
    }
    
    
    func checkGLErrors() {
        var glError: GLenum
        var hadError = false
        do {
            glError = glGetError()
            if glError != 0 {
                println(String(format: "OpenGL error %#x", glError))
                hadError = true
            }
        } while glError != 0
        assert(!hadError)
    }
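    For completeness, a possible call site for the extension above, in modern Swift spelling (a sketch, not tested against current SDKs; it assumes an iOS SCNRenderer created from an EAGLContext, and that `scene` is your already-configured SCNScene):

    ```swift
    import SceneKit
    import UIKit

    // Sketch: build a GL-backed off-screen renderer and wrap the result in a UIImage.
    guard let glContext = EAGLContext(api: .openGLES2) else { fatalError("no GL context") }
    let renderer = SCNRenderer(context: glContext, options: nil)
    renderer.scene = scene // `scene` is assumed to be your configured SCNScene

    if let cgImage = renderer.renderToImageSize(size: CGSize(width: 300, height: 150),
                                                floatComponents: false,
                                                atTime: 0) {
        let thumbnail = UIImage(cgImage: cgImage)
        // use `thumbnail` ...
    }
    ```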
    

    【Discussion】:

    • When I tried this code in Xcode 9, I got a lot of compile errors. I spent some time resolving them, but there's still one compile error left that I couldn't fix. Do you have an updated version of your code? Thanks!!!
    • I haven't made any updates to this version, since I abandoned GL/EAGL years ago. I do have a version that works with Metal, though.
    • Then would you mind posting your Metal version here? Thanks a lot!