Posted: 2018-10-21 21:19:55
Question:
I'm trying to create a ruler app using ARKit and SceneKit. I decided to generate the ruler image programmatically based on the measured distance.
Here is the extension I use to draw the ruler:
extension UIImage {

    static let dashLineWidth: CGFloat = 2.0
    static let dashDistance: CGFloat = 163.0 / 25.4
    static let rulerFont: UIFont = .systemFont(ofSize: 15.0, weight: .regular)
    static let attributes: [NSAttributedStringKey: Any] = [
        NSAttributedStringKey.font: rulerFont,
        NSAttributedStringKey.foregroundColor: UIColor.black
    ]

    static func drawRuler(width: CGFloat) -> UIImage? {
        let cm = width * 100 // width in centimeters
        let size = CGSize(width: dashDistance * cm * 10, height: 50.0)

        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }

        // White background
        let background = UIBezierPath(rect: CGRect(origin: .zero, size: size))
        context.addPath(background.cgPath)
        context.setFillColor(UIColor.white.cgColor)
        context.fillPath()

        // One dash per millimeter; taller dashes every 5 mm and 10 mm
        var i: CGFloat = 0.0
        var counter: Int = 0
        while i < size.width {
            let isLongDash = counter % 10 == 0
            let isPartDash = counter % 5 == 0
            let dashHeight: CGFloat = size.height * (isLongDash ? 0.25 : isPartDash ? 0.15 : 0.07)

            UIColor.black.setFill()
            UIRectFill(CGRect(x: i - dashLineWidth / 2, y: 0.0, width: dashLineWidth, height: dashHeight))

            // Centimeter label under every long dash
            if isLongDash && counter != 0 {
                let value = "\(counter / 10)"
                let valueSize: CGSize = value.size(withAttributes: attributes)
                value.draw(at: CGPoint(x: i - dashLineWidth / 2 - valueSize.width / 2, y: dashHeight + 5.0), withAttributes: attributes)
            }

            i += dashDistance
            counter += 1
        }

        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }

    func crop(to width: CGFloat, initialWidth: CGFloat) -> UIImage? {
        let rect = CGRect(x: 0, y: 0, width: (width / initialWidth) * size.width * scale, height: size.height * scale)
        guard let croppedCGImage: CGImage = cgImage?.cropping(to: rect) else { return nil }
        return UIImage(cgImage: croppedCGImage)
    }
}
To get better performance, I initially draw the image only once, for 0.5 meters, and then on each update crop just the part that needs to be displayed on the SCNNode.
Here is what I do in my SCNNode class:
var ruler: SCNNode = initRuler()
var initialWidth: CGFloat = 0.5
var rulerImage: UIImage? = UIImage.drawRuler(width: initialWidth)

func updateRuler() {
    guard let geometry = ruler.geometry as? SCNBox else {
        fatalError("Geometry is not SCNBox")
    }
    let width = geometry.width // in meters

    if width > initialWidth - 0.05 {
        initialWidth += 0.5
        rulerImage = UIImage.drawRuler(width: initialWidth)
    }

    guard let croppedImage = rulerImage?.crop(to: width, initialWidth: initialWidth) else { return }
    let texture = SKTexture(image: croppedImage)
    let material = SCNMaterial()
    material.diffuse.contents = texture
    geometry.materials = [material]
}
Everything works fine while the SCNNode is small, but as it grows the image grows with it, and at around 1.3 meters I get a crash:

validateTextureDimensions:759: failed assertion `MTLTextureDescriptor has width (16501) greater than the maximum allowed size of 16384.'
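The number in the assertion lines up with the arithmetic: the backing image's pixel width grows linearly with the measured length. A quick sketch of that calculation, reusing the dashDistance constant from the extension above and assuming a @2x screen scale (an assumption; the actual scale is device-dependent):

```swift
import Foundation

// Same constant as in the extension above: points per millimeter.
let dashDistance: CGFloat = 163.0 / 25.4

// Pixel width of the drawn ruler image for a given length in meters,
// assuming a @2x screen scale (hypothetical; depends on the device).
func pixelWidth(forMeters meters: CGFloat, screenScale: CGFloat = 2.0) -> CGFloat {
    let cm = meters * 100
    return dashDistance * cm * 10 * screenScale
}

print(pixelWidth(forMeters: 0.5)) // well under the 16384 limit
print(pixelWidth(forMeters: 1.3)) // already exceeds 16384
```

So somewhere around 1.3 meters the cropped texture crosses Metal's maximum texture width, which is consistent with the crash.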
Any help would be greatly appreciated. I'm wondering whether I could split the image into several parts and assign them to the material. Or is there another way to do this?
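To make the splitting idea concrete, here is a minimal sketch (my own, untested against SceneKit) of how the wide image could be cut into crop rectangles that each stay under the texture limit; the maxTextureWidth constant and the tileRects helper are hypothetical names, and the 16384 limit itself varies by GPU:

```swift
import Foundation

// Limit reported by the assertion; assumed here, since older GPUs allow less (e.g. 8192).
let maxTextureWidth: CGFloat = 16384

// Split a full-image pixel width into crop rectangles, each narrow enough
// to be uploaded as its own texture.
func tileRects(imageWidth: CGFloat, imageHeight: CGFloat,
               maxWidth: CGFloat = maxTextureWidth) -> [CGRect] {
    var rects: [CGRect] = []
    var x: CGFloat = 0
    while x < imageWidth {
        let w = min(maxWidth, imageWidth - x)
        rects.append(CGRect(x: x, y: 0, width: w, height: imageHeight))
        x += w
    }
    return rects
}
```

Each rect could then be fed to `cgImage?.cropping(to:)` and wrapped in its own `SCNMaterial`, with one box segment (or geometry element) per tile.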
Comments:
- Take a look at this; it may be something you can use, or recreate in SceneKit: cimgf.com/2011/03/01/subduing-catiledlayer
Tags: ios swift textures scenekit