【Title】: Is programmatically inverting the colors of an image possible?
【Posted】: 2011-07-12 23:40:20
【Question】:

I'd like to take a picture in iOS and invert the colours.

【Comments】:

    Tags: iphone ios ipad uiimage


    【Solution 1】:

    To expand on quixoto's answer, and because I have relevant source code from a project of my own: if you need to do CPU-based pixel manipulation, then the following, which I've annotated, should do the trick:

    @implementation UIImage (NegativeImage)
    
    - (UIImage *)negativeImage
    {
        // get width and height as integers, since we'll be using them as
        // array subscripts, etc, and this'll save a whole lot of casting
        CGSize size = self.size;
        int width = size.width;
        int height = size.height;
    
        // Create a suitable RGB+alpha bitmap context in BGRA colour space
        CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
        unsigned char *memoryPool = (unsigned char *)calloc(width*height*4, 1);
        CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8, width * 4, colourSpace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colourSpace);
    
        // draw the current image to the newly created context
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);
    
        // run through every pixel, a scan line at a time...
        for(int y = 0; y < height; y++)
        {
            // get a pointer to the start of this scan line
            unsigned char *linePointer = &memoryPool[y * width * 4];
    
            // step through the pixels one by one...
            for(int x = 0; x < width; x++)
            {
                // get RGB values. We're dealing with premultiplied alpha
                // here, so we need to divide by the alpha channel (if it
                // isn't zero, of course) to get uninflected RGB. We
                // multiply by 255 to keep precision while still using
                // integers
                int r, g, b; 
                if(linePointer[3])
                {
                    r = linePointer[0] * 255 / linePointer[3];
                    g = linePointer[1] * 255 / linePointer[3];
                    b = linePointer[2] * 255 / linePointer[3];
                }
                else
                    r = g = b = 0;
    
                // perform the colour inversion
                r = 255 - r;
                g = 255 - g;
                b = 255 - b;
    
                // multiply by alpha again, divide by 255 to undo the
                // scaling before, store the new values and advance
                // the pointer we're reading pixel data from
                linePointer[0] = r * linePointer[3] / 255;
                linePointer[1] = g * linePointer[3] / 255;
                linePointer[2] = b * linePointer[3] / 255;
                linePointer += 4;
            }
        }
    
        // get a CG image from the context, wrap that into a
        // UIImage
        CGImageRef cgImage = CGBitmapContextCreateImage(context);
        UIImage *returnImage = [UIImage imageWithCGImage:cgImage];
    
        // clean up
        CGImageRelease(cgImage);
        CGContextRelease(context);
        free(memoryPool);
    
        // and return
        return returnImage;
    }
    
    @end
    

    So that adds a category method to UIImage that:

    1. Creates a clear CoreGraphics bitmap context whose memory it can access
    2. Draws the UIImage into it
    3. Runs through every pixel, converting from premultiplied alpha to uninflected RGB, inverting each channel separately, multiplying by alpha again and storing back
    4. Gets an image from the context and wraps it into a UIImage
    5. Cleans up after itself and returns the UIImage
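
    The un-premultiply / invert / re-premultiply arithmetic of step 3 can be isolated as a small plain-C helper. This is a sketch, not part of the original answer; the function name is mine:

    ```c
    #include <stdio.h>

    /* Invert one premultiplied-RGBA pixel in place, mirroring the inner
       loop above. p points to 4 bytes: R, G, B, A (alpha-premultiplied). */
    static void invert_premultiplied_pixel(unsigned char *p)
    {
        int a = p[3];
        for (int c = 0; c < 3; c++) {
            /* un-premultiply (guarding against a == 0), invert,
               then re-premultiply */
            int v = a ? p[c] * 255 / a : 0;
            p[c] = (unsigned char)((255 - v) * a / 255);
        }
    }

    int main(void)
    {
        unsigned char opaque[4] = {128, 0, 255, 255};
        invert_premultiplied_pixel(opaque);
        printf("%d %d %d %d\n", opaque[0], opaque[1], opaque[2], opaque[3]);

        unsigned char clear[4] = {10, 20, 30, 0}; /* fully transparent */
        invert_premultiplied_pixel(clear);
        printf("%d %d %d %d\n", clear[0], clear[1], clear[2], clear[3]);
        return 0;
    }
    ```

    Note that a fully transparent pixel stays all-zero, which is why the alpha guard matters.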

    【Discussion】:

    • Thanks for this code. How can I make it work for Retina devices? e.g. if (retinaScale > 0.0) { UIGraphicsBeginImageContextWithOptions(image.size, NO, retinaScale); } else { UIGraphicsBeginImageContext(image.size); }
    • @elprl it should work without any difference on Retina versus non-Retina devices; it operates directly on the UIImage, and it's the UIImageView that has to deal with the actual practicalities of display.
    • To make this code Retina-aware, use: int width = size.width * self.scale; int height = size.height * self.scale;
    • Hi Tommy, I need to set an alpha colour for this... this code works, but instead of white I want a transparent colour, so I guess it might work if we set an alpha colour... Could you suggest how to do that? Thanks in advance @Tommy
    • @VincentTourraine and replace UIImage *returnImage = [UIImage imageWithCGImage:cgImage] with [UIImage imageWithCGImage:cgImage scale:self.scale orientation:UIImageOrientationUp]
    【Solution 2】:

    Using Core Image:

    #import <CoreImage/CoreImage.h>
    
    @implementation UIImage (ColorInverse)
    
    + (UIImage *)inverseColor:(UIImage *)image {
        CIImage *coreImage = [CIImage imageWithCGImage:image.CGImage];
        CIFilter *filter = [CIFilter filterWithName:@"CIColorInvert"];
        [filter setValue:coreImage forKey:kCIInputImageKey];
        CIImage *result = [filter valueForKey:kCIOutputImageKey];
        return [UIImage imageWithCIImage:result];
    }
    
    @end
    

    【Discussion】:

    • The last line would be better as [UIImage imageWithCIImage:result scale:image.scale orientation:image.imageOrientation]; so that the original image's scale and orientation are preserved.
    • How do you use/call this?
    【Solution 3】:

    Sure, it's possible. One way is to use the "difference" blend mode (kCGBlendModeDifference). See this question (among others) for the outline of the code to set up the image processing. Use your image as the bottom (base) image, then draw a pure white bitmap on top of it.

    You can also do the per-pixel operation manually by getting the CGImageRef, drawing it into a bitmap context, and then looping over the pixels in the bitmap context yourself.
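
    The difference blend this answer relies on computes |bottom - top| per channel, so compositing solid white (255) on top reduces to 255 - value, i.e. an inversion. A minimal sketch of that arithmetic (plain C, names are mine, not the CoreGraphics implementation):

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Model of kCGBlendModeDifference for a single 8-bit channel:
       result = |bottom - top|. */
    static unsigned char difference_blend(unsigned char bottom, unsigned char top)
    {
        return (unsigned char)abs((int)bottom - (int)top);
    }

    int main(void)
    {
        /* blending any value against white (255) inverts it */
        printf("%d\n", difference_blend(200, 255));
        printf("%d\n", difference_blend(0, 255));
        /* blending against black leaves the value unchanged */
        printf("%d\n", difference_blend(200, 0));
        return 0;
    }
    ```

    This is why the overlay must be pure white: anything less than 255 only partially inverts each channel.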

    【Discussion】:

    • Please provide code that could replace @Tommy's answer (which I'm using at the moment).
    • Nice, neat solution.
    【Solution 4】:

    Swift 3 update: (from @BadPirate's answer)

    extension UIImage {
        func inverseImage(cgResult: Bool) -> UIImage? {
            let coreImage = UIKit.CIImage(image: self)
            guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
            filter.setValue(coreImage, forKey: kCIInputImageKey)
            guard let result = filter.value(forKey: kCIOutputImageKey) as? UIKit.CIImage else { return nil }
            if cgResult { // I've found that UIImages that are based on CIImages don't work properly with a lot of calls
                return UIImage(cgImage: CIContext(options: nil).createCGImage(result, from: result.extent)!)
            }
            return UIImage(ciImage: result)
        }
    }
    

    【Discussion】:

      【Solution 5】:

      Tommy's answer is the answer, but I'd like to point out that for bigger images this could be a really intensive and time-consuming task. There are two frameworks that can help you with image manipulation:

      1. Core Image
      2. Accelerate

        And it's worth mentioning the amazing GPUImage framework from Brad Larson. GPUImage makes the routines run on the GPU using custom fragment shaders in an OpenGL ES 2.0 environment, with a remarkable speed improvement. With Core Image you can choose CPU or GPU, provided an invert filter is available; with Accelerate, all routines run on the CPU, but use vector-math image processing.
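
        For a sense of why this vectorizes so well: on non-premultiplied data the whole job collapses to 255 - v applied to every byte, which is exactly the kind of loop Accelerate's vImage or a GPUImage fragment shader executes in bulk. A plain-C sketch of that inner loop (illustrative only, not the Accelerate API):

        ```c
        #include <stdio.h>

        /* Invert a buffer of non-premultiplied 8-bit channel values.
           One subtraction per byte: trivially SIMD-friendly. */
        static void invert_buffer(unsigned char *buf, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                buf[i] = (unsigned char)(255 - buf[i]);
        }

        int main(void)
        {
            unsigned char pixels[6] = {0, 64, 128, 192, 255, 30};
            invert_buffer(pixels, sizeof pixels);
            for (size_t i = 0; i < 6; i++)
                printf("%d ", pixels[i]);
            printf("\n");
            return 0;
        }
        ```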

      【Discussion】:

      【Solution 6】:

      Created a Swift extension to do just this. Also, because CIImage-based UIImages break down (most libraries assume a CGImage is set), I added an option to return a UIImage based on a modified CGImage instead:

      extension UIImage {
          func inverseImage(cgResult: Bool) -> UIImage? {
              let coreImage = UIKit.CIImage(image: self)
              guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
              filter.setValue(coreImage, forKey: kCIInputImageKey)
              guard let result = filter.valueForKey(kCIOutputImageKey) as? UIKit.CIImage else { return nil }
              if cgResult { // I've found that UIImage's that are based on CIImages don't work with a lot of calls properly
                  return UIImage(CGImage: CIContext(options: nil).createCGImage(result, fromRect: result.extent))
              }
              return UIImage(CIImage: result)
          }
      }
      

      【Discussion】:

        【Solution 7】:

        Swift 5 update of @MLBDG's answer

        extension UIImage {
            func inverseImage(cgResult: Bool) -> UIImage? {
                let coreImage = CIImage(image: self) // self.ciImage is nil for CGImage-backed UIImages
                guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
                filter.setValue(coreImage, forKey: kCIInputImageKey)
                guard let result = filter.value(forKey: kCIOutputImageKey) as? UIKit.CIImage else { return nil }
                if cgResult { // I've found that UIImage's that are based on CIImages don't work with a lot of calls properly
                    return UIImage(cgImage: CIContext(options: nil).createCGImage(result, from: result.extent)!)
                }
                return UIImage(ciImage: result)
            }
        }
        

        【Discussion】:
