【Question Title】: iPhone Watermark on recorded Video
【Posted】: 2011-11-04 13:41:30
【Question】:

In my application I need to capture video and add a watermark to that video. The watermark should be text (a timestamp and notes). I saw some code that uses the "QTKit" framework, but I've read that that framework is not available for iPhone.

Thanks in advance.

【Comments】:

  • To anyone who needs more information on this topic: I know this question is old, but for reference, see this post (stackoverflow.com/a/21886295/894671)
  • @GuntisTreulands Thanks for adding more information, I hope it helps people..
  • @DilipRajkumar Can you suggest how to set a proper frame for the CATextLayer?
  • @DipenChudasama, sorry, I'm not doing any iOS development at the moment, so I've honestly forgotten how. I hope someone else can help..
  • OK NP, I solved the problem, thanks for replying.

Tags: iphone watermark video-watermarking


【Solution 1】:

Adding a watermark is quite a bit simpler than that. You just need to use a CALayer and an AVVideoCompositionCoreAnimationTool. The code below can be copied and assembled in this same order; I've inserted some comments in between for better understanding.

Assuming you've already recorded the video, we first create the AVURLAsset:

AVURLAsset* videoAsset = [[AVURLAsset alloc]initWithURL:outputFileURL options:nil];
AVMutableComposition* mixComposition = [AVMutableComposition composition];

AVMutableCompositionTrack *compositionVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo  preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipVideoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration) 
                               ofTrack:clipVideoTrack
                                atTime:kCMTimeZero error:nil];

[compositionVideoTrack setPreferredTransform:[[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] preferredTransform]]; 

With just this code you could already export the video, but first we want to add the layer with the watermark. Note that some of this code may look redundant, but it's necessary for everything to work.

First we create the layer with the watermark image:

UIImage *myImage = [UIImage imageNamed:@"icon.png"];
CALayer *aLayer = [CALayer layer];
aLayer.contents = (id)myImage.CGImage;
aLayer.frame = CGRectMake(5, 25, 57, 57); //Needed for proper display. We are using the app icon (57x57). If you use 0,0 you will not see it
aLayer.opacity = 0.65; //Feel free to alter the alpha here

If we want text instead of an image:

CATextLayer *titleLayer = [CATextLayer layer];
titleLayer.string = @"Text goes here";
titleLayer.font = @"Helvetica";
titleLayer.fontSize = videoSize.height / 6; //videoSize is defined below from the asset's naturalSize
//titleLayer.shadowOpacity = 0.5; //Optional shadow
titleLayer.alignmentMode = kCAAlignmentCenter;
titleLayer.bounds = CGRectMake(0, 0, videoSize.width, videoSize.height / 6); //You may need to adjust this for proper display

The following code arranges the layers in the proper order:

CGSize videoSize = [videoAsset naturalSize]; 
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];   
parentLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
videoLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:aLayer];
[parentLayer addSublayer:titleLayer]; //ONLY IF WE ADDED TEXT

Now we create the composition and add the instructions that insert the layers:

AVMutableVideoComposition* videoComp = [[AVMutableVideoComposition videoComposition] retain];
videoComp.renderSize = videoSize;
videoComp.frameDuration = CMTimeMake(1, 30);
videoComp.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];

/// instruction
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mixComposition duration]);
AVAssetTrack *videoTrack = [[mixComposition tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction* layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComp.instructions = [NSArray arrayWithObject: instruction];

Now we're ready to export:

_assetExport = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetMediumQuality];//AVAssetExportPresetPassthrough   
_assetExport.videoComposition = videoComp;

NSString* videoName = @"mynewwatermarkedvideo.mov";

NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:videoName];
NSURL    *exportUrl = [NSURL fileURLWithPath:exportPath];

if ([[NSFileManager defaultManager] fileExistsAtPath:exportPath]) 
{
    [[NSFileManager defaultManager] removeItemAtPath:exportPath error:nil];
}

_assetExport.outputFileType = AVFileTypeQuickTimeMovie; 
_assetExport.outputURL = exportUrl;
_assetExport.shouldOptimizeForNetworkUse = YES;

[strRecordedFilename setString: exportPath];

[_assetExport exportAsynchronouslyWithCompletionHandler:
 ^(void ) {
     [_assetExport release];
     //YOUR FINALIZATION CODE HERE
 }       
 ];   

[audioAsset release]; //only if you created a separate audio asset
[videoAsset release];

【Comments】:

  • Thanks Julio.. I've since removed this feature from my app, but this code will really help some people. If I implement the feature again I'll use your code. It will definitely help people. Thanks a lot..
  • No problem. Glad to be able to help :)
  • One problem I've found with this method is that it crashes if the app is backgrounded.
  • Did any of you get the text layer to work? I tried but couldn't get the text to show. See my question: stackoverflow.com/questions/10281872/…
  • When I record a video with a uiimagepickercontroller and use the code above, it rotates my video to landscape. I checked that when saving directly to the album it is saved correctly, just like a video recorded by the default camera, but after applying this code it's saved in landscape mode. Any help?
【Solution 2】:

Use AVFoundation. I'd suggest grabbing frames with AVCaptureVideoDataOutput, then overlaying the captured frame with the watermark image, and finally writing the captured and processed frames to a file using AVAssetWriter.

Search Stack Overflow; there are tons of great examples detailing how to do each of these things. I haven't seen any single code example that shows exactly the effect you want, but you should be able to mix and match fairly easily.

EDIT:

Take a look at these links:

iPhone: AVCaptureSession capture output crashing (AVCaptureVideoDataOutput) - this post may be helpful by nature of containing relevant code.

AVCaptureVideoDataOutput will return images as CMSampleBufferRefs. Convert them to CGImageRefs using the following code:

- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    CVPixelBufferLockBaseAddress(imageBuffer,0);        // Lock the image buffer 

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);   // Get information of the image 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    CGImageRef newImage = CGBitmapContextCreateImage(newContext); 
    CGContextRelease(newContext); 

    CGColorSpaceRelease(colorSpace); 
    CVPixelBufferUnlockBaseAddress(imageBuffer,0); 
    /* CVBufferRelease(imageBuffer); */  // do not call this!

    return newImage;
}

From there you'd convert that to a UIImage:

  UIImage *img = [UIImage imageWithCGImage:yourCGImage];  

and then use

[img drawInRect:CGRectMake(x,y,height,width)]; 

to draw the frame into a context, draw the watermark PNG over it, and then add the processed images to your output video using an AVAssetWriter. I'd suggest adding them in real time so you don't fill up memory with tons of UIImages.
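The overlay step described above can be sketched as follows (shown in Swift for brevity; the function name and the bottom-right corner placement are illustrative assumptions, not from the original answer):

```swift
// Sketch: draw a captured frame into a bitmap context, then draw the
// watermark PNG on top of it, returning the composited image.
func overlay(frame: UIImage, watermark: UIImage) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, frame.scale)
    defer { UIGraphicsEndImageContext() }
    frame.draw(in: CGRect(origin: .zero, size: frame.size))
    // Place the watermark in the bottom-right corner with a small inset.
    let inset: CGFloat = 8
    let rect = CGRect(x: frame.size.width - watermark.size.width - inset,
                      y: frame.size.height - watermark.size.height - inset,
                      width: watermark.size.width,
                      height: watermark.size.height)
    watermark.draw(in: rect, blendMode: .normal, alpha: 0.65)
    return UIGraphicsGetImageFromCurrentImageContext()
}
```

Each composited UIImage would then be appended to the AVAssetWriter as described below.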

How do I export UIImage array as a movie? - this post shows how to add the UIImages you've processed to a video for a given duration.

This should get you well on your way to watermarking your video. Remember to practice good memory management, because images leaking in at 20-30 fps is a great way to crash an app.

【Comments】:

  • Thanks James. It would be great if you could give me a starting point. Thanks again.
  • See my other comments above.
  • Did you have a chance to try this? Any luck?
  • @James Can you suggest how to set a proper frame for the CATextLayer? stackoverflow.com/questions/31780060/…
  • @James How do I add a watermark at a specific time? My video is 60 seconds, and I want the watermark shown from second 10 to second 50. Please help me.
【Solution 3】:

The answer given by @Julio works fine in the Objective-C case. Here is the same code base for Swift 3.0:

WATERMARKING and generating a SQUARE or CROPPED video like Instagram

Getting the output file from the documents directory and creating the AVURLAsset:

    //output file
    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
    let outputPath = documentsURL?.appendingPathComponent("squareVideo.mov")
    if FileManager.default.fileExists(atPath: (outputPath?.path)!) {
        do {
           try FileManager.default.removeItem(atPath: (outputPath?.path)!)
        }
        catch {
            print ("Error deleting file")
        }
    }



    //input file
    let asset = AVAsset.init(url: filePath)
    print (asset)
    let composition = AVMutableComposition.init()
    composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)

    //input clip
    let clipVideoTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]

Create the layer with the watermark image:

    //adding the image layer
    let imglogo = UIImage(named: "video_button")
    let watermarkLayer = CALayer()
    watermarkLayer.contents = imglogo?.cgImage
    watermarkLayer.frame = CGRect(x: 5, y: 25 ,width: 57, height: 57)
    watermarkLayer.opacity = 0.85

Create the layer with text as the watermark instead of an image:

    let textLayer = CATextLayer()
    textLayer.string = "Nodat"
    textLayer.foregroundColor = UIColor.red.cgColor
    textLayer.font = UIFont.systemFont(ofSize: 50)
    textLayer.alignmentMode = kCAAlignmentCenter
    textLayer.bounds = CGRect(x: 5, y: 25, width: 100, height: 20)
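Several comments under the other answers ask how to pick a proper frame for the CATextLayer. One rough approach (an illustrative sketch, not part of the original answer) is to size and position the layer relative to the video's render size rather than using hardcoded points:

```swift
// Sketch: build a text layer pinned near the bottom of the video,
// scaling the font and frame with the video's render size.
// `videoSize` is assumed to be the composition's render size
// (e.g. clipVideoTrack.naturalSize).
func bottomTextLayer(text: String, videoSize: CGSize) -> CATextLayer {
    let layer = CATextLayer()
    layer.string = text
    layer.foregroundColor = UIColor.white.cgColor
    layer.fontSize = videoSize.height / 12        // scale font with the video
    layer.alignmentMode = kCAAlignmentCenter
    // Core Animation layers here use a bottom-left origin, so a small
    // y offset keeps the text near the bottom edge of the frame.
    layer.frame = CGRect(x: 0, y: videoSize.height * 0.05,
                         width: videoSize.width, height: videoSize.height / 10)
    layer.contentsScale = UIScreen.main.scale     // avoid blurry text
    return layer
}
```

Setting `contentsScale` matters: without it, the text renders at 1x and looks blurry in the exported video.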

Add the layers over the video in the correct order to apply the watermark:

    let videoSize = clipVideoTrack.naturalSize
    let parentlayer = CALayer()
    let videoLayer = CALayer()

    parentlayer.frame = CGRect(x: 0, y: 0, width: videoSize.height, height: videoSize.height)
    videoLayer.frame = CGRect(x: 0, y: 0, width: videoSize.height, height: videoSize.height)
    parentlayer.addSublayer(videoLayer)
    parentlayer.addSublayer(watermarkLayer)
    parentlayer.addSublayer(textLayer) //for text layer only

Cropping the video to a square format of size 300x300:

 //make it square
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = CGSize(width: 300, height: 300) //change it as per your needs.
    videoComposition.frameDuration = CMTimeMake(1, 30)
    videoComposition.renderScale = 1.0

    //Magic line for adding watermark to the video
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayers: [videoLayer], in: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(60, 30)) //hardcoded to 60 seconds; use asset.duration to cover the whole clip

Rotating to portrait:

    //rotate to portrait
    let transformer = AVMutableVideoCompositionLayerInstruction(assetTrack: clipVideoTrack)
    let t1 = CGAffineTransform(translationX: clipVideoTrack.naturalSize.height, y: -(clipVideoTrack.naturalSize.width - clipVideoTrack.naturalSize.height) / 2)
    let t2: CGAffineTransform = t1.rotated(by: .pi/2)
    let finalTransform: CGAffineTransform = t2
    transformer.setTransform(finalTransform, at: kCMTimeZero)
    instruction.layerInstructions = [transformer]
    videoComposition.instructions = [instruction]

The final step: exporting the video:

    let exporter = AVAssetExportSession.init(asset: asset, presetName: AVAssetExportPresetMediumQuality)
    exporter?.outputFileType = AVFileTypeQuickTimeMovie
    exporter?.outputURL = outputPath
    exporter?.videoComposition = videoComposition

    exporter?.exportAsynchronously() { handler -> Void in
        if exporter?.status == .completed {
            print("Export complete")
            DispatchQueue.main.async(execute: {
                completion(outputPath)
            })
            return
        } else if exporter?.status == .failed {
            print("Export failed - \(String(describing: exporter?.error))")
        }
        completion(nil)
        return
    }

This will export a square video with the watermark, either as text or an image.

Thanks!

【Comments】:

  • Thanks, but this code shows the video rotated and distorted!
  • For some reason the export is too slow. This only happens when there's a videoComposition.
【Solution 4】:

Just download the code and use it. It's on the Apple developer documentation page:

http://developer.apple.com/library/ios/#samplecode/AVSimpleEditoriOS/Listings/AVSimpleEditor_AVSERotateCommand_m.html

【Comments】:

【Solution 5】:

Here is the example on swift3 of how to insert both animated (array of images/slideshow/frames) and static image watermarks into a recorded video.

It uses CAKeyframeAnimation to animate the frames, and it uses AVMutableCompositionTrack, AVAssetExportSession, and AVMutableVideoComposition together with AVMutableVideoCompositionInstruction to combine everything.
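The frame animation it mentions can be sketched like this (a minimal illustration, not the linked example's actual code; `images` is assumed to be the array of watermark frames to cycle through):

```swift
// Sketch: animate the `contents` of an overlay layer through an array of
// frames, synchronized with the video composition's timeline.
let overlay = CALayer()
overlay.frame = CGRect(x: 5, y: 25, width: 57, height: 57)

let animation = CAKeyframeAnimation(keyPath: "contents")
animation.values = images.map { $0.cgImage as Any }
animation.duration = 2.0                        // one pass through the frames
animation.repeatCount = .greatestFiniteMagnitude
animation.calculationMode = kCAAnimationDiscrete // jump between frames, no blending
// Required so the animation is timed against the video composition,
// not against wall-clock time:
animation.beginTime = AVCoreAnimationBeginTimeAtZero
animation.isRemovedOnCompletion = false
overlay.add(animation, forKey: "contents")
```

The `overlay` layer is then added as a sublayer of the parent layer passed to AVVideoCompositionCoreAnimationTool, just like the static watermark layers in the other answers.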

【Comments】:

【Solution 6】:

Using the Swift example code from mikitamanko's blog for adding a CALayer to a video, I made a few small changes to fix the following error:

Error Domain=AVFoundationErrorDomain Code=-11841 "Operation Stopped" UserInfo={NSLocalizedFailureReason=The video could not be composed., NSLocalizedDescription=Operation Stopped, NSUnderlyingError=0x2830559b0 {Error Domain=NSOSStatusErrorDomain Code=-17390 "(null)"}}

The fix is to use the composition's video track instead of the original video track when setting the layer instruction, as in the following Swift 5 code:

static func addSketchLayer(url: URL, sketchLayer: CALayer, block: @escaping (Result<URL, VideoExportError>) -> Void) {
    let composition = AVMutableComposition()
    let vidAsset = AVURLAsset(url: url)

    let videoTrack = vidAsset.tracks(withMediaType: AVMediaType.video)[0]
    let duration = vidAsset.duration
    let vid_timerange = CMTimeRangeMake(start: CMTime.zero, duration: duration)

    let videoRect = CGRect(origin: .zero, size: videoTrack.naturalSize)
    let transformedVideoRect = videoRect.applying(videoTrack.preferredTransform)
    let size = transformedVideoRect.size

    let compositionvideoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))!

    try? compositionvideoTrack.insertTimeRange(vid_timerange, of: videoTrack, at: CMTime.zero)
    compositionvideoTrack.preferredTransform = videoTrack.preferredTransform

    let videolayer = CALayer()
    videolayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    videolayer.opacity = 1.0
    sketchLayer.contentsScale = 1

    let parentlayer = CALayer()
    parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    sketchLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    parentlayer.addSublayer(videolayer)
    parentlayer.addSublayer(sketchLayer)

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
    layercomposition.renderScale = 1.0
    layercomposition.renderSize = CGSize(width: size.width, height: size.height)

    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayers: [videolayer], in: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: composition.duration)
    // Use the composition's video track here, not the original asset's track:
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionvideoTrack)
    layerinstruction.setTransform(compositionvideoTrack.preferredTransform, at: CMTime.zero)
    instruction.layerInstructions = [layerinstruction] as [AVVideoCompositionLayerInstruction]
    layercomposition.instructions = [instruction] as [AVVideoCompositionInstructionProtocol]

    let compositionAudioTrack: AVMutableCompositionTrack? = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
    let audioTracks = vidAsset.tracks(withMediaType: AVMediaType.audio)
    for audioTrack in audioTracks {
        try? compositionAudioTrack?.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: CMTime.zero)
    }

    let movieDestinationUrl = URL(fileURLWithPath: NSTemporaryDirectory() + "/exported.mp4")
    try? FileManager().removeItem(at: movieDestinationUrl)

    let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)!
    assetExport.outputFileType = AVFileType.mp4
    assetExport.outputURL = movieDestinationUrl
    assetExport.videoComposition = layercomposition

    assetExport.exportAsynchronously(completionHandler: {
        switch assetExport.status {
        case AVAssetExportSessionStatus.failed:
            print(assetExport.error ?? "unknown error")
            block(.failure(.failed))
        case AVAssetExportSessionStatus.cancelled:
            print(assetExport.error ?? "unknown error")
            block(.failure(.canceled))
        default:
            block(.success(movieDestinationUrl))
        }
    })
}

enum VideoExportError: Error {
    case failed
    case canceled
}

Note that according to AVFoundation Crash on Exporting Video With Text Layer, this code crashes only on the simulator, but works on a real device.

Also note that the width and height are used after applying the preferred video transform.

【Comments】:
