【Question】: After compressing my audio file, why can I not play the file?
【Posted】: 2018-08-15 07:04:40
【Description】:

The audio file will not play after being shrunk with AVAssetReader/AVAssetWriter.

Currently the whole function runs fine and no errors are thrown. For some reason, when I browse to the simulator's Documents directory via Terminal, the audio file will not play in iTunes, and trying to open it with QuickTime gives the error "QuickTime Player can't open 'test1.m4a'".

Does anyone with experience in this area understand why this isn't working?

protocol FileConverterDelegate {
  func fileConversionCompleted()
}

class WKAudioTools: NSObject {

  var delegate: FileConverterDelegate?

  var url: URL?
  var assetReader: AVAssetReader?
  var assetWriter: AVAssetWriter?

  func convertAudio() {

    let documentDirectory = try! FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
    let exportURL = documentDirectory.appendingPathComponent(Assets.soundName1).appendingPathExtension("m4a")

    url = Bundle.main.url(forResource: Assets.soundName1, withExtension: Assets.mp3)

    guard let assetURL = url else { return }
    let asset = AVAsset(url: assetURL)

    //reader
    do {
      assetReader = try AVAssetReader(asset: asset)
    } catch let error {
      print("Error with reading >> \(error.localizedDescription)")
    }

    let assetReaderOutput = AVAssetReaderAudioMixOutput(audioTracks: asset.tracks, audioSettings: nil)
    //let assetReaderOutput = AVAssetReaderTrackOutput(track: track!, outputSettings: nil)

    guard let assetReader = assetReader else {
      print("reader is nil")
      return
    }

    if assetReader.canAdd(assetReaderOutput) == false {
      print("Can't add output to the reader ☹️")
      return
    }

    assetReader.add(assetReaderOutput)

    // writer
    do {
      assetWriter = try AVAssetWriter(outputURL: exportURL, fileType: .m4a)
    } catch let error {
      print("Error with writing >> \(error.localizedDescription)")
    }

    var channelLayout = AudioChannelLayout()

    memset(&channelLayout, 0, MemoryLayout.size(ofValue: channelLayout))
    channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo

    // use different values to affect the downsampling/compression
    let outputSettings: [String: Any] = [AVFormatIDKey: kAudioFormatMPEG4AAC,
                                         AVSampleRateKey: 44100.0,
                                         AVNumberOfChannelsKey: 2,
                                         AVEncoderBitRateKey: 128000,
                                         AVChannelLayoutKey: NSData(bytes: &channelLayout, length:  MemoryLayout.size(ofValue: channelLayout))]

    let assetWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: outputSettings)

    guard let assetWriter = assetWriter else { return }

    if assetWriter.canAdd(assetWriterInput) == false {
      print("Can't add asset writer input ☹️")
      return
    }

    assetWriter.add(assetWriterInput)
    assetWriterInput.expectsMediaDataInRealTime = false

    // MARK: - File conversion
    assetWriter.startWriting()
    assetReader.startReading()

    let audioTrack = asset.tracks[0]

    let startTime = CMTime(seconds: 0, preferredTimescale: audioTrack.naturalTimeScale)

    assetWriter.startSession(atSourceTime: startTime)

    // We need to do this on another thread, so let's set up a dispatch group...
    var convertedByteCount = 0
    let dispatchGroup = DispatchGroup()

    let mediaInputQueue = DispatchQueue(label: "mediaInputQueue")
    //... and go
    dispatchGroup.enter()
    assetWriterInput.requestMediaDataWhenReady(on: mediaInputQueue) {
      while assetWriterInput.isReadyForMoreMediaData {
        let nextBuffer = assetReaderOutput.copyNextSampleBuffer()

        if nextBuffer != nil {
          assetWriterInput.append(nextBuffer!)  // FIXME: Handle this safely
          convertedByteCount += CMSampleBufferGetTotalSampleSize(nextBuffer!)
        } else {
          // done!
          assetWriterInput.markAsFinished()
          assetReader.cancelReading()
          dispatchGroup.leave()

          DispatchQueue.main.async {
            // Notify delegate that conversion is complete
            self.delegate?.fileConversionCompleted()
            print("Process complete")

            if assetWriter.status == .failed {
              print("Writing asset failed ☹️ Error: ", assetWriter.error)
            }
          }
          break
        }
      }
    }
  }
}

【Comments】:

  • Can you explain what the code is for? I see you're saving an mp3 as an m4a, but something else must be going on, because you don't need the sample-buffer stuff.
  • I used this reference for my solution >> gist.github.com/abeldomingues/fe8fa797fd55603f2f4a
  • My understanding is that the sample buffers are useful for observing the compression progress, but I don't think they're required.
  • Well, if all you want to do is transcode the mp3, that's overkill...

Tags: swift avfoundation core-audio avassetwriter avassetreader


【Solution 1】:

You need to call finishWriting on your AVAssetWriter for the output file to be written out completely:

assetWriter.finishWriting {
    DispatchQueue.main.async {
        // Notify delegate that conversion is complete
        self.delegate?.fileConversionCompleted()
        print("Process complete")

        if assetWriter.status == .failed {
            print("Writing asset failed ☹️ Error: ", assetWriter.error)
        }
    }
}
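Putting the fix in context: in the question's requestMediaDataWhenReady loop, the end-of-stream branch should close the input and then finalize the file, rather than cancel the reader and leave the dispatch group. A sketch of the corrected branch, assuming the same assetWriterInput, assetReaderOutput, and assetWriter as in the question:

```swift
assetWriterInput.requestMediaDataWhenReady(on: mediaInputQueue) {
    while assetWriterInput.isReadyForMoreMediaData {
        if let nextBuffer = assetReaderOutput.copyNextSampleBuffer() {
            assetWriterInput.append(nextBuffer)
            convertedByteCount += CMSampleBufferGetTotalSampleSize(nextBuffer)
        } else {
            // No more samples: close the input, then finalize the container.
            // finishWriting writes the moov atom; without it the .m4a is unplayable.
            assetWriterInput.markAsFinished()
            assetWriter.finishWriting {
                DispatchQueue.main.async {
                    self.delegate?.fileConversionCompleted()
                }
            }
            break
        }
    }
}
```

Note that the completion handler of finishWriting fires only after the file is fully written, so that is the safe place to notify the delegate.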

If exportURL already exists before you start the conversion, you should remove it first, otherwise the conversion will fail (note that an unconditional try! would crash on the first run, when the file does not exist yet):

if FileManager.default.fileExists(atPath: exportURL.path) {
    try? FileManager.default.removeItem(at: exportURL)
}

As @matt points out, why bother with the buffer business when you can do the conversion more simply with AVAssetExportSession, and why convert the asset at all when you could ship your own asset in the desired format in the first place?
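For reference, the AVAssetExportSession route can be sketched like this (the function name and URL parameters here are illustrative, not from the question):

```swift
import AVFoundation

func exportToM4A(from sourceURL: URL, to destinationURL: URL,
                 completion: @escaping (Bool) -> Void) {
    let asset = AVAsset(url: sourceURL)
    // AVAssetExportPresetAppleM4A transcodes the audio to AAC in an .m4a container.
    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetAppleM4A) else {
        completion(false)
        return
    }
    session.outputURL = destinationURL
    session.outputFileType = .m4a
    session.exportAsynchronously {
        completion(session.status == .completed)
    }
}
```

This trades fine-grained control over bit rate and sample rate for a much smaller amount of code; if the preset's output size is acceptable, it is the simpler choice.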

【Discussion】:

  • That did it!! Thank you so much! Unfortunately I don't understand this technology very well, and the answers I've found have been very limited. Do the sample buffers make the compression happen in packets, or are they completely unnecessary? All I want to do is shrink a 27 MB sound file before passing it to the Apple Watch; right now this code gets it down to 10 MB. Do you think AVAssetExportSession would be a better candidate, and do you have a working example I could look at? Thanks again.