【Posted】: 2013-07-30 21:26:59
【Problem description】:
I'm trying to record sound from the microphone and play it back in real time on OS X. Eventually it will be streamed over the network, but for now I'm just trying to get local record/playback working.
I can record sound and write it to a file, which I've done with both AVCaptureSession and AVAudioRecorder. However, I'm not sure how to play the audio back while it is being recorded. Using AVCaptureAudioDataOutput works:
self.captureSession = [[AVCaptureSession alloc] init];
AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];

AVCaptureAudioDataOutput *audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
self.serialQueue = dispatch_queue_create("audioQueue", NULL);
[audioDataOutput setSampleBufferDelegate:self queue:self.serialQueue];

if (audioInput && [self.captureSession canAddInput:audioInput] && [self.captureSession canAddOutput:audioDataOutput]) {
    [self.captureSession addInput:audioInput];
    [self.captureSession addOutput:audioDataOutput];
    [self.captureSession startRunning];

    // Stop after an arbitrary time
    double delayInSeconds = 4.0;
    dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
    dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
        [self.captureSession stopRunning];
    });
} else {
    NSLog(@"Couldn't add them; error = %@", error);
}
...but I'm not sure how to implement the callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    ?
}
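One possible shape for this callback (a sketch only; `ringBufferWrite` is a hypothetical helper, not an AVFoundation API) is to copy the raw bytes out of the sample buffer and hand them to whatever is doing playback, e.g. an output AudioQueue or AudioUnit:

```objc
// Sketch: assumes the block buffer is contiguous and that a playback
// callback elsewhere drains whatever ringBufferWrite() stores.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t totalLength = 0;
    char *dataPointer = NULL;
    if (CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &totalLength, &dataPointer) == kCMBlockBufferNoErr) {
        // Do no heavy work here; this runs on the capture queue.
        ringBufferWrite(dataPointer, totalLength);
    }
}
```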
I tried copying code from this SO answer to get the data out of sampleBuffer and play it with an AVAudioPlayer, but that code crashes on the appendBytes:length: call.
AudioBufferList audioBufferList;
NSMutableData *data = [NSMutableData data];
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
    AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
    Float32 *frame = (Float32 *)audioBuffer.mData;
    NSLog(@"Length = %i", audioBuffer.mDataByteSize);
    [data appendBytes:frame length:audioBuffer.mDataByteSize]; // Crashes here
}
CFRelease(blockBuffer);

NSError *playerError;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:data error:&playerError];
if (player && !playerError) {
    NSLog(@"Player was valid");
    [player play];
} else {
    NSLog(@"Error = %@", playerError);
}
Edit: The CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer call returns OSStatus code -12737, which according to the documentation is kCMSampleBufferError_ArrayTooSmall.
Edit2: Based on this mailing list response, I passed a size_t out parameter as the second argument to ...GetAudioBufferList.... It returned 40. For now I'm just passing 40 in as a hard-coded value, which seems to work (at least the OSStatus return value is 0).
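For reference, the size doesn't have to be hard-coded; the usual pattern (sketched here, untested) is to call the function twice, first to ask for the required AudioBufferList size and then with a buffer of that size:

```objc
// First call: query how large the AudioBufferList needs to be.
size_t bufferListSizeNeeded = 0;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
    &bufferListSizeNeeded, NULL, 0, NULL, NULL, 0, NULL);

// Second call: allocate that much and fetch the actual buffer list.
AudioBufferList *audioBufferList = malloc(bufferListSizeNeeded);
CMBlockBufferRef blockBuffer = NULL;
OSStatus status = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
    NULL, audioBufferList, bufferListSizeNeeded, NULL, NULL, 0, &blockBuffer);

// ... use audioBufferList ...
if (blockBuffer) CFRelease(blockBuffer);
free(audioBufferList);
```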
Now the player's initWithData:error: call fails with:
Error Domain=NSOSStatusErrorDomain Code=1954115647 "The operation couldn’t be completed. (OSStatus error 1954115647.)" I'm looking into it.
I've been doing iOS programming for a long time, but until now I hadn't used AVFoundation, Core Audio, etc. It looks like there are a dozen ways to accomplish the same thing, depending on how low- or high-level you want to go, so any high-level overview or framework recommendations would be appreciated.
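As one concrete data point for the lower-level routes: the AudioQueue and AudioUnit approaches both need a FIFO between the capture callback (producer) and the playback callback (consumer). A minimal single-producer/single-consumer ring buffer in plain C (a sketch, not tied to any Apple API; real-time code would also want to tune memory ordering):

```c
#include <stdatomic.h>
#include <stddef.h>

#define RB_CAPACITY 8192  /* bytes; power of two so wrap-around stays correct */

typedef struct {
    unsigned char buf[RB_CAPACITY];
    atomic_size_t head;  /* total bytes written (producer side) */
    atomic_size_t tail;  /* total bytes read (consumer side) */
} RingBuffer;

/* Producer side: called from the capture callback. Drops bytes that
   don't fit rather than blocking. Returns bytes actually written. */
static size_t rb_write(RingBuffer *rb, const void *src, size_t len) {
    size_t head = atomic_load(&rb->head);
    size_t tail = atomic_load(&rb->tail);
    size_t free_space = RB_CAPACITY - (head - tail);
    if (len > free_space) len = free_space;
    for (size_t i = 0; i < len; i++)
        rb->buf[(head + i) % RB_CAPACITY] = ((const unsigned char *)src)[i];
    atomic_store(&rb->head, head + len);
    return len;
}

/* Consumer side: called from the playback callback. Returns bytes read;
   a real playback callback would zero-fill any shortfall (underrun). */
static size_t rb_read(RingBuffer *rb, void *dst, size_t len) {
    size_t head = atomic_load(&rb->head);
    size_t tail = atomic_load(&rb->tail);
    size_t avail = head - tail;
    if (len > avail) len = avail;
    for (size_t i = 0; i < len; i++)
        ((unsigned char *)dst)[i] = rb->buf[(tail + i) % RB_CAPACITY];
    atomic_store(&rb->tail, tail + len);
    return len;
}
```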
Appendix
Recording to a file
Recording to a file with AVCaptureSession:
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(captureSessionStartedNotification:) name:AVCaptureSessionDidStartRunningNotification object:nil];

    self.captureSession = [[AVCaptureSession alloc] init];
    AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    NSError *error = nil;
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
    AVCaptureAudioFileOutput *audioOutput = [[AVCaptureAudioFileOutput alloc] init];

    if (audioInput && [self.captureSession canAddInput:audioInput] && [self.captureSession canAddOutput:audioOutput]) {
        NSLog(@"Can add the inputs and outputs");
        [self.captureSession addInput:audioInput];
        [self.captureSession addOutput:audioOutput];
        [self.captureSession startRunning];

        double delayInSeconds = 5.0;
        dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
        dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
            [self.captureSession stopRunning];
        });
    }
    else {
        NSLog(@"Error was = %@", error);
    }
}

- (void)captureSessionStartedNotification:(NSNotification *)notification
{
    AVCaptureSession *session = notification.object;
    id audioOutput = session.outputs[0];
    NSLog(@"Capture session started; notification = %@", notification);
    NSLog(@"Notification audio output = %@", audioOutput);

    [audioOutput startRecordingToOutputFileURL:[[self class] outputURL] outputFileType:AVFileTypeAppleM4A recordingDelegate:self];
}

+ (NSURL *)outputURL
{
    NSArray *searchPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentPath = [searchPaths objectAtIndex:0];
    NSString *filePath = [documentPath stringByAppendingPathComponent:@"z1.alac"];
    return [NSURL fileURLWithPath:filePath];
}
Recording to a file with AVAudioRecorder:
NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:AVAudioQualityMin], AVEncoderAudioQualityKey,
                                [NSNumber numberWithInt:16], AVEncoderBitRateKey,
                                [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                @(kAudioFormatAppleLossless), AVFormatIDKey,
                                nil];

NSError *recorderError;
self.recorder = [[AVAudioRecorder alloc] initWithURL:[[self class] outputURL] settings:recordSettings error:&recorderError];
self.recorder.delegate = self;
if (self.recorder && !recorderError) {
    NSLog(@"Success!");
    [self.recorder recordForDuration:10];
} else {
    NSLog(@"Failure, recorder = %@", self.recorder);
    NSLog(@"Error = %@", recorderError);
}
【Comments】:
-
The good news is that I've finished five chapters of Learning Core Audio and feel close to a solution using Audio Queues.
-
Would you be willing to use a third-party audio library? Un4Seen has the BASS audio library, which is a very simple wrapper around Core Audio functionality. It can do what you describe (although I'm not sure how close to real time you can get).
-
@BigMacAttack I'd like to stay away from 3rd-party libraries, since I want to eventually be able to stream to iOS devices. The good news is I've just finished writing code that answers my own question; I'll post it after cleaning it up.
-
Fair enough. And congratulations on figuring it out! But I'd also like to mention, for anyone else reading this, that BASS is available on iOS.
Tags: macos audio avfoundation