【Posted】: 2015-11-05 12:41:09
【Problem Description】:
So I've recently been digging into Core Audio a little, but I'm still a novice. I'm having trouble understanding exactly which data I'm working with and how it affects the overall data flow. For background: I have an app that streams video/audio between phones using WebRTC. I'd like to inspect the data that enters the device through the microphone and the data that goes out through the speaker. I've looked at the AurioTouch demo and Core Audio, and currently I have this:
- (void)setupIOUnit
{
    // Create a new instance of AURemoteIO
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioComponentInstanceNew(comp, &rioUnit);

    // Enable input and output on AURemoteIO.
    // Input is enabled on the input scope of the input element (bus 1);
    // output is enabled on the output scope of the output element (bus 0).
    UInt32 one = 1;
    AudioUnitSetProperty(rioUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &one, sizeof(one));
    AudioUnitSetProperty(rioUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Output, 0, &one, sizeof(one));

    // Set the MaximumFramesPerSlice property. This describes to the audio unit
    // the maximum number of samples it will be asked to produce on any single
    // call to AudioUnitRender.
    UInt32 maxFramesPerSlice = 4096;
    AudioUnitSetProperty(rioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                         kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32));

    // Get the property value back from AURemoteIO; we use it to allocate
    // buffers accordingly.
    UInt32 propSize = sizeof(UInt32);
    AudioUnitGetProperty(rioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                         kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize);

    // Set the render callback on AURemoteIO: input scope of the output
    // element (bus 0), i.e. the callback that supplies the data to be played.
    AURenderCallbackStruct renderCallback;
    renderCallback.inputProc = performRender;
    renderCallback.inputProcRefCon = NULL;
    AudioUnitSetProperty(rioUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &renderCallback, sizeof(renderCallback));
    NSLog(@"render set now");

    // Initialize and start the AURemoteIO instance
    AudioUnitInitialize(rioUnit);
    [self startIOUnit];
}
- (OSStatus)startIOUnit
{
    OSStatus err = AudioOutputUnitStart(rioUnit);
    if (err) NSLog(@"couldn't start AURemoteIO: %d", (int)err);
    return err;
}
The render callback function:
static OSStatus performRender(void                        *inRefCon,
                              AudioUnitRenderActionFlags  *ioActionFlags,
                              const AudioTimeStamp        *inTimeStamp,
                              UInt32                      inBusNumber,
                              UInt32                      inNumberFrames,
                              AudioBufferList             *ioData)
{
    OSStatus err = noErr;
    // Pull the rendered data into ioData. Note that rioUnit has to be visible
    // here (e.g. a file-scope variable), since inRefCon was set to NULL above.
    err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);

    if (ioData->mBuffers[0].mDataByteSize >= 12) {
        NSData *myAudioData = [NSData dataWithBytes:ioData->mBuffers[0].mData length:12];
        NSLog(@"playback's first 12 bytes: %@", myAudioData);
    }

    // Zero the buffers so the unit should play silence
    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i) {
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    }
    return err;
}
This does print out some data, but I can't tell at the moment whether it's microphone input or speaker output. What bothers me is that even after zeroing ioData's buffers, the other phone still receives my audio, and I can still hear the audio it sends back. That suggests I'm touching neither the microphone input nor the speaker output.
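One thing I've considered, based on other RemoteIO samples, is tapping the microphone side directly with a separate input callback on bus 1. A minimal sketch of what I mean (the scope/bus choices in the registration and the 16-bit mono assumption are my guesses from sample code, not something I've verified):

static OSStatus micTap(void                        *inRefCon,
                       AudioUnitRenderActionFlags  *ioActionFlags,
                       const AudioTimeStamp        *inTimeStamp,
                       UInt32                      inBusNumber,
                       UInt32                      inNumberFrames,
                       AudioBufferList             *ioData)
{
    // For input callbacks ioData is NULL, so we render into our own buffer list.
    static SInt16 scratch[4096];                    // sized to maxFramesPerSlice above
    AudioBufferList bufList;
    bufList.mNumberBuffers = 1;
    bufList.mBuffers[0].mNumberChannels = 1;        // assuming mono
    bufList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16); // assuming 16-bit samples
    bufList.mBuffers[0].mData = scratch;

    // Pull the captured samples from the input element (bus 1)
    OSStatus err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                   1, inNumberFrames, &bufList);
    // ...inspect or copy the captured samples here...
    return err;
}

// Registered during setup with something like:
// AURenderCallbackStruct cb = { micTap, NULL };
// AudioUnitSetProperty(rioUnit, kAudioOutputUnitProperty_SetInputCallback,
//                      kAudioUnitScope_Global, 1, &cb, sizeof(cb));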
I've also seen this line written with a few different parameters:
AudioUnitSetProperty(rioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &renderCallback, sizeof(renderCallback));
I'm wondering whether I've simply got some of these wrong. Also, is this line:
err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
affected by that AudioUnitSetProperty call? And what does passing 1 do in this context?
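From what I've pieced together out of the sample headers, the 1 is the element (bus) number, and the two RemoteIO elements mean different things (the enum labels below are just names I made up):

// RemoteIO element numbering, as I understand it:
//   element 1 = the input element (microphone side); AudioUnitRender on
//               bus 1 pulls the captured samples into the buffer list.
//   element 0 = the output element (speaker side); the render callback
//               attached to bus 0 supplies the samples the unit will play.
enum {
    kInputElement  = 1,   // mic -> app
    kOutputElement = 0    // app -> speaker
};

// If my understanding is right, the render call above reads the mic side:
err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp, kInputElement, inNumberFrames, ioData);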
Any help would be great. Ideally, I'd like to be able to sample both the speaker output (possibly writing it to a file) and the microphone input.
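For the file part, what I have in mind is something like this ExtAudioFile sketch (the path, sample rate, and 16-bit mono format are assumptions on my part; ExtAudioFileWriteAsync is documented as usable from the render thread after a priming call):

#import <AudioToolbox/ExtendedAudioFile.h>

static ExtAudioFileRef gDumpFile = NULL;

static void openDumpFile(void)
{
    // Assumed format: 16-bit signed integer, mono, 44.1 kHz linear PCM
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;

    // Placeholder path in the temp directory
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"tap.caf"];
    ExtAudioFileCreateWithURL((__bridge CFURLRef)[NSURL fileURLWithPath:path],
                              kAudioFileCAFType, &fmt, NULL,
                              kAudioFileFlags_EraseFile, &gDumpFile);

    // Prime the file for async writes from the render thread
    ExtAudioFileWriteAsync(gDumpFile, 0, NULL);
}

// Then, inside the render callback after AudioUnitRender:
// ExtAudioFileWriteAsync(gDumpFile, inNumberFrames, ioData);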
【Discussion】:
-
Perhaps you mean Core Audio or Core Media rather than Core Data?
-
Fixed; I've changed it to Core Audio.
Tags: ios objective-c core-audio