【Posted】: 2014-10-31 14:26:39
【Question】:
First of all, I'm new to C and Objective-C.
I'm trying to FFT an audio buffer and plot its graph. I use an Audio Unit render callback to get the audio buffer. The callback delivers 512 frames, but everything after frame 471 is 0. (I don't know whether this is normal. It used to deliver 471 frames, all filled with numbers, but now the 512-frame slice is somehow zero after frame 471. Please let me know if this is normal.)
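To check this, one can count how many trailing samples of each slice are exactly zero. The helper below is only a diagnostic sketch (not part of my app); exact zeros are rare in live microphone input, so a long run of them usually means the tail of the slice was never filled:

static UInt32 TrailingZeroSamples(const Float32 *buf, UInt32 n)
{
    UInt32 zeros = 0;
    while (zeros < n && buf[n - 1 - zeros] == 0.0f) {
        zeros++;
    }
    return zeros;   // 41 would mean frames 471..511 are zero
}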
Anyway, I can get the buffer from the callback, apply the FFT, and plot it. That works perfectly; the result is below. The graph stays very smooth as long as I FFT the buffer I get in each callback.
But for my case I need a 3-second buffer to apply the FFT to and plot. So I tried concatenating the buffers from two callbacks, then applying the FFT and plotting that. But the result is not what I expected: while the graph above stays very smooth and precise during recording (only the magnitudes at 18 and 19 kHz change), when I concatenate two buffers the simulator mostly shows two different views that swap between each other very fast. They are shown below. Of course they both basically show 18 and 19 kHz, but I need the precise kHz so that I can apply further algorithms in the app I'm developing.
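For scale, a back-of-the-envelope sketch (assuming a 44.1 kHz mono Float32 stream and a power-of-two FFT length, as Accelerate's vDSP FFT routines require):

// 3 s of audio:  3 * 44100 = 132300 samples
//   nearest power of two: 131072 (~2.97 s), or zero-pad up to 262144
// Frequency resolution = sampleRate / fftLength:
//   1024-point FFT:    44100.0 / 1024.0    ≈ 43.1 Hz per bin
//   131072-point FFT:  44100.0 / 131072.0  ≈ 0.34 Hz per bin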
Here is my callback code:
// FFTInputBufferLen and FFTInputBufferFrameIndex are global
// tempFilteredBuffer is also allocated globally
// by the way, FFTInputBufferLen = 1024;
static OSStatus performRender (void                        *inRefCon,
                               AudioUnitRenderActionFlags  *ioActionFlags,
                               const AudioTimeStamp        *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList             *ioData)
{
    UInt32 bus1 = 1;
    CheckError(AudioUnitRender(effectState.rioUnit,
                               ioActionFlags,
                               inTimeStamp,
                               bus1,
                               inNumberFrames,
                               ioData), "Couldn't render from RemoteIO unit");

    Float32 *renderBuff = ioData->mBuffers[0].mData;
    ViewController *vc = (__bridge ViewController *) inRefCon;

    // inNumberFrames comes in as 512, as described above
    for (int i = 0; i < inNumberFrames; i++)
    {
        // InputBuffers[5] is defined globally;
        // the five Float32 * buffers are also allocated globally
        InputBuffers[bufferCount][FFTInputBufferFrameIndex] = renderBuff[i];
        FFTInputBufferFrameIndex++;
        if (FFTInputBufferFrameIndex == FFTInputBufferLen)
        {
            int bufCount = bufferCount;
            dispatch_async(dispatch_get_main_queue(), ^{
                tempFilteredBuffer = [vc FilterData_rawSamples:InputBuffers[bufCount] numSamples:FFTInputBufferLen];
                [vc CalculateFFTwithPlotting_Data:tempFilteredBuffer NumberofSamples:FFTInputBufferLen];
                free(InputBuffers[bufCount]);
                InputBuffers[bufCount] = (Float32 *)malloc(sizeof(Float32) * FFTInputBufferLen);
            });
            FFTInputBufferFrameIndex = 0;
            bufferCount++;
            if (bufferCount == 5)
            {
                bufferCount = 0;
            }
        }
    }
    return noErr;
}
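For comparison, here is a minimal sketch of another way to collect the frames (hypothetical names kAnalysisLen, gAccum, gWriteIndex; it assumes the AudioUnitRender call shown above has already filled ioData). It copies whole slices with memcpy instead of a per-sample loop, and it hands the main queue its own copy of the window so the render thread never shares a buffer with the block. The malloc on the render thread is not real-time safe and is shown only for brevity; a lock-free ring buffer would be the proper tool:

#define kAnalysisLen (44100 * 3)   // 3 s of mono Float32 samples (132300)

static Float32 gAccum[kAnalysisLen];   // written only on the render thread
static UInt32  gWriteIndex = 0;

static OSStatus accumulateRender(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    // ... AudioUnitRender(...) as in the callback above, then:
    Float32 *samples = (Float32 *)ioData->mBuffers[0].mData;

    UInt32 n = inNumberFrames;
    if (gWriteIndex + n > kAnalysisLen) {
        n = kAnalysisLen - gWriteIndex;   // clamp; drops the slice's tail
    }
    memcpy(gAccum + gWriteIndex, samples, n * sizeof(Float32));
    gWriteIndex += n;

    if (gWriteIndex == kAnalysisLen) {
        // Give the main queue its own copy, so the render thread can keep
        // writing into gAccum without racing the block.
        Float32 *window = (Float32 *)malloc(kAnalysisLen * sizeof(Float32));
        memcpy(window, gAccum, kAnalysisLen * sizeof(Float32));
        gWriteIndex = 0;
        dispatch_async(dispatch_get_main_queue(), ^{
            // filter + FFT + plot the 3 s window here, then:
            free(window);
        });
    }
    return noErr;
}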
Here is my AudioUnit setup:
- (void)setupIOUnit
{
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    CheckError(AudioComponentInstanceNew(comp, &_rioUnit), "couldn't create a new instance of AURemoteIO");

    UInt32 one = 1;
    CheckError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one)), "could not enable input on AURemoteIO");

    // I removed this so the recorded audio doesn't come back out of the speakers. Am I right?
    //CheckError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one)), "could not enable output on AURemoteIO");

    UInt32 maxFramesPerSlice = 4096;
    CheckError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32)), "couldn't set max frames per slice on AURemoteIO");

    UInt32 propSize = sizeof(UInt32);
    CheckError(AudioUnitGetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize), "couldn't get max frames per slice on AURemoteIO");

    AudioUnitElement bus1 = 1;

    AudioStreamBasicDescription myASBD;
    myASBD.mSampleRate = 44100;
    myASBD.mChannelsPerFrame = 1;
    myASBD.mFormatID = kAudioFormatLinearPCM;
    myASBD.mBytesPerFrame = sizeof(Float32) * myASBD.mChannelsPerFrame;
    myASBD.mFramesPerPacket = 1;
    myASBD.mBytesPerPacket = myASBD.mFramesPerPacket * myASBD.mBytesPerFrame;
    myASBD.mBitsPerChannel = sizeof(Float32) * 8;
    myASBD.mFormatFlags = 9 | 12;

    // I also removed this to avoid getting audio back!!
    // CheckError(AudioUnitSetProperty(_rioUnit,
    //                                 kAudioUnitProperty_StreamFormat,
    //                                 kAudioUnitScope_Input,
    //                                 bus0,
    //                                 &myASBD,
    //                                 sizeof(myASBD)), "Couldn't set ASBD for RIO on input scope / bus 0");

    CheckError(AudioUnitSetProperty(_rioUnit,
                                    kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Output,
                                    bus1,
                                    &myASBD,
                                    sizeof(myASBD)), "Couldn't set ASBD for RIO on output scope / bus 1");

    effectState.rioUnit = _rioUnit;

    AURenderCallbackStruct renderCallback;
    renderCallback.inputProc = performRender;
    renderCallback.inputProcRefCon = (__bridge void *)(self);
    CheckError(AudioUnitSetProperty(_rioUnit,
                                    kAudioUnitProperty_SetRenderCallback,
                                    kAudioUnitScope_Input,
                                    0,
                                    &renderCallback,
                                    sizeof(renderCallback)), "couldn't set render callback on AURemoteIO");

    CheckError(AudioUnitInitialize(_rioUnit), "couldn't initialize AURemoteIO instance");
}
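As an aside on the magic numbers: 9 | 12 expands to kAudioFormatFlagIsFloat | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked, which declares the stream to be float and signed integer at the same time. A spelled-out sketch of the same ASBD for a mono Float32 stream, keeping only the flags a float format needs, would be:

AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate       = 44100;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
asbd.mChannelsPerFrame = 1;
asbd.mBitsPerChannel   = 8 * sizeof(Float32);
asbd.mBytesPerFrame    = sizeof(Float32) * asbd.mChannelsPerFrame;
asbd.mFramesPerPacket  = 1;
asbd.mBytesPerPacket   = asbd.mBytesPerFrame * asbd.mFramesPerPacket;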
My question is: why does this happen, and why does the output show two mostly different views when I concatenate two buffers? Is there another way of collecting buffers and applying DSP? What am I doing wrong? If concatenating this way is right, is my logic incorrect? (I have checked it many times, though.)
What I am really asking is: how can I get a clean 3-second buffer?
I really need help. Best regards.
【Comments】:
- That sounds like there are too many calculation steps in your render callback. Just two hints: lower the sample rate, or replace the dispatch_async part with something simple, and see whether I'm right or wrong.
- Hi Michael, thanks for your comment. I need the 44100 sample rate, and since I'm new to this, honestly I don't know anything other than dispatch_async.
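To make the first commenter's hint concrete, a minimal sketch of "replace dispatch_async with something simple" (hypothetical names; the render callback only flips a flag, and a main-thread NSTimer polls it):

#import <libkern/OSAtomic.h>

static volatile int32_t gWindowReady = 0;   // 1 = a full window is waiting

// In the render callback, instead of dispatch_async, just:
//     OSAtomicCompareAndSwap32Barrier(0, 1, &gWindowReady);

// On the main thread, poll from an NSTimer:
- (void)pollForWindow:(NSTimer *)timer
{
    if (OSAtomicCompareAndSwap32Barrier(1, 0, &gWindowReady)) {
        // run FilterData_rawSamples: and CalculateFFTwithPlotting_Data: here
    }
}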
Tags: ios objective-c core-audio audio-recording audiounit