Title: H264 Video Streaming over RTMP on iOS
Posted: 2013-07-16 13:55:55
Question:

After some digging, I found a library that can extract NAL units from a .mp4 file while it is being written. I'm trying to packetize this data over RTMP using libavformat and libavcodec. I set up the video stream with the following method:

-(void)setupVideoStream {
    int ret = 0;
    videoCodec = avcodec_find_decoder(STREAM_VIDEO_CODEC);

    if (videoCodec == nil) {
        NSLog(@"Could not find encoder %i", STREAM_VIDEO_CODEC);
        return;
    }

    videoStream                                 = avformat_new_stream(oc, videoCodec);

    videoCodecContext                           = videoStream->codec;

    videoCodecContext->codec_type               = AVMEDIA_TYPE_VIDEO;
    videoCodecContext->codec_id                 = STREAM_VIDEO_CODEC;
    videoCodecContext->pix_fmt                  = AV_PIX_FMT_YUV420P;
    videoCodecContext->profile                  = FF_PROFILE_H264_BASELINE;

    videoCodecContext->bit_rate                 = 512000;
    videoCodecContext->bit_rate_tolerance       = 0;

    videoCodecContext->width                    = STREAM_WIDTH;
    videoCodecContext->height                   = STREAM_HEIGHT;

    videoCodecContext->time_base.den            = STREAM_TIME_BASE;
    videoCodecContext->time_base.num            = 1;
    videoCodecContext->gop_size                 = STREAM_GOP;

    videoCodecContext->has_b_frames             = 0;
    videoCodecContext->ticks_per_frame          = 2;

    videoCodecContext->qcompress                = 0.6;
    videoCodecContext->qmax                     = 51;
    videoCodecContext->qmin                     = 10;
    videoCodecContext->max_qdiff                = 4;
    videoCodecContext->i_quant_factor           = 0.71;

    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        videoCodecContext->flags                |= CODEC_FLAG_GLOBAL_HEADER;

    videoCodecContext->extradata                = avcCHeader;
    videoCodecContext->extradata_size           = avcCHeaderSize;

    ret = avcodec_open2(videoStream->codec, videoCodec, NULL);
    if (ret < 0)
        NSLog(@"Could not open codec!");
}

I then connect, and every time the library extracts a NALU it hands my RTMPClient an array containing one or two NALUs. The method that handles the actual streaming looks like this:

-(void)writeNALUToStream:(NSArray*)data time:(double)pts {
    int ret = 0;
    uint8_t *buffer = NULL;
    int bufferSize = 0;

    // Number of NALUs within the data array
    int numNALUs = [data count];

    // First NALU
    NSData *fNALU = [data objectAtIndex:0];
    int fLen = [fNALU length];

    // If there is more than one NALU...
    if (numNALUs > 1) {
        // Second NALU
        NSData *sNALU = [data objectAtIndex:1];
        int sLen = [sNALU length];

        // Allocate a buffer the size of first data and second data
        buffer = av_malloc(fLen + sLen);

        // Copy the first data bytes of fLen into the buffer
        memcpy(buffer, [fNALU bytes], fLen);

        // Copy the second data bytes of sLen into the buffer + fLen + 1
        memcpy(buffer + fLen + 1, [sNALU bytes], sLen);

        // Update the size of the buffer
        bufferSize = fLen + sLen;
    }else {
        // Allocate a buffer the size of first data
        buffer = av_malloc(fLen);

        // Copy the first data bytes of fLen into the buffer
        memcpy(buffer, [fNALU bytes], fLen);

        // Update the size of the buffer
        bufferSize = fLen;
    }

    // Initialize the packet
    av_init_packet(&pkt);

    //av_packet_from_data(&pkt, buffer, bufferSize);

    // Set the packet data to the buffer
    pkt.data            = buffer;
    pkt.size            = bufferSize;
    pkt.pts             = pts;

    // Stream index 0 is the video stream
    pkt.stream_index    = 0;

    // Add a key frame flag every 15 frames
    if ((processedFrames % 15) == 0)
        pkt.flags       |= AV_PKT_FLAG_KEY;

    // Write the frame to the stream
    ret = av_interleaved_write_frame(oc, &pkt);
    if (ret < 0) 
        NSLog(@"Error writing frame %i to stream", processedFrames);
    else {
        // Update the number of frames successfully streamed
        frameCount++;
        // Update the number of bytes successfully sent
        bytesSent += pkt.size;
    }

    // Update the number of frames processed
    processedFrames++;
    // Update the number of bytes processed
    processedBytes += pkt.size;

    free((uint8_t*)buffer);
    // Free the packet
    av_free_packet(&pkt);
}

After roughly 100 frames I get an error:

malloc: *** error for object 0xe5bfa0: incorrect checksum for freed object - object was probably modified after being freed. *** set a breakpoint in malloc_error_break to debug

I can't seem to stop this from happening. I've tried commenting out the av_free_packet() call and the free(), and tried using av_packet_from_data() instead of initializing the packet and setting its data and size fields by hand.

My question is: how can I stop this error from happening? According to Wireshark these are well-formed RTMP H264 packets, yet they play back as nothing but a black screen. Is there some obvious mistake I'm overlooking?
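(Editor's note on the black-screen part: the code above sets AV_PKT_FLAG_KEY on every 15th packet regardless of what the NALU actually contains, which can mislabel frames. One way to check what is really being sent is to read the NAL unit type out of the Annex B header. A standalone sketch, assuming each buffer begins with a 3- or 4-byte start code; the function name is mine, not from any library:)

```c
#include <stdint.h>
#include <stddef.h>

/* Return the H.264 NAL unit type (e.g. 5 = IDR slice, 7 = SPS, 8 = PPS)
 * of an Annex B NALU, or -1 if no start code is found. */
int nal_unit_type(const uint8_t *buf, size_t len) {
    size_t i;
    if (len >= 4 && buf[0] == 0 && buf[1] == 0 && buf[2] == 0 && buf[3] == 1)
        i = 4;                        /* 4-byte start code 00 00 00 01 */
    else if (len >= 3 && buf[0] == 0 && buf[1] == 0 && buf[2] == 1)
        i = 3;                        /* 3-byte start code 00 00 01 */
    else
        return -1;
    if (i >= len)
        return -1;
    return buf[i] & 0x1F;             /* low 5 bits of the NAL header byte */
}
```

With this, one could set the key-frame flag only when the type is 5 (IDR) instead of counting frames.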

Comments:

  • Did you ever find a solution? I'm trying to achieve the same thing. When I try to play the video I just get some green frames. Which library are you using to read the NALUs? Thanks.

Tags: ios memory-management h.264 rtmp libav


Solution 1:

It looks to me like you have a buffer overrun that is corrupting your stream:

memcpy(buffer + fLen + 1, [sNALU bytes], sLen);

You allocate fLen + sLen bytes, but the second copy starts at offset fLen + 1, so its last byte lands at index fLen + sLen, one past the end of the buffer (and index fLen is left uninitialized). Just get rid of the + 1.
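In plain C, the corrected concatenation looks like this (a standalone sketch; concat_nalus is a made-up helper name, not part of libav):

```c
#include <stdlib.h>
#include <string.h>

/* Concatenate two NALUs back-to-back into one freshly malloc'd buffer.
 * The second memcpy lands at buffer + fLen, with no stray +1 offset.
 * Returns NULL on allocation failure; *outLen receives the total size. */
unsigned char *concat_nalus(const unsigned char *fNALU, size_t fLen,
                            const unsigned char *sNALU, size_t sLen,
                            size_t *outLen) {
    unsigned char *buffer = malloc(fLen + sLen);
    if (!buffer)
        return NULL;
    memcpy(buffer, fNALU, fLen);
    memcpy(buffer + fLen, sNALU, sLen);   /* NOT buffer + fLen + 1 */
    *outLen = fLen + sLen;
    return buffer;
}
```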

Since your AVPacket is allocated on the stack, av_free_packet() is unnecessary. Finally, it is considered good practice to allocate a few extra padding bytes for libav: av_malloc(size + FF_INPUT_BUFFER_PADDING_SIZE).
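A standalone sketch of that padded-allocation pattern, using plain malloc and a stand-in constant (FF_INPUT_BUFFER_PADDING_SIZE was 16 in libav builds of that era; real code should include libavcodec/avcodec.h and use the macro itself):

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for libavcodec's FF_INPUT_BUFFER_PADDING_SIZE. */
#define INPUT_BUFFER_PADDING 16

/* Allocate a packet buffer with the extra zeroed tail that libav's
 * bitstream readers expect, so they can safely over-read past the end. */
unsigned char *alloc_padded(size_t payload) {
    unsigned char *buf = malloc(payload + INPUT_BUFFER_PADDING);
    if (buf)
        memset(buf + payload, 0, INPUT_BUFFER_PADDING); /* zero the padding */
    return buf;
}
```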

Comments:
