FFmpeg's official examples include muxing.c, which demonstrates how to package (mux) streams with FFmpeg. The example has problems: to put it kindly it is incomplete, to put it bluntly it contains errors. ffmpeg.c itself is quite complete, and comparing the example against ffmpeg.c I found two main errors:
1. Before encoding with avcodec_encode_audio2/avcodec_encode_video2, no timestamp is set on the frame.
2. After the for loop in main there is no flush, so delayed frames still sitting in the encoder's buffer are never written to the output file. An encoder does not necessarily emit one output packet per input frame; it often consumes many frames before producing any output (I have seen encoders that needed 60 input frames before the first output frame appeared). This creates a problem: if the loop is driven by input, the loop ends when the input ends, and some frames remain in the buffer. At that point we must pull them out, encode them, and write them to the output file.
Below I modify the code, using audio as the example. First, the function write_audio_frame:
```c
static void write_audio_frame(AVFormatContext *oc, AVStream *st)
{
    AVCodecContext *c;
    AVPacket pkt = { 0 }; // data and size must be 0
    AVFrame *frame = NULL;
    int got_packet, ret, dst_nb_samples;
    AVRational r = {1, AV_TIME_BASE};

    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;
    c = st->codec;

    if (!frame && !(frame = avcodec_alloc_frame()))
        return;
    else
        avcodec_get_frame_defaults(frame);

    get_audio_frame((int16_t *)src_samples_data[0], src_nb_samples, c->channels);

    /* convert samples from native format to destination codec format,
     * using the resampler */
    if (swr_ctx) {
        /* compute destination number of samples */
        dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, c->sample_rate) + src_nb_samples,
                                        c->sample_rate, c->sample_rate, AV_ROUND_UP);
        if (dst_nb_samples > max_dst_nb_samples) {
            av_free(dst_samples_data[0]);
            ret = av_samples_alloc(dst_samples_data, &dst_samples_linesize,
                                   c->channels, dst_nb_samples, c->sample_fmt, 0);
            if (ret < 0)
                exit(1);
            max_dst_nb_samples = dst_nb_samples;
            dst_samples_size = av_samples_get_buffer_size(NULL, c->channels,
                                                          dst_nb_samples, c->sample_fmt, 0);
        }
        /* convert to destination format */
        ret = swr_convert(swr_ctx, dst_samples_data, dst_nb_samples,
                          (const uint8_t **)src_samples_data, src_nb_samples);
        if (ret < 0) {
            fprintf(stderr, "Error while converting\n");
            exit(1);
        }
    } else {
        dst_samples_data[0] = src_samples_data[0];
        dst_nb_samples = src_nb_samples;
    }

    frame->nb_samples = dst_nb_samples;
    avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
                             dst_samples_data[0], dst_samples_size, 0);

    /* The next two lines are my addition: the frame must be given a
     * timestamp before encoding; without them the pts produced by the
     * encoder is invalid. */
    frame->pts = lastpts;
    lastpts = frame->pts + frame->nb_samples;

    ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
    if (ret < 0) {
        fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
        exit(1);
    }

    if (!got_packet)
        return;

    pkt.stream_index = st->index;

    /* The next three lines are my addition; without them the muxer warns
     * "encoder did not produce proper pts, making some up". */
    pkt.pts      = av_rescale_q(pkt.pts,      st->codec->time_base, st->time_base);
    pkt.dts      = av_rescale_q(pkt.dts,      st->codec->time_base, st->time_base);
    pkt.duration = av_rescale_q(pkt.duration, st->codec->time_base, st->time_base);

    /* Write the compressed frame to the media file. */
    ret = av_interleaved_write_frame(oc, &pkt);
    if (ret != 0) {
        fprintf(stderr, "Error while writing audio frame: %s\n", av_err2str(ret));
        exit(1);
    }
    avcodec_free_frame(&frame);
}
```

Then modify the code in main: after the for(;;) loop, add the following.
```c
/* The lines below retrieve the delayed frames from the encoder. */
c = audio_st->codec;
for (got_output = 1; got_output > 0; i++) {
    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;
    /* Passing NULL as the frame puts the encoder into flush mode. */
    ret = avcodec_encode_audio2(c, &pkt, NULL, &got_output);
    if (ret < 0) {
        fprintf(stderr, "Error encoding frame\n");
        exit(1);
    }
    if (got_output) {
        pkt.pts      = av_rescale_q(pkt.pts,      c->time_base, audio_st->time_base);
        pkt.dts      = av_rescale_q(pkt.dts,      c->time_base, audio_st->time_base);
        pkt.duration = av_rescale_q(pkt.duration, c->time_base, audio_st->time_base);
        ret = av_interleaved_write_frame(oc, &pkt);
        av_free_packet(&pkt);
    }
}
```
Original: http://blog.csdn.net/relar/article/details/21785673