Java – Using ffmpeg-encoded video from javacv on Android causes native code to crash

Note: I’ve updated this since I originally posted the question, to reflect some of what I’ve learned about loading live camera images into the ffmpeg libraries.

I’m using the ffmpeg build in javacv, compiled for Android, to encode/decode video for my application. (Note that originally I tried ffmpeg-java, but it has some incompatible libraries.)

Original problem: The issue I’m having is that I currently get each frame as a Bitmap (a plain android.graphics.Bitmap) and I can’t figure out how to feed that into the encoder.

Solution in ffmpeg for javacv: use avpicture_fill(). The format on Android is supposedly YUV420P, although I can’t verify this until my encoder issue (below) is fixed.

    avcodec.avpicture_fill((AVPicture)mFrame, picPointer, avutil.PIX_FMT_YUV420P, VIDEO_WIDTH, VIDEO_HEIGHT);

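One way to get a plain android.graphics.Bitmap into that planar YUV420P byte layout is to convert its ARGB pixels by hand. Below is a minimal sketch; bitmapToYuv420p is a hypothetical helper written for illustration (not part of javacv), using the common BT.601 integer-math coefficients:

    import android.graphics.Bitmap;

    // Hypothetical helper: convert a Bitmap's ARGB pixels to a planar
    // YUV420P buffer (full Y plane, then quarter-size Cb and Cr planes).
    static byte[] bitmapToYuv420p(Bitmap bmp) {
        int w = bmp.getWidth(), h = bmp.getHeight();
        int[] argb = new int[w * h];
        bmp.getPixels(argb, 0, w, 0, 0, w, h);
        byte[] yuv = new byte[w * h * 3 / 2];
        int yIndex = 0, uIndex = w * h, vIndex = w * h + (w * h) / 4;
        for (int j = 0; j < h; j++) {
            for (int i = 0; i < w; i++) {
                int p = argb[j * w + i];
                int r = (p >> 16) & 0xff, g = (p >> 8) & 0xff, b = p & 0xff;
                // BT.601 RGB -> YCbCr, integer approximation
                int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
                int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
                int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
                yuv[yIndex++] = (byte) Math.max(0, Math.min(255, y));
                if ((j & 1) == 0 && (i & 1) == 0) { // 2x2 chroma subsampling
                    yuv[uIndex++] = (byte) Math.max(0, Math.min(255, u));
                    yuv[vIndex++] = (byte) Math.max(0, Math.min(255, v));
                }
            }
        }
        return yuv;
    }

The resulting array could then be wrapped in a BytePointer and handed to avpicture_fill() as above.
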
Now the problem: the thread that does the actual encoding crashes. I get a big native-code stack trace that I can’t make sense of. Does anyone have any suggestions?

This is the code I use to instantiate all the ffmpeg libraries:

    avcodec.avcodec_register_all();
    avcodec.avcodec_init();
    avformat.av_register_all();

    mCodec = avcodec.avcodec_find_encoder(avcodec.CODEC_ID_H263);
    if (mCodec == null)
    {
        Logging.Log("Unable to find encoder.");
        return;
    }
    Logging.Log("Found encoder.");

    mCodecCtx = avcodec.avcodec_alloc_context();
    mCodecCtx.bit_rate(300000);
    mCodecCtx.codec(mCodec);
    mCodecCtx.width(VIDEO_WIDTH);
    mCodecCtx.height(VIDEO_HEIGHT);
    mCodecCtx.pix_fmt(avutil.PIX_FMT_YUV420P);
    mCodecCtx.codec_id(avcodec.CODEC_ID_H263);
    mCodecCtx.codec_type(avutil.AVMEDIA_TYPE_VIDEO);
    // time_base of 1/30 corresponds to 30 fps
    AVRational ratio = new AVRational();
    ratio.num(1);
    ratio.den(30);
    mCodecCtx.time_base(ratio);
    mCodecCtx.coder_type(1);
    mCodecCtx.flags(mCodecCtx.flags() | avcodec.CODEC_FLAG_LOOP_FILTER);
    mCodecCtx.me_cmp(avcodec.FF_CMP_CHROMA);
    mCodecCtx.me_method(avcodec.ME_HEX);
    mCodecCtx.me_subpel_quality(6);
    mCodecCtx.me_range(16);
    mCodecCtx.gop_size(30);
    mCodecCtx.keyint_min(10);
    mCodecCtx.scenechange_threshold(40);
    mCodecCtx.i_quant_factor((float) 0.71);
    mCodecCtx.b_frame_strategy(1);
    mCodecCtx.qcompress((float) 0.6);
    mCodecCtx.qmin(10);
    mCodecCtx.qmax(51);
    mCodecCtx.max_qdiff(4);
    mCodecCtx.max_b_frames(1);
    mCodecCtx.refs(2);
    mCodecCtx.directpred(3);
    mCodecCtx.trellis(1);
    mCodecCtx.flags2(mCodecCtx.flags2() | avcodec.CODEC_FLAG2_BPYRAMID | avcodec.CODEC_FLAG2_WPRED | avcodec.CODEC_FLAG2_8X8DCT | avcodec.CODEC_FLAG2_FASTPSKIP);

    // avcodec_open() returns 0 on success and a negative value on failure
    if (avcodec.avcodec_open(mCodecCtx, mCodec) < 0)
    {
        Logging.Log("Unable to open encoder.");
        return;
    }
    Logging.Log("Encoder opened.");

    mFrameSize = avcodec.avpicture_get_size(avutil.PIX_FMT_YUV420P, VIDEO_WIDTH, VIDEO_HEIGHT);
    Logging.Log("Frame size - '" + mFrameSize + "'.");
    mPic = new AVPicture(mFrameSize);
    mFrame = avcodec.avcodec_alloc_frame();
    if (mFrame == null)
    {
        Logging.Log("Unable to alloc frame.");
    }

This is what I hope to be able to do next:

    BytePointer picPointer = new BytePointer(data);
    int bBuffSize = mFrameSize;

    BytePointer bBuffer = new BytePointer(bBuffSize);

    int picSize = 0;
    if ((picSize = avcodec.avpicture_fill((AVPicture)mFrame, picPointer, avutil.PIX_FMT_YUV420P, VIDEO_WIDTH, VIDEO_HEIGHT)) <= 0)
    {
        Logging.Log("Couldn't convert preview to AVPicture (" + picSize + ")");
        return;
    }
    Logging.Log("Converted preview to AVPicture (" + picSize + ")");

    VCAP_Package vPackage = new VCAP_Package();

    if (mCodecCtx.isNull())
    {
        Logging.Log("Codec Context is null!");
    }

    // Encode the image
    int size = avcodec.avcodec_encode_video(mCodecCtx, bBuffer, bBuffSize, mFrame);

    int totalSize = 0;
    while (size > 0)
    {
        totalSize += size;
        Logging.Log("Encoded '" + size + "' bytes.");
        // Get any delayed frames
        size = avcodec.avcodec_encode_video(mCodecCtx, bBuffer, bBuffSize, null);
    }
    Logging.Log("Finished encoding. (" + totalSize + ")");

However, as of now I don’t know how to get the Bitmap data into the right place, or whether I have that setup correct.
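
One point worth noting here: avpicture_fill() performs no pixel-format conversion. It only points the AVPicture’s plane pointers into the supplied buffer, so the bytes in data must already be planar YUV420P. Android camera previews are not delivered in that layout (NV21 is the documented default), which is why the working solution below converts the preview bytes before filling the planes.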

A few constants used in the code:

    VIDEO_WIDTH = 352
    VIDEO_HEIGHT = 288
    VIDEO_FPS = 30
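
As a quick sanity check on those values: avpicture_get_size() for YUV420P at 352 × 288 should come out to 352 × 288 × 3/2 = 152064 bytes, which can be compared against the mFrameSize logged above.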

Solution

After a lot of searching, I found that you have to load the pointers in a fairly strict and awkward manner. This is how I got everything working:

Codec settings:

    avcodec.avcodec_register_all();
    avcodec.avcodec_init();
    avformat.av_register_all();

    /* find the H263 video encoder */
    mCodec = avcodec.avcodec_find_encoder(avcodec.CODEC_ID_H263);
    if (mCodec == null) {
        Log.d("TEST_VIDEO", "avcodec_find_encoder() run fail.");
    }

    mCodecCtx = avcodec.avcodec_alloc_context();
    picture = avcodec.avcodec_alloc_frame();

    /* put sample parameters */
    mCodecCtx.bit_rate(400000);
    /* resolution must be a multiple of two */
    mCodecCtx.width(VIDEO_WIDTH);
    mCodecCtx.height(VIDEO_HEIGHT);
    /* frames per second */
    AVRational avFPS = new AVRational();
    avFPS.num(1);
    avFPS.den(VIDEO_FPS);
    mCodecCtx.time_base(avFPS);
    mCodecCtx.pix_fmt(avutil.PIX_FMT_YUV420P);
    mCodecCtx.codec_id(avcodec.CODEC_ID_H263);
    mCodecCtx.codec_type(avutil.AVMEDIA_TYPE_VIDEO);

    /* open it */
    if (avcodec.avcodec_open(mCodecCtx, mCodec) < 0) {
        Log.d("TEST_VIDEO", "avcodec_open() run fail.");
    }

    /* alloc image and output buffer */
    output_buffer_size = 100000;
    output_buffer = avutil.av_malloc(output_buffer_size);

    size = mCodecCtx.width() * mCodecCtx.height();
    picture_buffer = avutil.av_malloc((size * 3) / 2); /* size for YUV 420 */

    /* point the three planes (Y, Cb, Cr) into the single buffer */
    picture.data(0, new BytePointer(picture_buffer));
    picture.data(1, picture.data(0).position(size));
    picture.data(2, picture.data(1).position(size / 4));
    picture.linesize(0, mCodecCtx.width());
    picture.linesize(1, mCodecCtx.width() / 2);
    picture.linesize(2, mCodecCtx.width() / 2);
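
For reference, with VIDEO_WIDTH = 352 and VIDEO_HEIGHT = 288 this lays out a single 152064-byte buffer as three planes: Y takes size = 101376 bytes at a linesize of 352, and Cb and Cr each take size / 4 = 25344 bytes at a linesize of 176.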

Process preview data:

    // (1) Convert the preview byte[] to YUV420 first
    byte[] data420 = new byte[data.length];
    convert_yuv422_to_yuv420(data, data420, VIDEO_WIDTH, VIDEO_HEIGHT);

    // (2) Fill the picture buffer
    int data1_offset = VIDEO_HEIGHT * VIDEO_WIDTH;
    int data2_offset = data1_offset * 5 / 4;
    int pic_linesize_0 = picture.linesize(0);
    int pic_linesize_1 = picture.linesize(1);
    int pic_linesize_2 = picture.linesize(2);

    // Y plane
    for (int y = 0; y < VIDEO_HEIGHT; y++)
    {
        for (int x = 0; x < VIDEO_WIDTH; x++)
        {
            picture.data(0).put((y * pic_linesize_0 + x), data420[y * VIDEO_WIDTH + x]);
        }
    }

    // Cb and Cr planes
    for (int y = 0; y < VIDEO_HEIGHT / 2; y++) {
        for (int x = 0; x < VIDEO_WIDTH / 2; x++) {
            picture.data(1).put((y * pic_linesize_1 + x), data420[data1_offset + y * VIDEO_WIDTH / 2 + x]);
            picture.data(2).put((y * pic_linesize_2 + x), data420[data2_offset + y * VIDEO_WIDTH / 2 + x]);
        }
    }

    // (3) Encode the image into output_buffer
    out_size = avcodec.avcodec_encode_video(mCodecCtx, new BytePointer(output_buffer), output_buffer_size, picture);
    Log.d("TEST_VIDEO", "Encoded '" + out_size + "' bytes");

    // Get any delayed frames
    while (out_size > 0) {
        out_size = avcodec.avcodec_encode_video(mCodecCtx, new BytePointer(output_buffer), output_buffer_size, null);
        Log.d("TEST_VIDEO", "Encoded '" + out_size + "' bytes");
        // The original C-style code called fwrite(output_buffer, 1, out_size, file);
        // in Java, copy out of the native buffer and write to a stream instead
        // (outputStream is a placeholder for however the output file is opened)
        byte[] encoded = new byte[out_size];
        new BytePointer(output_buffer).get(encoded, 0, out_size);
        outputStream.write(encoded, 0, out_size);
    }
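
The convert_yuv422_to_yuv420() helper is not shown in the post. A minimal sketch, assuming the input is packed YUYV (4:2:2) and the output is the planar Y/Cb/Cr layout the fill loops above expect, could look like this (if the preview is actually NV21, the de-interleaving differs accordingly):

    // Convert packed YUYV (YUV 4:2:2) to planar YUV 4:2:0.
    // Chroma is subsampled vertically by keeping every other row.
    static void convert_yuv422_to_yuv420(byte[] src, byte[] dst, int width, int height) {
        int ySize = width * height;
        int uOffset = ySize;                 // Cb plane starts after Y
        int vOffset = ySize + ySize / 4;     // Cr plane starts after Cb
        int uvIndex = 0;
        for (int row = 0; row < height; row++) {
            int srcRow = row * width * 2;    // 2 bytes per pixel in YUYV
            for (int col = 0; col < width; col += 2) {
                int i = srcRow + col * 2;    // [Y0, U, Y1, V] per pixel pair
                dst[row * width + col]     = src[i];     // Y0
                dst[row * width + col + 1] = src[i + 2]; // Y1
                if ((row & 1) == 0) {        // keep chroma from even rows only
                    dst[uOffset + uvIndex] = src[i + 1]; // U
                    dst[vOffset + uvIndex] = src[i + 3]; // V
                    uvIndex++;
                }
            }
        }
    }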

I’m still working on packaging the data, but the ongoing test project can be found at http://code.google.com/p/test-video-encode/
