How to display a video stream from an ESP-CAM

Description

I'm building a remote-controlled car with an ESP-CAM that should broadcast a video stream at 240x176 pixels, and I want to display that stream on a TTGO T-Watch, which has a 240x240 display.

What MCU/Processor/Board and compiler are you using?

ESP32

What LVGL version are you using?

The current version, I think.

What do you want to achieve?

Display the camera stream on the watch.

What have you tried so far?

I don't even know how to start.

Code to reproduce

Add a code snippet which can run in the simulator. It should contain only the relevant code that compiles without errors when separated from your main code base.

The code block(s) should be formatted like:

/*Your code here*/

Screenshot and/or video

If possible, add screenshots and/or videos about the current state.


Check the lv_canvas widget in the LVGL docs.

Thank you for replying. I read through Canvas; it looks like a container object for combining things/making masks, but I found lv_canvas_draw_img(canvas, x, y, &img_src, &draw_dsc) and saw "A Canvas inherits from Image, where the user can draw anything." The docs for the Image widget say the image source can be:

  • a variable in the code (a C array with the pixels).
  • a file stored externally (like on an SD card).
  • a text with Symbols.

This makes it sound like it's limited to a single frame at a time, and that the frame needs to be stored before it can be displayed?
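For reference, a minimal sketch of the first two source types listed above (the names are made up: my_photo would be a C array produced by LVGL's image converter, and the file path assumes a registered filesystem driver; the two-argument create call is the v7-style API):

#include "lvgl.h"

LV_IMG_DECLARE(my_photo);   /* C array generated by the online image converter */

void img_source_example(void)
{
    lv_obj_t *img = lv_img_create(lv_scr_act(), NULL);   /* v8: lv_img_create(lv_scr_act()) */
    lv_img_set_src(img, &my_photo);            /* source: a variable in the code */
    /* lv_img_set_src(img, "S:photo.bin"); */  /* source: a file, e.g. on an SD card */
}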

As I see it, you need to show a picture from the camera.

From the docs about lv_canvas:
An array of pixels can be copied to the canvas with `lv_canvas_copy_buf(canvas, buffer_to_copy, x, y, width, height)`.

To generate a pixel array from a PNG, JPG or BMP image, use the Online image converter tool and set the converted image with its pointer.
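For illustration, a minimal sketch of the copy-to-canvas approach (assumptions of mine: the LVGL v7-style create API, a decoded true-color frame, and the 240x176 size mentioned above; the frame-buffer name is made up):

#include "lvgl.h"

#define CAM_W 240   /* matches the 240x176 stream from the ESP-CAM */
#define CAM_H 176

static lv_color_t canvas_buf[LV_CANVAS_BUF_SIZE_TRUE_COLOR(CAM_W, CAM_H)];
static lv_obj_t *canvas;

void canvas_init(void)
{
    canvas = lv_canvas_create(lv_scr_act(), NULL);   /* v8: lv_canvas_create(lv_scr_act()) */
    lv_canvas_set_buffer(canvas, canvas_buf, CAM_W, CAM_H, LV_IMG_CF_TRUE_COLOR);
}

void show_frame(const lv_color_t *cam_frame_buf)   /* cam_frame_buf: hypothetical decoded frame */
{
    /* Copy the raw pixel array into the canvas buffer */
    lv_canvas_copy_buf(canvas, cam_frame_buf, 0, 0, CAM_W, CAM_H);
    lv_obj_invalidate(canvas);   /* make sure the area gets redrawn */
}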

I don't think that's going to work here, unless the converter has an API that allows it to be used remotely.

To start with, what is the format of the video stream from the ESP-CAM? That needs to be known before figuring out how to integrate with LVGL.

At the moment I think it's MJPEG; I know for sure it outputs stills as JPEG at least. But I'm going to be writing the code for that too, so I should have control over it if there's an easier way.

So I have confirmed it to be an HTTP stream, and I have also found a way to use RTSP; both, I believe, stream MJPEG.

Okay. Now what you need is a way of decoding the MJPEG stream into a series of pictures, then these can be passed to an image or canvas widget for display.
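For reference, a rough sketch of one way to pull single JPEG frames out of an MJPEG byte stream, by scanning for the JPEG start-of-image (0xFF 0xD8) and end-of-image (0xFF 0xD9) markers (this is a simplification of mine; a real MJPEG-over-HTTP stream also carries multipart boundary headers between frames, which this ignores):

#include <stddef.h>
#include <stdint.h>

/* Return the length of the first complete JPEG found in buf, or 0 if none.
 * On success, *start is set to the offset of the FF D8 marker. */
static size_t find_jpeg_frame(const uint8_t *buf, size_t len, size_t *start)
{
    size_t soi = 0;
    int have_soi = 0;
    for (size_t i = 0; i + 1 < len; i++) {
        if (!have_soi && buf[i] == 0xFF && buf[i + 1] == 0xD8) {
            soi = i;
            have_soi = 1;
        } else if (have_soi && buf[i] == 0xFF && buf[i + 1] == 0xD9) {
            *start = soi;
            return (i + 2) - soi;   /* frame length including the EOI marker */
        }
    }
    return 0;   /* no complete frame in the buffer yet */
}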

There is already a second URL explicitly for still JPEG frames; can I use that as an image source? If there's no way to grant screen space directly, I would expect it to be faster than trying to extract a frame from a video stream.

That could work but I’m not sure whether the camera will be able to keep up with the number of HTTP requests being made. What frame rate are you aiming for?

If I can get 10-15/second I'll be happy, but if they are stable/consistent, 5 is probably enough. I saw another thread about using an image decoder; would something like TJpgDec be appropriate?

We actually have a JPEG decoder library already (lv_lib_split_jpg). If you use that, all you should need to do is fetch a JPEG over HTTP into a buffer, and then use a fake image descriptor. Here is an example - it’s a bit incomplete but I think it gives the general idea:

lv_obj_t *img_obj; /* created elsewhere with lv_img_create() */
static void update_img(void *downloaded_jpeg_buf, size_t buf_size) {
    static lv_img_dsc_t jpeg_dsc = {
        .header.always_zero = 0,
        .header.w = 240,  /* camera width (the 240x176 stream) */
        .header.h = 176,  /* camera height */
        .header.cf = LV_IMG_CF_RAW,
    };
    /* Set the buffer location and size each time in case they change */
    jpeg_dsc.data = downloaded_jpeg_buf;
    jpeg_dsc.data_size = buf_size;
    lv_img_cache_invalidate_src(&jpeg_dsc); /* invalidate the JPEG so it gets decoded again */
    lv_img_set_src(img_obj, &jpeg_dsc);
}

You would want to call update_img each time you get a new JPEG from the camera.
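To tie it together, here is a rough sketch of grabbing one still frame over HTTP and feeding it to update_img(). It assumes ESP-IDF's esp_http_client, that the lv_lib_split_jpg decoder has already been registered (it provides an init call for that, lv_split_jpeg_init()), and the URL and buffer size are made-up placeholders:

#include <stdint.h>
#include "esp_http_client.h"

#define JPEG_BUF_SIZE (32 * 1024)          /* assumption: one 240x176 JPEG fits here */
static uint8_t jpeg_buf[JPEG_BUF_SIZE];

static void fetch_and_show_frame(void)
{
    esp_http_client_config_t cfg = {
        .url = "http://192.168.4.1/capture",   /* hypothetical still-JPEG URL on the ESP-CAM */
    };
    esp_http_client_handle_t client = esp_http_client_init(&cfg);
    if (client == NULL) return;

    if (esp_http_client_open(client, 0) == ESP_OK) {
        esp_http_client_fetch_headers(client);
        int total = 0, r;
        /* Read the response body into the buffer in chunks */
        while (total < JPEG_BUF_SIZE &&
               (r = esp_http_client_read(client, (char *)jpeg_buf + total,
                                         JPEG_BUF_SIZE - total)) > 0) {
            total += r;
        }
        if (total > 0) {
            update_img(jpeg_buf, (size_t)total);   /* hand the JPEG to the snippet above */
        }
        esp_http_client_close(client);
    }
    esp_http_client_cleanup(client);
}

You could call fetch_and_show_frame() from a periodic task at whatever rate the camera and Wi-Fi link can sustain.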

The bottleneck here will probably be how quickly HTTP requests can be sent and responded to, and how fast the JPEG can be decoded. 5-10 fps is probably doable.

Hi! I have a Logitech camera and an ESP32-S3, and I can read 15 pictures/second from the camera over USB with the ESP. How can I show the pictures with LVGL? Can you give me some sample code?
I successfully showed a JPG picture (converted to a C array with the online tool: Online image converter - BMP, JPG or PNG to C array or binary | LVGL). So how can I convert the picture data to a C array on my ESP32-S3?
Thank you!