How do I combine an LVGL GUI and a live video stream

I have an application which outputs live video into a rectangle in a frame buffer. I would like to use LVGL to put a GUI around this rectangle.

I have created a simple GUI with two buttons and a mouse cursor. This is output properly into a Linux frame buffer. The live video normally paints over the GUI, but when I move the mouse the whole screen is repainted.

What would be the proper way to keep LVGL from painting over an area of the frame buffer?

Hi,

Do you have 2 frame buffers (one for the video and another for the UI, with your hardware merging them)? If so, you need to enable LV_COLOR_SCREEN_TRANSP.
Learn more here: https://docs.lvgl.io/latest/en/html/overview/display.html#transparent-screens
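A minimal sketch of the two-buffer setup, assuming LVGL v7 (where LV_COLOR_SCREEN_TRANSP lives in lv_conf.h and, per the linked docs, requires LV_COLOR_DEPTH 32). The function name is my own; the style call is the v7 API:

```c
/* In lv_conf.h (assumption: LVGL v7):
 *   #define LV_COLOR_DEPTH         32
 *   #define LV_COLOR_SCREEN_TRANSP 1
 */
#include "lvgl/lvgl.h"

/* Make the active screen's background fully transparent so the video
 * layer underneath shows through wherever no widget is drawn. */
void make_screen_transparent(void)
{
    /* v7 style API; in v8+ the equivalent would be
     * lv_obj_set_style_bg_opa(lv_scr_act(), LV_OPA_TRANSP, 0); */
    lv_obj_set_style_local_bg_opa(lv_scr_act(), LV_OBJ_PART_MAIN,
                                  LV_STATE_DEFAULT, LV_OPA_TRANSP);
}
```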

If you have only one buffer then you can create an lv_img_dsc_t variable manually whose data pointer points to the memory where the video is decoded. After that you can use this lv_img_dsc_t as the source of a normal lv_img widget. Note that the image needs to be invalidated whenever a new frame is ready: lv_obj_invalidate(my_img);
Example:

lv_img_dsc_t video_img = {
  .header.always_zero = 0,
  .header.w = 100,                                /* frame width */
  .header.h = 100,                                /* frame height */
  .data_size = 100*100*sizeof(lv_color_t),
  .header.cf = LV_IMG_CF_TRUE_COLOR,
  .data = (const uint8_t *) video_buf,            /* the video's pixel buffer */
};
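To connect the descriptor above to the UI, a hedged usage sketch (v7 lv_img_create signature; my_frame_ready_cb is a hypothetical hook in your video pipeline, not an LVGL API):

```c
#include "lvgl/lvgl.h"

static lv_obj_t * my_img;

/* Wrap the video_img descriptor in an image widget. */
void video_gui_create(void)
{
    my_img = lv_img_create(lv_scr_act(), NULL);  /* LVGL v7 signature */
    lv_img_set_src(my_img, &video_img);
    lv_obj_set_pos(my_img, 20, 20);
}

/* Hypothetical callback, called by your pipeline when video_buf
 * holds a new frame: tell LVGL the widget's area must be redrawn. */
void my_frame_ready_cb(void)
{
    lv_obj_invalidate(my_img);
}
```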

I’m also trying to do an OSD menu on a 240x240 screen with an ESP32. The background screen is displaying an MJPEG video. Each frame is not loaded into a full frame buffer; instead it passes through two 240x16 buffers and is pushed to the screen by DMA. With the hardware in hand I cannot allocate any full frame buffer.
My question is: is it possible to make the OSD menu happen with the above RAM limitation?
With the transparent-screen method, how do I update the screen background in chunks and sync with the LVGL task?
With the image-object solution, is it possible to create a custom decoder that reads not a single line but a 16-line block (since the images in the video are compressed in chunks, not in lines)?

Note that you can’t write to the same screen (layer) both with LVGL and by hand. The JPEG stream should also be managed by LVGL.

You can write an image decoder. Ask your JPEG decoder for 16 lines at a time, serve those 16 lines one by one from the decoder’s read-line callback, and then fetch the next chunk from the JPEG.
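The chunk-caching idea behind that read-line callback can be sketched independently of the LVGL decoder plumbing. This is plain C under assumed parameters (240 px wide, RGB565, 16-line JPEG chunks); decode_block() is a stand-in for the real JPEG chunk decode, and in a real decoder read_line() would live inside the lv_img_decoder read-line callback:

```c
#include <stdint.h>
#include <string.h>

#define IMG_W    240
#define BLOCK_H  16           /* the JPEG decoder emits 16-line blocks */
#define BYTES_PP 2            /* RGB565 */

static uint8_t block_cache[BLOCK_H * IMG_W * BYTES_PP];
static int     cached_block   = -1;  /* which block the cache holds */
static int     blocks_decoded = 0;   /* counter, for illustration */

/* Stand-in for the real JPEG decoder: fill the cache with block `idx`. */
static void decode_block(int idx)
{
    memset(block_cache, (uint8_t)idx, sizeof(block_cache));
    cached_block = idx;
    blocks_decoded++;
}

/* Core of a read-line callback: copy one line `y` into `buf`,
 * decoding a new 16-line block only on a cache miss. */
static void read_line(int y, uint8_t *buf)
{
    int block = y / BLOCK_H;
    if (block != cached_block) decode_block(block);   /* cache miss */
    memcpy(buf, &block_cache[(y % BLOCK_H) * IMG_W * BYTES_PP],
           IMG_W * BYTES_PP);
}
```

Since LVGL renders top-to-bottom, sequential y values hit the cache 15 times out of 16, so each block is decoded only once per pass.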

Hello! Sorry for reviving the post, but I think an answer should stay here :slight_smile:

I’m also trying to render streams in LVGL, so I have to write my own image decoder and invalidate an (image) object when I get a new frame. It’s a natural desire to put this logic inside the decoder, as I’m trying to implement this inside lv_bindings_js. How could I do this? In other words, how can I get the obj instance inside the lv_res_t decoder_open(lv_img_decoder_t * decoder, lv_img_decoder_dsc_t * dsc) function so I can invalidate it later with lv_obj_invalidate?


Please add a demo with one video stream displayed alongside a UI to control the video, e.g. a capture-image button and start/stop buttons, if possible.

A project I recently wrote happens to serve as a reference: LVGL+Linux camera - My projects - LVGL Forum


I am using it on a Renesas RZ/V2L Arm-based processor and am trying to develop with the help of your reference. For cross-compilation I set up the environment (poky-glibc-x86_64-core-image-weston-aarch64-smarc-rzv2l-toolchain-3.1.21).
make_error.txt (41.7 KB)

When I tried to run make on your code I got errors; I have attached the error log. Is there anything specific I need to change?
It would be great if you could help me.

Thank you.

This is the environment of my computer:

This is the compressed package for the project I am using: http://photos.100ask.net/lv_100ask_linux_camera.tar.xz

I am using:

CC ?= gcc
#CC := arm-buildroot-linux-gnueabihf-gcc

Do you think it is because of this?

Please check here: https://github.com/100askTeam/lv_100ask_linux_camera/blob/master/Makefile#L9