I'm building a remote-controlled car with an ESP32-CAM, which should broadcast a video stream at 240x176 pixels. I want to display that stream on a TTGO T-Watch, which has a 240x240 display.
What MCU/Processor/Board and compiler are you using?
ESP32
What LVGL version are you using?
The current version, I think.
What do you want to achieve?
Display the camera stream on the watch.
What have you tried so far?
I don't even know how to start.
Code to reproduce
Add a code snippet which can run in the simulator. It should contain only the relevant code that compiles without errors when separated from your main code base.
The code block(s) should be formatted like:
/*Your code here*/
Screenshot and/or video
If possible, add screenshots and/or videos about the current state.
Thank you for replying. I read through the canvas docs; it looks like a container object for combining things/making masks, but I found lv_canvas_draw_img(canvas, x, y, &img_src, &draw_dsc), and saw "A Canvas inherits from Image where the user can draw anything." The docs for the image widget say the image source can be:
a variable in the code (a C array with the pixels).
At the moment I think it's MJPEG; I know for sure it outputs stills as JPEG, at least. But I'm going to be writing the code for that side too, so I should have control over it if there's an easier way.
Okay. Now what you need is a way of decoding the MJPEG stream into a series of pictures, then these can be passed to an image or canvas widget for display.
There is already a second URL explicitly for still JPEG frames; can I use that as an image source? If there's no way to grant screen space directly, I would expect it to be faster than trying to extract a frame from a video stream.
That could work but I’m not sure whether the camera will be able to keep up with the number of HTTP requests being made. What frame rate are you aiming for?
If I can get 10-15 frames/second I'll be happy, but if they are stable/consistent, 5 is probably enough. I saw another thread about using an image decoder; would something like TJpgDec be appropriate?
We actually have a JPEG decoder library already (lv_lib_split_jpg). If you use that, all you should need to do is fetch a JPEG over HTTP into a buffer, and then use a fake image descriptor. Here is an example - it’s a bit incomplete but I think it gives the general idea:
lv_obj_t *img_obj;

static void update_img(void *downloaded_jpeg_buf, size_t buf_size) {
    static lv_img_dsc_t jpeg_dsc = {
        .header.always_zero = 0,
        .header.w = <camera width>,
        .header.h = <camera height>,
        .header.cf = LV_IMG_CF_RAW,
    };
    /* Set the buffer location and size each time in case it changes */
    jpeg_dsc.data = downloaded_jpeg_buf;
    jpeg_dsc.data_size = buf_size;
    lv_img_cache_invalidate_src(&jpeg_dsc); /* invalidate the JPEG so it gets decoded again */
    lv_img_set_src(img_obj, &jpeg_dsc);
}
You would want to call update_img each time you get a new JPEG from the camera.
The bottleneck here will probably be how quickly HTTP requests can be sent and responded to, and how fast the JPEG can be decoded. 5-10 fps is probably doable.
Hi! I have a Logitech camera and an ESP32-S3, and I was able to read 15 pictures/second from the camera over USB with the ESP. How do I show the pictures with LVGL? Can you give me some sample code?
I successfully displayed a JPG picture (converted to a C array with the online tool: Online image converter - BMP, JPG or PNG to C array or binary | LVGL). So how can I convert picture data to a C array on my ESP32-S3 itself?
Thank you!