LVGL just keeps crashing

  • STM32 Cortex M0 on a proprietary board with 2.8" display
  • LVGL v9.2 and v8.4 have both been tried.
  • No OS
  • I want to achieve a working UI with touch functionality but without need for fancy graphics.
  • I’ve done extensive hardware debugging (~30h) trying to figure out where the problem is. Allocated LV_MEM is more than enough (25% usage, ~1% frag).
  • Tried with LV_MEM_CUSTOM = 1, HEAP size put to 0x2000 - no luck.

The issue is that the processor goes to HardFault_Handler with no way to anticipate it, even with the simplest Hello World example. Depending on the situation it crashes after anywhere from 10 seconds to 5 minutes; even if I just initialize LVGL without drawing any content, it won't stay alive for long. I've also done extensive content drawing using RLE-compressed images and can get everything to work, but stability remains the issue.

I've checked that LV_MEM does not overflow, the allocated buffers don't overflow, there is plenty of stack memory, LVGL traces and asserts are enabled, and LV_MEM_JUNK is defined, yet there is no hint of a root cause. I've run lv_timer_handler alone in the main loop and it still crashes. I've also commented lv_timer_handler out, without any luck. If I remove all LVGL-related code, the product works flawlessly.

#define LV_MEM_SIZE (8U * 1024U)

__attribute__((aligned(4))) static lv_color_t buf_1[320 * 10];
__attribute__((aligned(4))) static lv_color_t buf_2[320 * 10];
static lv_disp_t * disp;
static lv_disp_draw_buf_t disp_buf;
static lv_disp_drv_t disp_drv;

lv_init();
lv_disp_draw_buf_init(&disp_buf, buf_1, buf_2, 320 * 10);
lv_disp_drv_init(&disp_drv);

disp_drv.draw_buf = &disp_buf;          /*Set an initialized buffer*/
disp_drv.flush_cb = my_flush_cb;        /*Set a flush callback to draw to the display*/
disp_drv.hor_res = 320;                 /*Set the horizontal resolution in pixels*/
disp_drv.ver_res = 240;                 /*Set the vertical resolution in pixels*/

disp = lv_disp_drv_register(&disp_drv); /*Register the driver and save the created display object*/

lv_example_hello_world();

while(1)
{
    lv_timer_handler_run_in_period(5);  /* lv_tick_inc(1) called properly in a timer interrupt */
}

I've run my UI in the Windows simulator without any issues. 90% of the issues people have had seem to relate to LV_MEM being too small, but I'm 99% sure that's not the case here.

A further question regarding LV_MEM usage: why does LVGL allocate a junk region at the beginning of its mem array with information about its mem start address? All the business end seems to be further up, but on the other hand not at the top of the allocated region.
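To picture why that metadata sits at the bottom of the pool: a heap manager typically keeps its bookkeeping (start address, usage counters, block headers) in the first bytes of the array it manages, and hands out user memory above that. Below is a toy sketch of the idea; it is a simplified bump-style allocator invented for illustration, not LVGL's actual allocator.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy pool with a bookkeeping header living at the start of the array.
 * This is an illustration only, not LVGL's real memory manager. */
typedef struct {
    uint8_t *start;   /* pool start address, like the pointer visible in LV_MEM */
    size_t   used;    /* bytes handed out so far (including this header) */
    size_t   size;    /* total pool size */
} pool_header_t;

static uint8_t pool[1024];

static void *pool_alloc(size_t n)
{
    pool_header_t *h = (pool_header_t *)pool;  /* header occupies the bottom bytes */
    if (h->start == NULL) {                    /* lazy init on the first call */
        h->start = pool;
        h->used  = sizeof(pool_header_t);
        h->size  = sizeof(pool);
    }
    n = (n + 3u) & ~(size_t)3u;                /* keep allocations 4-byte aligned */
    if (h->used + n > h->size) return NULL;
    void *p = pool + h->used;                  /* user data starts above the header */
    h->used += n;
    return p;
}
```

So the "junk" at the start is the allocator's own header, and live allocations begin just above it, which matches what the memory view shows.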

Your first issue is how you are allocating the frame buffers. In LVGL 9.x the size of lv_color_t is always going to be 4 bytes; it never changes. You need to allocate the buffers as uint8_t, and the size needs to be width * height * bytes_per_pixel. If they are partial buffers, the optimum size is going to be 1/10th of that: width * height * bytes_per_pixel / 10.

You are allocating a buffer that is 12,800 bytes in size and then you are telling LVGL that the buffer is 3,200 bytes in size.

That’s problem #1
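For reference, the sizing rule above as plain arithmetic, with no LVGL calls (the 4-byte figure for v9 is taken from the reply above):

```c
#include <stdint.h>

/* Recommended partial draw buffer size in BYTES:
 * width * height * bytes_per_pixel / 10 (the 1/10th rule from the reply). */
static uint32_t partial_buf_bytes(uint32_t hor_res, uint32_t ver_res,
                                  uint32_t bytes_per_pixel)
{
    return hor_res * ver_res * bytes_per_pixel / 10u;
}

/* The numbers quoted in the reply: an array of 320 * 10 lv_color_t elements
 * at 4 bytes each occupies 12,800 bytes, while 3200 is passed as the size. */
static const uint32_t bytes_reserved = 320u * 10u * 4u;  /* 12800 */
static const uint32_t size_passed    = 320u * 10u;       /*  3200 */
```

Under a byte-count reading of the size argument, those two numbers disagree by a factor of four.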

The other thing is that without seeing the rest of the code, like the flush function, the timer code for lv_tick_inc, and the loop where lv_task_handler is called, it is going to be really hard to help isolate where the problem is.

From the v8.4 tutorial:

void lv_disp_draw_buf_init(lv_disp_draw_buf_t * draw_buf, void * buf1, void * buf2, uint32_t size_in_px_cnt)

where "size_in_px_cnt: size of the buf1 and buf2 in pixel count."

As far as I'm aware, the size of lv_color_t depends on the chosen color depth (16 bits in my case), which makes it 2 bytes. And when allocating, for testing purposes, static lv_color_t buf_1[1000], the .map file indeed shows a size of 2000 bytes.
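That 2-byte observation can be reproduced without LVGL. The typedef below is only a stand-in for what LV_COLOR_DEPTH 16 produces in v8, not the library's real definition:

```c
#include <stdint.h>

/* Stand-in for LVGL v8's lv_color_t at LV_COLOR_DEPTH 16 (illustration only:
 * a 16-bit RGB565 pixel occupies 2 bytes). */
typedef uint16_t my_lv_color16_t;

/* Same test allocation as described above: 1000 elements. */
static my_lv_color16_t buf_1[1000];
```

With a 2-byte element, the 1000-element array comes out at 2000 bytes, matching the .map file.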

I have now altered it per the suggestion to

__attribute__((used, aligned(4))) static uint8_t buf_1[320 * 240 * 2 / 10]; // 15360 bytes
__attribute__((used, aligned(4))) static uint8_t buf_2[320 * 240 * 2 / 10]; // 15360 bytes

lv_disp_draw_buf_init(&disp_buf, buf_1, buf_2, 7680); //(320*240/10) - px count in one buffer

but I am still having the same issues.

My flush function is just a call to the display driver, where the parameters given are area->x1, ->y1, ->x2, ->y2 and a uint8_t buffer. The display driver then initializes the area to be drawn and sends the entire buffer with DMA. lv_disp_flush_ready(disp_drv) is called immediately if the display driver fails for some reason, but normally it's done in the TxCpltCallback ISR. lv_tick_inc(1) is called in SysTick_Handler. I would say that if the content on the display is perfect, the flushing should be okay. Based on the logs there really have been no problems during flushing (no error prints from failing flushes).
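For clarity, here is the flush flow described above as a compilable sketch. The types and function names are stubs standing in for the LVGL and HAL symbols (they are not the real API), so only the control flow is being illustrated:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stubs standing in for the LVGL driver type and lv_disp_flush_ready() */
typedef struct { bool flushing; } fake_disp_drv_t;
static void flush_ready_stub(fake_disp_drv_t *drv) { drv->flushing = false; }

static fake_disp_drv_t g_drv;

/* Stand-in for the display driver's DMA kick-off; returns false on failure */
static bool dma_start(const uint8_t *buf, uint32_t len)
{
    (void)buf; (void)len;
    return true;
}

/* The pattern from the post: flush_cb starts the DMA and returns immediately;
 * flush-ready is signalled from the transfer-complete ISR, or right away
 * if the driver fails to start the transfer. */
static void my_flush_cb(fake_disp_drv_t *drv, const uint8_t *buf, uint32_t len)
{
    drv->flushing = true;
    if (!dma_start(buf, len)) {
        flush_ready_stub(drv);   /* driver failed: release LVGL immediately */
    }
    /* otherwise wait for the ISR below */
}

static void dma_tx_cplt_isr(void)    /* models TxCpltCallback */
{
    flush_ready_stub(&g_drv);
}
```

One thing worth double-checking in this pattern is that LVGL never starts rendering into a buffer while the DMA is still reading from it, i.e. that flush-ready is only ever signalled after the transfer has genuinely completed.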

lv_timer_handler_run_in_period(5) is called in my superloop, along with other stuff, but it’s visited at least every ms. DISP_REFR_PERIOD is 30

You had mentioned using 9.2, which is why I said you cannot do it the way you were doing it.

Can you share after which code removal does it work?

Basically, after removing lv_init() and all associated function calls, the program doesn't touch LVGL anymore and there are no issues. I have used primitive graphics libraries earlier to send images to the display and have successfully had animated graphics on it.

This is what I’ve tried now:

  • Remove my flush cb content and just inform LVGL that everything is done (i.e. remove the display-driver code and display communication from the equation). From LV_LOGS and the memory view in the debugger I can see LVGL updating the buffers constantly, but it again crashes sooner or later.
  • I suspect something (LVGL-related or not) is playing around with memory regions outside its scope, or my compiler is not doing the right thing with the library. This is of course not necessarily a library issue, but I'm not really asking much at the moment: blue background, white text on screen, and that's it. No updating graphics, nothing, and it still crashes somewhat randomly inside the loop (I say randomly because it depends on what has been adjusted between builds).

Depending on the allocated LV_MEM_SIZE, the application crashes on different occasions. I've managed to pinpoint that when you allocate e.g. 32 kB of memory, it crashes in the middle of the first display drawing routine, and LVGL actually asserts that memory has run out. But when you change this to 16 kB, everything works for some time. When it starts to allocate memory in tiles, it intermittently has a corrupted tile_cnt, which makes it allocate a ridiculous amount of memory. It seems the memory allocation is not really behaving logically. From lv_refr.c:

if(tile_cnt == 1) {
    refr_configured_layer(layer);
}
else {
    /* Don't draw to the layers buffer of the display but create smaller dummy layers which are using the
     * display's layer buffer. These will be the tiles. By using tiles it's more likely that there will
     * be independent areas for each draw unit. */
    lv_layer_t * tile_layers = lv_malloc(tile_cnt * sizeof(lv_layer_t));
    LV_ASSERT_MALLOC(tile_layers);
    if(tile_layers == NULL) {
        disp_refr->refreshed_area = *area_p;
        LV_PROFILER_REFR_END;
        return;
So it makes me wonder: is this library really reliable for anything other than full-screen update procedures? Is there a way to simplify the library's drawing procedure, even with the downside of it taking longer?

I was also facing a similar issue until I saw that some widget property was acquiring a lot of memory. In my case it was an esp32-wroom32-n8r8, with LVGL v9.2.2.

The single API was related to object scaling; when I removed it, everything worked fine. So yes, LVGL is a work in progress, but you can use it.

Also, in 9.2.2 there are 3 rendering modes, which might be useful for you. You can go with partial mode and avoid full-screen rendering.

I would also suggest you initialize everything with just a single buffer, don't draw anything, don't run any example, and check whether it stabilizes. Otherwise there might be something in your configuration settings that is causing these resets.