Figuring out display buffer size

Description

Hi, I have a display with a 240x240 resolution; what would be the most efficient way to define the rendering buffers?
The documentation says to use buffers of 1/10 the screen size, but what does that actually mean? If my resolution is 240x240, should the draw buffer be (240 * 240) / 10 = 5760 pixels?

So buf1 = 5760 * sizeof(lv_color_t) bytes and buf2 = 5760 * sizeof(lv_color_t) bytes?

Code to reproduce

#include <assert.h>
#include "esp_heap_caps.h"
#include "lvgl.h"

#define DISP_BUF_SIZE (240 * 240 / 10) /* 1/10 of the 240x240 screen, in pixels */

lv_color_t *buf1 = heap_caps_malloc(DISP_BUF_SIZE * sizeof(lv_color_t), MALLOC_CAP_SPIRAM);
assert(buf1 != NULL);

/* Use double buffering when not working with monochrome displays */
lv_color_t *buf2 = heap_caps_malloc(DISP_BUF_SIZE * sizeof(lv_color_t), MALLOC_CAP_SPIRAM);
assert(buf2 != NULL);
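
For completeness, here is a minimal sketch of how those two buffers would be handed to LVGL, assuming the v8 API; my_flush_cb and display_init are hypothetical names, and the callback body is a placeholder for whatever pushes pixels to your panel driver:

static lv_disp_draw_buf_t draw_buf;
static lv_disp_drv_t disp_drv;

/* Hypothetical flush callback: send the rendered area to the display,
   then tell LVGL the buffer may be reused. */
static void my_flush_cb(lv_disp_drv_t *drv, const lv_area_t *area, lv_color_t *color_p)
{
    /* ... transfer color_p for `area` over SPI/parallel here ... */
    lv_disp_flush_ready(drv);
}

void display_init(void)
{
    /* buf1/buf2 are the PSRAM buffers allocated above; the size is in pixels */
    lv_disp_draw_buf_init(&draw_buf, buf1, buf2, DISP_BUF_SIZE);

    lv_disp_drv_init(&disp_drv);
    disp_drv.hor_res = 240;
    disp_drv.ver_res = 240;
    disp_drv.flush_cb = my_flush_cb;
    disp_drv.draw_buf = &draw_buf;
    lv_disp_drv_register(&disp_drv);
}

One caveat: on some ESP32 variants DMA cannot read from external SPIRAM, so if your flush callback uses DMA you may need MALLOC_CAP_DMA internal buffers instead.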

That is correct. However, don't be afraid to experiment. If you have memory restrictions and FPS is not super critical, you are free to use less memory. One thing with LVGL is that you also need memory for object data (the pool sized by LV_MEM_SIZE), so you have to find a balance there. Personally, in my use case with a 320x240 display, I use two buffers of 320 * 5 * sizeof(lv_color_t) bytes each and I am satisfied with the FPS. So, as usual, your mileage may vary.
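
For reference, a minimal sketch of that smaller-buffer setup (assuming LVGL v8; the names and the choice of static internal-RAM buffers are illustrative):

#define SMALL_BUF_PIXELS (320 * 5) /* 5 lines of a 320 px wide display */

static lv_color_t small_buf1[SMALL_BUF_PIXELS];
static lv_color_t small_buf2[SMALL_BUF_PIXELS];
static lv_disp_draw_buf_t small_draw_buf;

void small_buffers_init(void)
{
    /* Far below the 1/10-screen guideline (320 * 240 / 10 = 7680 px),
       so LVGL renders each frame in more, smaller chunks. */
    lv_disp_draw_buf_init(&small_draw_buf, small_buf1, small_buf2, SMALL_BUF_PIXELS);
}

The RAM this frees can go toward LV_MEM_SIZE in lv_conf.h, which sizes the separate pool LVGL uses for widgets and object data.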
