Indexed colors on STM32

I managed to get LVGL working on a Nucleo-F767ZI with an 800x480 display, using only internal SRAM and 8-bit indexed colors. DMA2D, even when not used to its full extent, seems to be the best choice for buffer transfers. Although the reference manual (RM0410) does not list indexed color as a supported output format, as long as no pixel format conversion or blending is involved, bytes are just bytes, and with correct width/offset adjustments it works.
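To illustrate the width/offset trick: four 8-bit indexed pixels are presented to DMA2D as one ARGB8888 pixel. A minimal sketch of that arithmetic (the function and struct names are mine, not from any ST or LVGL API; in a real driver the results would go into DMA2D's NLR and OOR registers):

```c
#include <stdbool.h>
#include <stdint.h>

/* Parameters for feeding an 8-bit area to DMA2D as if it were ARGB8888.
 * 'w' is the area width and 'stride' the full frame-buffer line length,
 * both in 8-bit pixels (= bytes). */
typedef struct {
    uint32_t pixels_per_line; /* NLR.PL: 8-bit pixels packed 4 per 32-bit word */
    uint32_t output_offset;   /* OOR: "pixels" to skip at the end of each line */
} dma2d_l8_as_argb_t;

static bool dma2d_l8_as_argb(uint32_t w, uint32_t stride,
                             dma2d_l8_as_argb_t *out)
{
    /* Both the area width and the buffer stride must be multiples of 4,
     * otherwise the byte reinterpretation falls apart. */
    if ((w % 4u) != 0u || (stride % 4u) != 0u || w > stride) return false;
    out->pixels_per_line = w / 4u;
    out->output_offset   = (stride - w) / 4u;
    return true;
}
```

For a full-width flush on an 800-pixel display this gives 200 ARGB "pixels" per line and zero offset; a 100-pixel-wide partial area gives 25 per line with a 175-word offset.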

  1. To be sure my approach works correctly, I need a guarantee that the width of the screen area being flushed to the display buffer is always a multiple of four, so that a line of lv_color_t (uint8_t in my case) can be treated as a line of uint32_t (ARGB) pixels. Is that doable?

  2. How can I completely disable anti-aliasing? In lv_conf.h I have LV_ANTIALIAS set to zero, but fonts are still anti-aliased.

Please advise.

Regarding point 1:

I faced a similar problem while adding support for another accelerator. My understanding is that LVGL doesn't use the concept of buffer width and stride, which hardware accelerators often require. I'm trying to figure out how to implement this in a generic way, with the smallest possible modification to LVGL.

Please correct me if stride is already doable in LVGL. It seems to me that adding a stride feature would require updating many places, from the online image converter all the way down to the buffer-handling algorithms.

Created a topic here also: HW acceleration: buffer stride and transform list

Hi,

You can use the rounder_cb of the display driver. See here: https://docs.lvgl.io/latest/en/html/porting/display.html#display-driver
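A sketch of such a callback for the multiple-of-four case: round x1 down and x2 up to 4-pixel boundaries, so the flushed width is always divisible by 4. The lv_area_t below is a minimal local stand-in so the snippet compiles on its own; in a real driver you would include lvgl.h, keep the lv_disp_drv_t parameter the v7-style callback takes, and assign the function to disp_drv.rounder_cb. It assumes the horizontal resolution is itself a multiple of four (800 is), so rounding up never leaves the screen.

```c
#include <stdint.h>

/* Stand-in for LVGL's lv_area_t (coordinates are inclusive). */
typedef struct { int16_t x1; int16_t y1; int16_t x2; int16_t y2; } lv_area_t;

/* Expand the area so that (x2 - x1 + 1) is a multiple of 4.
 * The extra pixels are simply re-flushed, which is harmless. */
static void rounder_cb(lv_area_t *area)
{
    area->x1 &= (int16_t)~0x3; /* round the start down to a 4-px boundary */
    area->x2 |= 0x3;           /* round the inclusive end coordinate up   */
}
```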

Did you try it with the built-in fonts?
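For what it's worth, the built-in fonts are generated with 4 bits per pixel, so they stay anti-aliased regardless of LV_ANTIALIAS, which only affects shape drawing. To get truly aliased text you would regenerate the font with the online font converter at 1 bpp and declare it in lv_conf.h, roughly like this (my_font_16_1bpp is a hypothetical name for such a converted font):

```c
/* lv_conf.h fragment: declare a custom font generated at 1 bpp
 * (no anti-aliasing) with the LVGL online font converter. */
#define LV_FONT_CUSTOM_DECLARE  LV_FONT_DECLARE(my_font_16_1bpp)
```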