Frame buffers with indexed color?

Hi, I’m contemplating a project with LittlevGL and an STM32H750, which has the LTDC peripheral. One nice feature of the LTDC is indexed color using a look-up table: you can send arbitrary 24-bit colors to a TFT display but use only 8 (or even 4) bits per pixel, with 256 (or 16) full RGB values stored in the LUT. Would this feature work smoothly with LittlevGL? I’d like to pack two frame buffers into the 1MB internal SRAM with an 800x480 display. That isn’t really possible without a drastically reduced color mode like RGB332, or indexed color (by far the better solution).
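As a sanity check on that memory budget, here is a small sketch (assuming an 800x480 panel and treating the H750’s 1 MB of SRAM as one pool, which in practice is split across several domains):

```c
#include <stdint.h>

#define HOR_RES 800u
#define VER_RES 480u
#define SRAM_BYTES (1024u * 1024u) /* 1 MB, ignoring the H750's SRAM domain split */

/* Bytes needed for one full frame buffer at a given color depth. */
static uint32_t fb_bytes(uint32_t bits_per_pixel)
{
    return HOR_RES * VER_RES * bits_per_pixel / 8u;
}
```

Two RGB565 buffers need 2 × 768,000 = 1,536,000 bytes, which overflows 1 MB; two 8-bit indexed buffers need 768,000 bytes total and fit, and 4-bit indexed halves that again.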

I’m also wondering if there is already some code written to leverage the DMA2D engine in conjunction with the LTDC on STM32 parts. Preferably without bringing in the ST HAL code at all. If not, I guess I’ll be writing it :slight_smile:


there is already some code written to leverage the DMA2D engine in conjunction with the LTDC on STM32 parts…

Yes, check the STM32F746-Discovery repository for an example.

Preferably without bringing in the ST HAL code at all.

Unfortunately, no. The driver makes use of Cube, but the necessary files are provided in tree. Out of curiosity, is there a reason why you don’t want to use the HAL?

It’s not a strict requirement, but I’ve found that it usually works out better in the long run. IMO, ST’s code is almost as hard to use as just going direct to the registers, it adds unnecessary bloat, and if anything fails to work, debugging it is a nightmare. That said, if I needed a rapid prototype, I might use it. Anything longer term, no.

I can imagine two solutions, but both will be slow:

  • In your flush callback, convert the colors to indexes. Configure LittlevGL in 8-bit mode (LV_COLOR_DEPTH 8 in lv_conf.h) so the look-up is smaller. This is probably the faster and better solution.
  • Use the set_px_cb callback in the display driver.
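For the set_px_cb route, the callback’s core job is a nearest-color search against the palette. A minimal plain-C sketch of that search (the palette contents and its 4-entry size are made up for illustration; a real LTDC CLUT would hold 256 or 16 entries):

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } pal_rgb_t;

/* Hypothetical palette; a real CLUT would have 256 (or 16) entries. */
static const pal_rgb_t palette[] = {
    {0, 0, 0}, {255, 0, 0}, {0, 0, 255}, {255, 255, 255},
};

/* Return the index of the palette entry nearest to (r, g, b),
 * using squared Euclidean distance in RGB space. */
static uint8_t nearest_index(uint8_t r, uint8_t g, uint8_t b)
{
    uint32_t best_d = UINT32_MAX;
    uint8_t best_i = 0;
    for (uint8_t i = 0; i < sizeof(palette) / sizeof(palette[0]); i++) {
        int32_t dr = (int32_t)r - palette[i].r;
        int32_t dg = (int32_t)g - palette[i].g;
        int32_t db = (int32_t)b - palette[i].b;
        uint32_t d = (uint32_t)(dr * dr + dg * dg + db * db);
        if (d < best_d) { best_d = d; best_i = i; }
    }
    return best_i;
}
```

A linear scan per pixel is slow; if it matters, a precomputed RGB332-to-index table turns it into a single array lookup.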

I don’t completely understand what an lv_disp_buf_t is yet. Is this a buffer covering the entire screen? Or just some work areas used by lvgl?

If I use 8-bit color then there isn’t much point in going to indexed color, because the color space is already very constrained (e.g. 3/3/2). The real benefit of indexed color is having 8 bits per pixel represent everything you need. For example, my application is a voltmeter with large numerals on a solid background. The 256 colors can then be assigned to all of the in-between shades used for anti-aliasing the font, e.g. between blue and white for white numbers on a blue background.
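Such a palette can be generated rather than hand-picked. A sketch (names are hypothetical) that fills a 256-entry CLUT with linear blends from the background color to the foreground color, so every anti-aliased edge shade has an exact palette entry:

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } rgb_t;

/* Fill a 256-entry CLUT with linear blends from `bg` (index 0) to
 * `fg` (index 255), covering all in-between anti-aliasing shades. */
static void make_blend_lut(rgb_t lut[256], rgb_t bg, rgb_t fg)
{
    for (int i = 0; i < 256; i++) {
        lut[i].r = (uint8_t)((bg.r * (255 - i) + fg.r * i) / 255);
        lut[i].g = (uint8_t)((bg.g * (255 - i) + fg.g * i) / 255);
        lut[i].b = (uint8_t)((bg.b * (255 - i) + fg.b * i) / 255);
    }
}
```

With a blue background and white foreground, index 0 is pure blue, index 255 is pure white, and the font’s anti-aliased edges map onto the shades between.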

The indexed color LUT is a hardware feature, so at first I thought it should be completely transparent: Just use 8-bit mode and adjust the styles and the LUT so it looks the way I want. But then LittlevGL would not know how to anti-alias correctly.

Fortunately I don’t need any animations, and a frame rate of ~ 4-5 Hz is probably enough. The processor is a fast (400 MHz) Cortex M7 and power consumption is not an issue, so maybe I can get away with the slow method: 16 or 24-bit color in LittlevGL, then “fitting” to the closest color index in set_px_cb.

lv_disp_buf_t is a work area that can be sized to whatever you can accommodate (generally, the larger it is, the faster stuff can be drawn).

I’m not sure that LittlevGL supports a palette-based display at present. Internally we always work with regular RGB color.

You don’t need to use 8-bit colors as RGB332; you can use them as indexes instead. For example:

lv_color_t my_red;
my_red.full = 23;

This way LittlevGL will draw every “my_red” colored object with “color 23”.

In your flush_cb you can copy/use the buffer with the indexed color as you need.
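For example, the flush_cb body can reduce to a row-by-row copy of the indexed area into the frame buffer. A plain-C sketch with made-up names (on an H7, the per-row copy, or the whole rectangle, could instead be handed to DMA2D):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Copy a w*h block of 8-bit indexed pixels into a frame buffer of
 * width fb_w at position (x, y) -- the kind of copy a flush_cb does. */
static void blit_indexed(uint8_t *fb, int fb_w,
                         const uint8_t *src, int x, int y, int w, int h)
{
    for (int row = 0; row < h; row++) {
        memcpy(&fb[(size_t)(y + row) * fb_w + x],
               &src[(size_t)row * w], (size_t)w);
    }
}
```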

Keep in mind that with an indexed palette you can’t use opacity and anti-aliasing, because they try to mix the color channels (R, G, B) linearly.

Is it simple to override the color-mixing routine?
Purpose: to use the STM32 LTDC AL44 and AL88 pixel formats on memory-constrained systems.
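For reference, AL88 packs an 8-bit alpha above an 8-bit CLUT index, and AL44 packs a 4-bit alpha above a 4-bit index. A sketch of the packing (bit layout per my reading of the LTDC pixel-format definitions; worth double-checking against the reference manual):

```c
#include <stdint.h>

/* AL88: alpha in bits 15:8, CLUT index in bits 7:0 (2 bytes/pixel). */
static inline uint16_t al88_pack(uint8_t alpha, uint8_t index)
{
    return (uint16_t)(((uint16_t)alpha << 8) | index);
}

/* AL44: alpha in bits 7:4, CLUT index in bits 3:0 (1 byte/pixel). */
static inline uint8_t al44_pack(uint8_t alpha4, uint8_t index4)
{
    return (uint8_t)(((alpha4 & 0x0Fu) << 4) | (index4 & 0x0Fu));
}
```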

The main color mixing function is here:

It can be modified, but it’s not known how things will behave in this custom environment.
