Special pixel formats on limited but capable HW

Dear Fellows and Creators,

Recently, I have been using LVGL and got it up and running, which was a totally nice experience. :+1:
Since I am working with very limited HW, I started with the 8-bit color depth config.
Once I had it running, I noticed that the alpha-blending of text and colors was not correct.

So I implemented/updated the macros, code, and docs along the way, following the LVGL coding rules.
The goal I wanted to achieve was to implement another color depth: 6-bit color (+ 2-bit alpha).

LVGL at 8-bit color depth supports RGB332, but my frame buffer actually needs ARGB2222.

Since color depths and pixel data formats are at the core of all pixel manipulation, I would like to discuss this with the developers/maintainers first.
After some feedback, I would be confident enough to make a pull request.

  1. Is it OK to call this color depth 6-bit?
  2. For a start, the base struct:
typedef union {
    struct {
        uint8_t blue : 2;   /* 2-bit blue channel  */
        uint8_t green : 2;  /* 2-bit green channel */
        uint8_t red : 2;    /* 2-bit red channel; 2 bits of the byte stay unused */
    } ch;
    uint8_t full;           /* the whole pixel as one byte */
} lv_color6_t;
typedef lv_color6_t lv_color8_argb2222_t;
  3. When I add the alpha field to the struct, we are back at 8 bits and I get a collision with the existing color depth 8 config (see the sketch after this list). How would you handle this kind of pixel data format if it were a feature request?

  4. Maybe it's a v9-discussion thing, to have custom pixel data formats?
    (I don't know, I am still too fresh to have confident opinions.)
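
For illustration (a sketch only, with a placeholder type name, not an existing LVGL type), this is roughly what the union looks like once the 2-bit alpha field is added - it is 8 bits wide again, which is exactly where it collides with the RGB332 config:

typedef union {
    struct {
        uint8_t blue : 2;
        uint8_t green : 2;
        uint8_t red : 2;
        uint8_t alpha : 2;  /* the added 2-bit alpha field */
    } ch;
    uint8_t full;
} lv_color_argb2222_t;      /* placeholder name */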

(Since LVGL does not handle YUV and other video formats, maybe 3-bit and 6-bit are the last formats LVGL could support, and then it would be complete in terms of pixel format handling? :blush: )

With further experimentation, I will see whether to replace the SW drawing with IC-supported drawing.
For now, color and text are rendered properly when I "hardcode" the 2 alpha bits to 100% opacity (bits 7:6 = 0b11) before the BLIT - too dirty, I guess?
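
In case it helps the discussion, the "hardcode" step is just ORing 0xC0 into every pixel of the frame buffer; a minimal sketch (the function and variable names are placeholders of mine):

#include <stdint.h>
#include <stddef.h>

/* Force the 2 alpha bits (bits 7:6) of every ARGB2222 pixel to full
 * opacity before the BLIT - the quick-and-dirty workaround above. */
static void force_opaque_argb2222(uint8_t * fb, size_t px_cnt)
{
    for(size_t i = 0; i < px_cnt; i++) {
        fb[i] |= 0xC0;   /* 0b11xxxxxx -> alpha = 100% */
    }
}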

I think the work that had to be done to support another pixel data format was not problematic.
However, I haven't found any written rules or a how-to for supporting different formats, so that such work could be accepted in the mainline.

I hope somebody can help navigate me towards the right conclusion(s).

Thank you for your focus!

Cheers
Ziga

Hi,

we are already working on v9, and one of the issues we are about to solve is being more flexible about color format support. Some work is already done in this regard. You can specify a convert_cb which can be used to convert the rendered image to any format. I suggest using RGB565 (16 bit) and converting it to ARGB2222. If you set disp_drv->screen_transp = 1, LVGL will render all pixels with an extra A8 alpha byte, so finally you get 3 bytes/pixel, which can be converted to ARGB2222.
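
For reference, a minimal sketch of the per-pixel math this implies (assuming 2 bytes of little-endian RGB565 followed by 1 byte of A8 per pixel, and keeping the top 2 bits of each channel; the names below are placeholders, not the actual convert_cb signature):

#include <stdint.h>
#include <stddef.h>

/* One RGB565 pixel + one A8 alpha byte -> one ARGB2222 byte
 * (alpha in bits 7:6, red in 5:4, green in 3:2, blue in 1:0). */
static inline uint8_t rgb565_a8_to_argb2222(uint16_t c, uint8_t a)
{
    uint8_t r = (c >> 14) & 0x03;   /* top 2 of the 5 red bits   */
    uint8_t g = (c >> 9)  & 0x03;   /* top 2 of the 6 green bits */
    uint8_t b = (c >> 3)  & 0x03;   /* top 2 of the 5 blue bits  */
    return (uint8_t)(((a >> 6) << 6) | (r << 4) | (g << 2) | b);
}

/* Convert a rendered area: src is 3 bytes/pixel, dst is 1 byte/pixel. */
static void convert_to_argb2222(uint8_t * dst, const uint8_t * src, size_t px_cnt)
{
    for(size_t i = 0; i < px_cnt; i++) {
        uint16_t c = (uint16_t)(src[0] | (src[1] << 8));
        dst[i] = rgb565_a8_to_argb2222(c, src[2]);
        src += 3;
    }
}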

Is it something you can use?

Dear Gabor,

Thank you for pointing out this idea.
I will move to the master branch and experiment with convert_cb.

It should be a quick way to solve the problem, but in the long run, and performance-wise,
I will have to support the drawing API of the specific chip.
At the moment draw/sw consumes too many resources on the MCU, and the work can be offloaded to the GPU.

The GPU has alpha-blending support, and an 8-bit format is the best way to go for me.
Memory-wise, ARGB2222 is also quite handy.
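
(For a rough sense of the savings - assuming, say, a 320 x 240 display: a full frame buffer is 320 * 240 * 1 = 76,800 bytes in ARGB2222, versus 320 * 240 * 3 = 230,400 bytes for the RGB565 + A8 intermediate.)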

Firstly, I have to play more to have more :slight_smile:

Cheerz
Ziga
