How to add new colour format support for BGRX or BGRA

Hi, I’ve just started working with LVGL in a Linux environment. The framebuffer pixel format is BGRX, while the existing LVGL support appears to be for XRGB and ARGB formats. I initially tried using a for loop to manually swap the R and B bytes, but the performance is quite slow. Could someone please guide me on how to add proper support for the BGRX format in an optimized way, without lag?

This is actually a fairly easy thing to do with simple bit shifting…

void reorder_color_bytes(uint8_t *buf, uint16_t width, uint16_t height)
{
    uint32_t *buf32 = (uint32_t *)buf;
    /* cast before multiplying so e.g. 800 * 480 doesn't overflow a uint16_t */
    uint32_t num_pixels = (uint32_t)width * (uint32_t)height;

    while (num_pixels) {
        /* full byte reverse: 0xAARRGGBB -> 0xBBGGRRAA */
        buf32[0] = (((buf32[0] << 24) & 0xFF000000) |
                    ((buf32[0] <<  8) & 0x00FF0000) |
                    ((buf32[0] >>  8) & 0x0000FF00) |
                    ((buf32[0] >> 24) & 0x000000FF));
        num_pixels--;
        buf32++;
    }
}

What you should do with this is create a second thread that LVGL and your application run in. That thread passes a pointer to the buffer that has just been rendered over to the main thread, which reorders the bytes and sets the buffer to be displayed. Once that is done, you can call the `lv_display_flush_ready` function. This allows LVGL to render into the other buffer while the first buffer is being reordered and set to be displayed. By the time the reorder is complete and the first buffer has been set, the second buffer will be ready to be reordered and set, so you pass that buffer to the main thread and it does the same thing.

Thank you for your support. I attempted byte swapping, but it significantly reduced the device’s responsiveness. I’m looking for alternative solutions that maintain performance without causing lag.

Hey,

Adding a new color format to LVGL’s software renderer is quite modular.

See these as examples: lvgl/src/draw/sw/blend at master · lvgl/lvgl · GitHub

The only problem is that if we add e.g. BGRX, we need to add it as a destination color format and add support for a lot of source image formats as well (that is, to blend RGB565, RGB888, L8, etc. to BGRX). And then BGRX needs to be added as a source format for all destination color formats.

However, in lv_draw_sw_blend_to_rgb888.c we might find a clever way to avoid reimplementing the whole thing and instead just set some indices to tell which channel is which. E.g. here indices 0, 1, 2 can be variables like src_red_index:
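A rough illustration of that index idea. The names (`channel_layout_t`, `copy_rgb888`) are made up for the sketch and are not LVGL's; the real blend code lives in lv_draw_sw_blend_to_rgb888.c:

```c
#include <stdint.h>

/* Instead of hard-coding dest[0] = red, dest[1] = green, dest[2] = blue,
 * make the channel positions variables, so the same loop can emit
 * RGB888, BGR888, BGRX, etc. */
typedef struct {
    uint8_t red_index;
    uint8_t green_index;
    uint8_t blue_index;
} channel_layout_t;

static void copy_rgb888(uint8_t *dest, const uint8_t *src_rgb,
                        uint32_t px_cnt, uint32_t dest_px_size,
                        channel_layout_t layout)
{
    for (uint32_t i = 0; i < px_cnt; i++) {
        dest[layout.red_index]   = src_rgb[0];
        dest[layout.green_index] = src_rgb[1];
        dest[layout.blue_index]  = src_rgb[2];
        dest    += dest_px_size;  /* 3 for RGB888, 4 for BGRX */
        src_rgb += 3;
    }
}
```

For a BGRX destination the layout would be `{ .red_index = 2, .green_index = 1, .blue_index = 0 }` with `dest_px_size = 4`; the inner loop itself never changes.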

What do you think?

Thank you for your valuable inputs, @kisvegabor.

My requirement is to support only RGBA format images (either 32-bit or 24-bit), so there’s no need to handle other color formats.

To support the BGRX format, should I add a separate implementation file or modify the relevant section you pointed out?

Since I’m new to LVGL, I’d appreciate your recommendation on the best approach to achieve this without affecting rendering performance.

Unfortunately, to cover all cases, 8 formats need to be supported as both destination and source color formats: XRGB, XBGR, RGBX, BGRX, and all of them with A instead of X.

However, fortunately, the current color order has worked in 99% of the cases so far. In the remaining 1%, everything was fine except that B and R needed to be swapped.

However, that’s not the case for you, as you need BGRX instead of XRGB. :exploding_head:
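For the "only B and R swapped" case, the fix-up can be much cheaper than a full byte reverse, since G and the X/A byte stay in place. A sketch (not LVGL code):

```c
#include <stdint.h>

/* Swap the R and B channels of a 32-bit 0xXXRRGGBB pixel in place,
 * leaving the G and X/A bytes untouched: 0xXXRRGGBB -> 0xXXBBGGRR. */
static inline uint32_t swap_red_blue(uint32_t px)
{
    return (px & 0xFF00FF00u)            /* keep X/A and G */
         | ((px & 0x00FF0000u) >> 16)    /* move R down to byte 0 */
         | ((px & 0x000000FFu) << 16);   /* move B up to byte 2 */
}
```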

What is this system?

This is going to sound crazy, but what about reordering the lv_color_t structure? Kind of like this…

#ifdef BGRX
typedef struct {
    uint8_t blue;
    uint8_t green;
    uint8_t red;
    uint8_t alpha;
} lv_color_t;
#else
typedef struct {
    uint8_t alpha;
    uint8_t red;
    uint8_t green;
    uint8_t blue;
} lv_color_t;
#endif

It doesn’t resolve the issue of image blending where the channels are written in a specific order already.

The user wants to have ARGB for inputs from PNG, JPG, etc and have LVGL write the frame buffer as BGRA (or possibly BGR) at the time the rendering is done.

It would be nice to be able to set the output byte order of a pixel at runtime, but checking a condition per pixel would really slow things down, and hoisting the check out of the loop would increase the code size considerably. It would have to be a compile-time option in order to keep the binary size down and the performance up.
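To make the compile-time idea concrete, here is one way it could look, assuming a hypothetical `PIXEL_ORDER_BGRX` configuration macro (nothing here is an existing LVGL option): the preprocessor picks the packing once, so the per-pixel loop contains no branches.

```c
#include <stdint.h>

/* Hypothetical compile-time switch: 1 = pack as 0xXXRRGGBB (bytes
 * B,G,R,X in memory on little-endian), 0 = pack as 0xXXBBGGRR. */
#define PIXEL_ORDER_BGRX 1

#if PIXEL_ORDER_BGRX
#define PACK_PIXEL(r, g, b) \
    (((uint32_t)(r) << 16) | ((uint32_t)(g) << 8) | (uint32_t)(b))
#else
#define PACK_PIXEL(r, g, b) \
    (((uint32_t)(b) << 16) | ((uint32_t)(g) << 8) | (uint32_t)(r))
#endif

/* The rendering loop uses PACK_PIXEL unconditionally; the branch was
 * resolved by the preprocessor, so there is no per-pixel cost. */
static void fill_row(uint32_t *row, uint32_t n,
                     uint8_t r, uint8_t g, uint8_t b)
{
    uint32_t px = PACK_PIXEL(r, g, b);
    while (n--) *row++ = px;
}
```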

I believe that LVGL assumes that if the color format is set to RGB888, both input and output should be RGB888. I am pretty sure that if I set a display’s color format to RGB565 and then create a canvas widget, it will not let me use ARGB8888; I MUST set the canvas widget to RGB565 as well. The display color format should only control the frame buffer format, and when data is copied from any other buffer to the frame buffer, a conversion should take place if needed. When rendering directly to the frame buffer, that rendering should use the color format that has been set on the display.

I am not 100% sure on how LVGL handles the color format with respect to what is set to lv_display_t and what is able to be used elsewhere for things like the canvas widget.

The system operates in a Linux environment and uses a 32-bit framebuffer with a BGRX color format.

I see. Implementing these new color formats is not on our roadmap now, so we can do two things:

  • it can be implemented by the community (we will review and help)
  • we implement it as a service.

If the service-way is an option for you, please contact me via lvgl@lvgl.io