Sometimes using the True Color format isn't viable because the C array ends up way bigger than the actual image, while 8-bit indexed seems to be missing some colors but is almost 10x smaller than True Color RGB. So maybe a CF_INDEXED_16_BIT color format would be useful?
Hi,
I don’t get it.
So the problem is that RGB565 doesn't have enough colors, so an INDEXED_16BIT format using a 64k palette of 32-bit colors could be a solution?
I don't know much about how these color formats work, but here is what happens to me: converting the image to CF_TRUE_COLOR makes the C array too big to be used in my case, converting the same image to CF_RAW doesn't show anything on the display, and CF_INDEXED_8_BIT works but the image looks like it's missing part of the color range.
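For reference, this is roughly what the converter output looks like and how the chosen format drives the array size (a minimal sketch, assuming an LVGL v8 build with a 16-bit color depth; `my_img_map` and `my_img` are placeholder names, and the pixel data itself is elided):

```c
#include "lvgl.h"

/* Pixel data emitted by the image converter (placeholder contents).
 * Approximate data size for a W x H image on a 16-bit build:
 *   LV_IMG_CF_TRUE_COLOR       -> W * H * 2 bytes
 *   LV_IMG_CF_TRUE_COLOR_ALPHA -> W * H * 3 bytes
 *   LV_IMG_CF_INDEXED_8BIT     -> W * H * 1 byte + 256 * 4 bytes of palette
 */
static const uint8_t my_img_map[] = { 0x00 /* ...converter output... */ };

static const lv_img_dsc_t my_img = {
    .header.always_zero = 0,
    .header.w  = 320,
    .header.h  = 240,
    .header.cf = LV_IMG_CF_INDEXED_8BIT,   /* or LV_IMG_CF_TRUE_COLOR */
    .data_size = sizeof(my_img_map),
    .data      = my_img_map,
};

void show_image(void)
{
    /* Display the converted image on the active screen */
    lv_obj_t * img = lv_img_create(lv_scr_act());
    lv_img_set_src(img, &my_img);
}
```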
What is the color depth of your display?
16 bit RGB 565
In this case there aren't too many options between 8-bit indexed and 16-bit RGB. You can try to enable dithering for the indexed image.
Hmmm, even with dithering the gradients and the smoother parts of the image don't blend very well. This is the image I was testing with, in case you want to try it or have a config in mind that could help me.
I’ve tried out the image and I think the problem is that the 16 bit display simply hasn’t got enough colors.
There might be special dithering algorithms for 24 bit to 16 bit but it’s not that common.
Hello
As Kisvegabor said, the issue here is that there simply are not enough colors on an RGB565 display to show an image with such subtle differences in color; dithering also only helps to a certain extent. Your best bet is picking a simpler image or reducing the colors yourself using image editing software.
Interesting, so the problem is more about the display's colors, that makes sense. @kisvegabor you mentioned 24-bit dithering, but LVGL doesn't have 24-bit support, so would it work?
24-to-16 bit dithering means that a smart algorithm takes the original 24-bit image and converts it to 16 bit. Where the 16-bit color resolution is not enough it adds ±1, so on average your eye will see the 24-bit color.
For example, let's say you needed red = 101 somewhere, but on 16 bit you can only have 100 and 105. So the algorithm will use 100, 100, 105, 100, 100, which is 101 on average.
(These are just made up numbers to illustrate the idea)
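To illustrate the idea in code (just a sketch of 1D error-diffusion dithering, not LVGL's or any converter's actual implementation), this quantizes an 8-bit red channel down to RGB565's 5 bits while carrying the rounding error to the next pixel, so neighbouring pixels average back to the original value:

```c
#include <stdint.h>
#include <stddef.h>

/* Quantize one row of 8-bit red values down to 5 bits (RGB565's red width),
 * pushing the rounding error onto the next pixel so that neighbouring pixels
 * average out to the original 8-bit value. */
static void dither_red_row(const uint8_t *src, uint8_t *dst, size_t len)
{
    int err = 0;                          /* error carried from the previous pixel */
    for(size_t i = 0; i < len; i++) {
        int val = src[i] + err;           /* original value plus carried error */
        if(val < 0)   val = 0;
        if(val > 255) val = 255;
        int q5   = val >> 3;              /* quantize: keep the top 5 bits */
        int back = (q5 << 3) | (q5 >> 2); /* expand 5 bits back to 8 bits */
        err = val - back;                 /* remember what was lost */
        dst[i] = (uint8_t)q5;             /* 5-bit value ready to pack into RGB565 */
    }
}
```

A flat area that "wants" to be 101 then comes out as a mix of the two nearest representable levels, which the eye blends back together.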
-
Where did you get the info that 24 bit isn't supported?
-
Trying smooth one-tone colour gradients on RGB565 is a waste of time.
In the lv_conf.h file you need to choose between 8/16/32 bits.
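For reference, a minimal sketch of that setting in lv_conf.h on LVGL v8 (assuming a 16-bit RGB565 display):

```c
/* lv_conf.h: color depth of the rendering.
 * Possible values in v8: 1, 8, 16, 32 — there is no 24-bit option. */
#define LV_COLOR_DEPTH 16

/* Swap the 2 bytes of an RGB565 color if the display expects it
 * (needed by some SPI panels). */
#define LV_COLOR_16_SWAP 0
```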
FYI, LVGL v9 will support RGB888