Hello, I have an LVGL bitmap image stored in a .bin file. The pixel format is [r|g|b][a], with [r|g|b] coded in 2 bytes as 5-6-5 color format. I am sure of this, because I've plotted the image with a little program I coded myself in C#. Now, I know that the image has width=164 and height=128, and I know that the first 4 bytes should be the file header, which is, in hexadecimal format:
[0x05|0x90|0x02|0x10]
After a brief analysis of some other images of other sizes, I noticed that the first byte (0x05) always stays the same for true_color_alpha images, and is 0x04 for true_color-only images.
Now, I know that width and height can each be at most 2048, so 12 bits for width and 12 bits for height would be enough. But I cannot find the values 164 and 128 anywhere in the remaining 3 bytes [0x90|0x02|0x10]. I've tried reversing the bits in every possible way, but nothing!
Where is the information about the image size stored?
Can you help me?
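For reference, here is a minimal C# sketch of the 3-bytes-per-pixel layout I am describing (the helper name is just illustrative, and I am assuming the two RGB565 bytes are stored least significant byte first; a 565-swapped target would need them reversed):

// Packs one pixel as [RGB565 low byte | RGB565 high byte | alpha].
// Takes 8-bit r/g/b/a inputs; the 5-6-5 packing truncates the low bits.
static void WritePixel565A(byte[] dst, int offset, byte r, byte g, byte b, byte a)
{
    ushort rgb565 = (ushort)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    dst[offset + 0] = (byte)(rgb565 & 0xFF);        // low byte first (assumed little-endian)
    dst[offset + 1] = (byte)((rgb565 >> 8) & 0xFF); // high byte
    dst[offset + 2] = a;                            // alpha byte
}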
===== EDIT ======
The bin image files that I create at the moment are made with the "lv_img_conv" open source script, which is written in TypeScript and very slow; I need to create the same bin image files directly in C#. I tried to debug the "lv_img_conv" source code with Visual Studio Code, but just getting the debugging sessions to start is taking me too much time.
For bin files, width and height aren't stored in the file data. Your job is to place that info in code…
Or rename a BMP file and use it as a bin. The better question is: what do you use bin files for?
I need to create bin files straight from bitmaps in memory, bypassing the lv_img_conv library. I want to do it in C# because it's much faster. We need bin files for an external device that reads them through LVGL. Now that you mention it, we also have a configuration file that contains the width and height information of the image. But then, what should I write in the first 4 bytes of the header of the bin file to make it identical to the one generated by lv_img_conv?
The first 12 bytes of an LVGL BIN image map directly onto an lv_image_header_t
structure. There may or may not be an additional 12 bytes after that, depending on whether compression is used.
typedef struct {
uint32_t magic: 8; /**< Magic number. Must be LV_IMAGE_HEADER_MAGIC*/
uint32_t cf : 8; /**< Color format: See `lv_color_format_t`*/
uint32_t flags: 16; /**< Image flags, see `lv_image_flags_t`*/
uint32_t w: 16; /**< Width of the image in pixels*/
uint32_t h: 16; /**< Height of the image in pixels*/
uint32_t stride: 16; /**< Number of bytes in a row*/
uint32_t reserved_2: 16; /**< Reserved to be used later*/
} lv_image_header_t;
So the first byte is going to be 0x19 for LVGL 9.x.x, and the next byte is going to be one of the LV_COLOR_FORMAT_* enum items. The next 2 bytes are for the compression and premultiplied-alpha flags. Then it's 2 bytes for the width, 2 bytes for the height, and 2 bytes for the stride. The last 2 bytes are 0x00.
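A minimal C# sketch of writing that 12-byte v9 header, assuming little-endian packing of each field (the helper is illustrative, not an LVGL API):

// Builds the 12-byte LVGL 9 image header described above.
// Byte layout: magic(1) | cf(1) | flags(2) | w(2) | h(2) | stride(2) | reserved(2)
static byte[] BuildV9Header(byte colorFormat, ushort width, ushort height, ushort stride, ushort flags = 0)
{
    var header = new byte[12];
    header[0] = 0x19;                  // LV_IMAGE_HEADER_MAGIC for LVGL 9.x.x
    header[1] = colorFormat;           // one of the LV_COLOR_FORMAT_* values
    header[2] = (byte)(flags & 0xFF);  // image flags, little-endian
    header[3] = (byte)(flags >> 8);
    header[4] = (byte)(width & 0xFF);
    header[5] = (byte)(width >> 8);
    header[6] = (byte)(height & 0xFF);
    header[7] = (byte)(height >> 8);
    header[8] = (byte)(stride & 0xFF); // number of bytes in a row
    header[9] = (byte)(stride >> 8);
    header[10] = 0x00;                 // reserved_2
    header[11] = 0x00;
    return header;
}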
Thank you for your answer, but this is not my case. A bin image file is exactly:
(width x height x 3 bytes/pixel) + 4 extra bytes.
Those 12 extra bytes aren't there.
The parameters for the conversion from png to bin are:
CF_TRUE_COLOR_ALPHA , binary-format 565
Other ideas?
Simply read and understand lvgl/src/widgets/lv_img.c at v8.3.11 · lvgl/lvgl or lvgl/src/widgets/image/lv_image.c at master · lvgl/lvgl
Here:
lv_result_t res = lv_image_decoder_get_info(src, &header);
in
/*=====================
* Setter functions
*====================*/
void lv_img_set_src(lv_obj_t * obj, const void * src)
Well, here is the image converter for LVGL. This is specifically the place that creates the header information in the binary file.
As you can see, it is 12 bytes long and it drops directly into an lv_image_header_t
structure. I am not making this stuff up, that's for sure.
The question is about the official v8 converter.
That is not stated in the question, nor in the title of the topic. That is an assumption.
Hello, I eventually decided to walk through the source code of the lv_img_conv library and found the solution. I'll write it here in case somebody else needs to do the same thing. The code that computes the 32-bit header is in the convert.ts file, in the Converter::convert() function ($lv_cf is 4 or 5 depending on the color format: 4 for CF_TRUE_COLOR, 5 for CF_TRUE_COLOR_ALPHA, matching the first bytes I observed):
// cf in the low bits, width in bits 10..20, height in bits 21..31
var $header_32bit = ($lv_cf | (this.w << 10) | (this.h << 21)) >>> 0;
var finalBinary = new Uint8Array(this.d_out.length + 4);
// the header is stored little-endian, least significant byte first
finalBinary[0] = ($header_32bit & 0xFF);
finalBinary[1] = ($header_32bit & 0xFF00) >> 8;
finalBinary[2] = ($header_32bit & 0xFF0000) >> 16;
finalBinary[3] = ($header_32bit & 0xFF000000) >> 24;
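For anyone who wants the same computation in C#, here is a minimal sketch of my own port of the snippet above (the helper name is mine, not from the library):

// Computes the 4-byte header that lv_img_conv prepends (LVGL v8 style):
// cf in the low bits, width at bit 10, height at bit 21, stored little-endian.
static byte[] BuildV8Header(uint cf, uint width, uint height)
{
    uint header = cf | (width << 10) | (height << 21);
    return new byte[]
    {
        (byte)(header & 0xFF),
        (byte)((header >> 8) & 0xFF),
        (byte)((header >> 16) & 0xFF),
        (byte)((header >> 24) & 0xFF),
    };
}

Sanity check with my image: cf=5 (true_color_alpha), w=164, h=128 gives 5 | (164 << 10) | (128 << 21) = 0x10029005, which stored least significant byte first is exactly [0x05|0x90|0x02|0x10], the header I posted at the top.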
I am not aware of the source code that loads the bin image on the device; maybe they use an older version of LVGL, or maybe they use another version altogether (LittlevGL, I've read in the comments). Thank you very much for your interest, however.