Unable to display any picture with a decoder

Hello again,
I’ve followed the instructions over there:

Simply put, it doesn’t work: absolutely nothing is displayed.
The only difference from the code given there is that I’ve split it so that pictures are loaded in one function and displayed in another. This code isn’t completely refined yet; as you can see, I’m leaking a lot of pointers. I know this, and I intend to take care of it once I manage to simply make something work.

You may notice things that seem weird, such as using decode32 for a picture that’s actually 16 bits while I multiply (h * w) by 2 (2 × 8 bits = 16 bits per pixel, right?), but I have also tried with a 32-bit picture and the matching size multiplier (4), and it didn’t work either.

Here is my temporary code:

t_image *png_to_array(char *filename)
{
	uint32_t error;
	unsigned char *png_data;
	size_t png_data_size;

	error = lodepng_load_file(&png_data, &png_data_size, filename);
	if(error)
	{
		printf("LOADING IMAGE: ERROR!\n");
		printf("error %u: %s\n", error, lodepng_error_text(error));
		return NULL;
	}

	t_image *img = malloc(sizeof(t_image));

	error = lodepng_decode32(&img->png_decoded, &img->png_width, &img->png_height, png_data, png_data_size);
	if(error)
	{
		printf("error %u: %s\n", error, lodepng_error_text(error));
		return NULL;
	}
	printf("Creating picture %s, w:%u x h:%u\n", filename, img->png_width, img->png_height);
	return img;
}

void display_png(t_image *img)
{
	lv_img_dsc_t *png_dsc;
	png_dsc = malloc(sizeof(lv_img_dsc_t));
	png_dsc->header.always_zero = 0;
	png_dsc->header.cf = LV_IMG_CF_TRUE_COLOR;
	png_dsc->header.w = img->png_width;
	png_dsc->header.h = img->png_height;
	png_dsc->data_size = img->png_width * img->png_height * 2;
	png_dsc->data = img->png_decoded;

	lv_obj_t * img_obj = lv_img_create(lv_scr_act(), NULL);
	lv_img_set_src(img_obj, &png_dsc);
}

For now I just run a line at startup, after initializing everything:

display_png(png_to_array("file.png"));

The file exists and is a 16-bit png. Transparency does not matter; I can convert it to anything else.
Later on, if this works, I would like to load all my pictures into an array of pictures with “png_to_array” and display them with “display_png” whenever I want.

The use of pointers is mainly to make sure I don’t lose any variable from one function to another, as for now I’m unsure of how it all works; later on I want to be able to store them and free them.

What am I doing wrong?

Interesting. I assume that you don’t see any errors being printed?

You could try enabling logging in lv_conf.h, if it exists in your version.
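If your lv_conf.h has the log module, the relevant switches look something like this (names from the v7-era config; treat this as a sketch, as they may differ in your version):

```c
/* lv_conf.h - logging section (v7-era names; may differ in your version) */
#define LV_USE_LOG      1                   /* enable the log module */
#define LV_LOG_LEVEL    LV_LOG_LEVEL_TRACE  /* log everything, including traces */
#define LV_LOG_PRINTF   1                   /* route log output straight to printf */
```

With LV_LOG_PRINTF enabled you shouldn’t need to register a print callback yourself.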

I’ll try logging later on. I’m gonna fix up a few things first. I’ll update this thread if anything comes up.

I haven’t looked into logging just yet, but I edited the display function like this:

void display_png(t_image *img)
{
	lv_img_dsc_t *png_dsc;
	png_dsc = malloc(sizeof(lv_img_dsc_t));
	png_dsc->header.always_zero = 0;
	png_dsc->header.cf = LV_IMG_CF_TRUE_COLOR;
	png_dsc->header.w = img->png_width;
	png_dsc->header.h = img->png_height;
	png_dsc->data_size = img->png_width * img->png_height * 4;
	png_dsc->data = img->png_decoded;

	lv_obj_t * img_obj = lv_img_create(lv_scr_act(), NULL);
	lv_img_set_src(img_obj, png_dsc);
}

As a result I obtain this:

Instead of this:

menu

So, huh, that’s weird, but that’s some progress I guess?

What’s LV_COLOR_DEPTH set to? It and the color depth of the decoded image should match.

16 bit. There’s no decode16 function in lodepng it seems …

I have looked further into the lodepng source. I don’t think decode32 and decode24 are related to the image being 32 or 24 bits - they both call the same underlying function, just with different color-type constants, defined as follows in the header:

LCT_RGBA = 6, /*RGB with alpha: 8,16 bit*/
LCT_RGB = 2, /*RGB: 8,16 bit*/

decode32 calls the underlying function with RGBA and decode24 calls it with RGB, meaning these are probably related to transparency more than anything else (according to the comments).

Either way, I have once again set the size to height * width * 2; I decode the .png file and store it into an array with decode24, and the picture is a 16-bit png. lvgl is set to work in 16 bit - I will not go any higher, as the screen I am using does not allow it.

I get a different result, closer to what I want but still corrupt:

For further reference, here is lodepng, the “library” used to decode PNG files and store them into an array (more of a single source file with a header, really).

Updates are slow, as I work on multiple things at the same time. Since it seemed that multiple bits of the same picture were mismatched along the Y axis, I assumed the library got the screen width or something similar wrong. I tried replacing various values taken from the decoded file with other values, changing for instance the height and width of the picture. Setting them to half the screen’s size, I obtained a picture that was indeed half the screen size, yet still deformed in the same way. As for changing the picture size in bits - nothing seemed to change, whatever value I put.

However, I started observing different things as I tried different color formats.
I have strictly no idea what these do. I don’t know what differentiates them, or whether the program reads the data differently when they are used. What I do know is that it completely changed everything.

I set the color format to: LV_IMG_CF_TRUE_COLOR_ALPHA

And suddenly I have this:

Everything is still broken. But now pixels seem to be in the right place?
Maybe my PNG picture had transparency even if it does not use it? I don’t get it.

What color format am I supposed to use? Or is there a way I should edit my pictures so that they fit an already existing color format?

I am greatly ashamed to say that I realized PNG files cannot be 16 bit unless they are grayscale PNGs.
I will work on an alternative tomorrow.

EDIT: I converted it all to 8 BIT png.
Same results. From this point I don’t know what to do.

Well, sorry for cluttering this thread up. I still haven’t found a fix, to be honest. I’ve changed the file and tried many other things, most of which I’ve forgotten at this point.

I kept using TRUE_COLOR because it’s always what got me the best results (pictured above) but… my lvgl is set to 16 bit.
So considering these png pictures are 8 bit, if I’m not mistaken I should be using LV_IMG_CF_INDEXED_8BIT. Except I don’t get a result when I do this. I get “no data”.

I have also tried setting my lv_conf to 8 bit, but that destroys the rest of my menus, making them all blue (these are all pictures stored in arrays in C files that will never change, no matter the user’s configuration).

Any reason why lvgl would refuse to display my current 8 BIT pictures?
Here is how an image is loaded at the moment with my current system:

t_image png_to_array(char *filename)
{
  uint32_t error;

  t_image img;
  img.name = strdup(filename);
  error = lodepng_decode_file(&img.png_decoded, &img.png_width, &img.png_height, filename, LCT_RGB, 8);
  if(error)
  {
      printf("error %u: %s\n", error, lodepng_error_text(error));
      return img;
  }
  printf("Creating picture %s, w:%u x h:%u\n", filename, img.png_width, img.png_height);
  img.png_dsc = malloc(sizeof(lv_img_dsc_t));
  img.png_dsc->header.always_zero = 0;
  img.png_dsc->header.cf = LV_IMG_CF_INDEXED_8BIT;
  img.png_dsc->header.w = img.png_width;
  img.png_dsc->header.h = img.png_height;
  img.png_dsc->data_size = img.png_width * img.png_height * 1;
  img.png_dsc->data = img.png_decoded;
  return img;
}

(also there are no logging options in lv_conf.h)