I store my image resources in SPI flash, and I want to scale and rotate them.
Going by the source code, I can do this with a C file (e.g. test_img.c)
whose image source type is LV_IMG_SRC_VARIABLE.
So I wonder if there is a better way to achieve this?
If you are using C files created using the image converter, and your SPI flash is directly addressable, I suggest modifying this line and adding a section attribute to place the bitmap data into SPI flash. This is the easiest solution but it only works if SPI flash can be addressed the same way as normal flash/RAM.
If flash is not directly addressable, you would have to:
1. Manually store the bitmap data in SPI flash using whatever tool you have available.
2. When you want to display an image, read the bitmap data from SPI flash into RAM.
3. Fabricate this structure and set the data member to point to wherever the bitmap data is in RAM.
Thanks for your reply. I know how to read data from flash into RAM directly, but many of my images are large and there is not enough RAM to hold a complete image, so I wonder if it can be processed in parts.
The image decoder is designed to handle external image formats like PNG or JPEG, but it should also work with a built-in image. As you can see in the source code, it will only allocate a line-sized buffer in RAM when drawing in line-by-line mode.
There will definitely be a performance penalty to doing it this way because the image can only be flushed one line at a time, but it should cost you less RAM.
Unfortunately, zoom and rotate require the whole image. The reason: imagine that LVGL draws the pixel at x=10, y=20 of the image. The image is rotated and zoomed, so for destination (10, 20) the source pixel might be e.g. (87, 33). But if only the y = 20 line is available, LVGL can't get (87, 33).
A corner case is rotate = 90°, where lines become columns. When LVGL draws the pixels of a line (x, x+1, x+2, …), a different source line will be required for each pixel (y, y+1, y+2, …).
The problem is that it works the opposite way: LVGL does NOT read a (random) part, transform it, and draw it where required; instead it takes an area to draw, gets the required image pixels, transforms them, and draws them into that area.
To make this possible, a 3rd option would need to be added here:
Besides native (RGB) and fully buffered non-native images, there could be an option to get the required pixels on demand. However, I’m afraid it’d be very slow especially for large images.