In LVGL v9.3, we have the option to load SVG files and have them render into an image widget.
This works well (although I did run into issues with a few particular SVGs, for which I’ll open a separate topic).
However, currently this first reads the whole file into memory. Some of these files can be over 20KB, which is quite a lot.
Is there a configuration option to have it use a “streaming” option (e.g. read the parts it needs when it needs it)?
I noticed the same “issue” for the LodePNG renderer.
The answer to this is no, there really isn’t. This is because the image file needs to be decoded and converted into a raw RGB buffer, and there is no way to know exactly where in the encoded data a specific area of raw RGB data would need to be read from.
As an example: take an image that is 800 x 600 and stored as a PNG. As the rendering gets done, areas of that image need to be written to the frame buffer. The display might be 800 x 600, but the frame buffer is often only 1/10th of that size. So if an area starts at 200 x 100 with a width of 100 and a height of 200, there is no way to locate that specific area in the encoded data. The data needs to be loaded in its entirety and converted into RGB before that area can be written to the frame buffer.
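For reference, this is roughly what that small-frame-buffer setup looks like with LVGL v9’s display API (a minimal sketch assuming an 800 x 600 display and a partial-render draw buffer covering 1/10th of it; `display_init` and `my_flush_cb` are placeholder names for your own init and driver flush functions):

```c
#include "lvgl.h"

#define HOR_RES   800
#define VER_RES   600
#define BUF_LINES (VER_RES / 10)   /* draw buffer covers 1/10th of the screen */

static uint8_t draw_buf[HOR_RES * BUF_LINES * (LV_COLOR_DEPTH / 8)];

/* Driver-specific flush; here just a stub that marks flushing as finished. */
static void my_flush_cb(lv_display_t * disp, const lv_area_t * area, uint8_t * px_map)
{
    LV_UNUSED(area);
    LV_UNUSED(px_map);
    /* ...push the rendered pixels of 'area' to the panel here... */
    lv_display_flush_ready(disp);
}

void display_init(void)
{
    lv_display_t * disp = lv_display_create(HOR_RES, VER_RES);

    /* In PARTIAL mode LVGL renders dirty areas chunk by chunk into this small
     * buffer, but any decoded image data that overlaps a chunk still has to be
     * available in RAM when that chunk is drawn. */
    lv_display_set_buffers(disp, draw_buf, NULL, sizeof(draw_buf),
                           LV_DISPLAY_RENDER_MODE_PARTIAL);
    lv_display_set_flush_cb(disp, my_flush_cb);
}
```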
There is a choice to be made: either load and decode the whole file every single time a piece of the data is needed, or load and decode it once and hold the decoded data in memory so it can be accessed as many times as needed. The first way is a huge performance hit; it would take forever to refresh the display. The second way is much faster by comparison, but it comes at the cost of additional memory use.
SVG is a nice format as far as file size goes, but in the end a decoded SVG uses exactly the same amount of memory as a decoded PNG. It doesn’t matter if it’s an 800 x 600 PNG, an 800 x 600 JPG, or an SVG being rendered at 800 x 600: the decoded data will use exactly the same amount of RAM, which is 800 * 600 * 3 bytes.
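To put a number on that: 800 * 600 * 3 = 1,440,000 bytes, so roughly 1.4 MB of RAM for one full-screen decoded image (or 800 * 600 * 2 = 960,000 bytes if it is decoded straight to RGB565).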
Now, if you want to reduce the amount of memory used, you can decrease the image cache size so LVGL does not hold decoded data in RAM, and then tile the PNG into multiple files, create an lv_image for each tile, and stitch them together to form a single rendered image. That way, if a small area of the image needs to be updated, instead of LVGL loading the full image only to get a small section of it, it only loads a much smaller tile to perform the update. There is a performance hit for doing that, but it is roughly the whole-image load time divided by x, where x is the number of tiles you have cut the image into. This is an approximation; the load times would be slightly longer because of the overhead of opening several files instead of one. But the amount of data that has to be decoded each time is far less, and that is where the time savings comes into play.
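A rough sketch of what that tiling could look like (the tile size, grid layout, and file names here are made up for illustration; it assumes the tiles live on a registered ‘A:’ file-system drive and that the image cache has been kept small, e.g. via LV_CACHE_DEF_SIZE in lv_conf.h):

```c
#include "lvgl.h"

/* Hypothetical 800 x 600 image cut into a 4 x 3 grid of 200 x 200 PNG tiles
 * named tile_<row>_<col>.png, stored on a registered 'A:' file-system drive. */
#define TILE_W    200
#define TILE_H    200
#define TILE_COLS 4
#define TILE_ROWS 3

void create_tiled_image(lv_obj_t * parent)
{
    for(int r = 0; r < TILE_ROWS; r++) {
        for(int c = 0; c < TILE_COLS; c++) {
            char path[48];
            lv_snprintf(path, sizeof(path), "A:tiles/tile_%d_%d.png", r, c);

            /* Each tile is its own lv_image, so only the tiles that intersect
             * an invalidated area have to be decoded for that update. */
            lv_obj_t * img = lv_image_create(parent);
            lv_image_set_src(img, path);   /* the path string is copied by LVGL */
            lv_obj_set_pos(img, c * TILE_W, r * TILE_H);
        }
    }
}
```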
I think you did not fully understand my question/remark. Of course the rendered data will always be the same size, as this is just whatever the display driver needs (in our case RGB565).
My main problem is that, every time the SVG is rendered, it first loads the entire file in memory.
Given that it is XML, it could have used a “streaming” approach instead (or as an alternative compile option), where it loads only the parts it needs for the parser to interpret the next bit of the file and keeps track of the relevant data.
That way the amount of data reserved for processing can be more deterministic.
Of course, in the current approach, the tree is first built up by the parser, then the renderer transforms the tree into a list of render objects, and finally these render objects are rendered one by one.
If you wanted to do streaming, you would indeed have to do this throughout the whole chain (i.e. effectively calling the render methods as you parse the document).
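To make the idea concrete, something along these lines (only a sketch: the lv_fs_* calls are LVGL’s real file API, but `svg_stream_parser_feed` and the element callback are hypothetical stand-ins for a SAX-style parser that would emit render work as it goes):

```c
#include "lvgl.h"

/* Hypothetical SAX-style callback: invoked for every SVG element as soon as
 * the parser has seen it, so it can be turned into a render object (or drawn)
 * immediately instead of waiting for the full DOM tree. */
typedef void (*svg_element_cb_t)(const char * tag, const char * attrs, void * user_data);

/* Hypothetical incremental parser entry point; it keeps only its own parse
 * state between calls. Not part of LVGL. */
void svg_stream_parser_feed(const uint8_t * data, uint32_t len,
                            svg_element_cb_t cb, void * user_data);

#define CHUNK_SIZE 512  /* only this many bytes of the SVG are resident at once */

lv_fs_res_t parse_svg_streaming(const char * path, svg_element_cb_t cb, void * user_data)
{
    lv_fs_file_t f;
    lv_fs_res_t res = lv_fs_open(&f, path, LV_FS_MODE_RD);
    if(res != LV_FS_RES_OK) return res;

    uint8_t chunk[CHUNK_SIZE];
    uint32_t bytes_read = 0;

    do {
        res = lv_fs_read(&f, chunk, sizeof(chunk), &bytes_read);
        if(res != LV_FS_RES_OK) break;

        svg_stream_parser_feed(chunk, bytes_read, cb, user_data);
    } while(bytes_read == sizeof(chunk));

    lv_fs_close(&f);
    return res;
}
```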
Given the structure of PNGs, I can see why that would be more difficult to do in a streaming manner.