Looking for advice: replicating a Canvas2D web application with LVGL

we’re planning to port a Leaflet.js application to LVGL

what library should we use for graphics?

functionally the closest match we’d be looking for is Cairo

what is the status of ThorVG integration? is this usable/reliable yet?

thanks in advance

Michael

I think I can help you with this. The best line of attack might actually be to use MicroPython. I say this because of its ease of use.

I have poked around in your program. I have 29 files (3610 lines) converted to Python code. I still have to add in the LVGL bits but it should be fairly easy to do from there. For any heavy-lifting math functions we can use the MicroPython viper code emitter, which compiles the code to machine code so it runs about as fast as C code.
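For reference, the viper emitter is applied by decorating a type-annotated function. This is a minimal sketch; the fallback decorator is only there so the same file also runs under desktop CPython, and the function itself is just an illustrative hot-loop candidate:

```python
try:
    # MicroPython: compile the decorated function to native machine code
    from micropython import viper
except ImportError:
    # CPython fallback: plain no-op decorator, same semantics
    def viper(f):
        return f

@viper
def dot3(ax: int, ay: int, az: int, bx: int, by: int, bz: int) -> int:
    # integer dot product -- the kind of inner-loop math worth compiling
    return ax * bx + ay * by + az * bz

print(dot3(1, 2, 3, 4, 5, 6))  # → 32
```

The type annotations are what let viper emit machine integers instead of Python objects.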

Using MicroPython also allows for rapid development; you won’t have to keep on compiling and flashing firmware. Once the library is done it can be frozen into the firmware and flashed a single time. From there the user can upload their script files to the MCU and they will be off and running.

thanks for thinking with us!

Micropython is an option I had not thought about, and it would probably be faster to get a first pass going and the overall structure worked out

For decoding MVTs we need protobuf in one shape or another; need to check what is available in the MicroPython universe

NB the eventual target can’t be assumed to run MicroPython, but transliterating a working Py PoC down to C/C++ is straightforward

by now we think ThorVG is indeed the best route, so we depend on the integration of that into LVGL9, and then Micropython making use of that API - that’s a lot of moving parts

I’m not deep into LVGL MicroPython but my understanding was the Py APIs are autogenerated from the LVGL C API, right? that would take some delay out of the 8/9 migration pains

I was planning to use lv_port_pc_eclipse and start from the vector_graphics example there

I think the map painting work per se does not require working on an embedded target so that can happen on Linux/MacOS ff, only afterwards squeezing it down to run on embedded hardware

that said, I’ll give it a try even if just to be able to report what the state of affairs is

thanks!

Michael

Tell me if I am wrong about how this thing works. But it looks like you are using a downloaded PNG for the map and then rendering the markers on top of that.

If that is what is going on then you do not even need to worry about using vector graphics at all. When you zoom into the maps do the markers change size or dimension at all? I am pretty sure they don’t.

Make custom widgets for the different markers where LVGL does the rendering… For icons and things of that nature you would use PNG files for those. labels or text boxes that have a little pointer coming out of them kind of like a conversation balloon would be easy to render in LVGL. Those items would get rendered on top of the PNG image.

MicroPython has its own API that forms the bridge between Python and C code. The build process for MicroPython reads the header files for LVGL and uses that information to write the bridge code that exposes the LVGL objects (functions, structures, unions, enumerations, etc…) to the Python side of things. That is all it does. It makes it a lot easier from a maintenance standpoint to keep the binding up to snuff with the most recent changes to LVGL.

The current binding for MicroPython is designed so MicroPython has a dependency on the binding code. While this works, it makes it more difficult to keep pace with new version releases of MicroPython. I have been working on a version where MicroPython is instead a dependency of the binding code. The two are decoupled, so you can use any version of MicroPython you like starting from version 1.19. It also supports I8080, RGB and SPI for the data connection between the display and the MCU, while the other only supports SPI. The version I am working on also supports hot plugging of displays. What that means is you have the ability to change the display to something different without having to compile and flash new firmware; the display doesn’t even have to use the same type of data connection for it to work.

I am not quite finished with this next thing, but when I am done with it you will be able to develop on a desktop, and the same code that you are running on the desktop will also work on an MCU without you having to change anything in the source files. You send the source files over to the MCU using WiFi, BT or USB. This becomes really handy when devices are in the field, because a software update to your library can be done OTA without any wired connection and without having to compile and flash firmware. It is literally a simple file transfer: a socket connection if using WiFi, or a serial connection if using Bluetooth. A user would update their code in a similar manner.

Some MCUs have NV memory available. That’s a fantastic place to store startup-critical information like pin definitions, what display/touch interface is being used, and the resolution to display in. User settings can be stored there as well, but they don’t need to be. The portion of the flash storage that is not used by MicroPython ends up becoming a partition/filesystem. It is accessible using the standard methods of reading from the filesystem that CPython uses. You can store any kind of file you like. PNG files… no problem. TTF files… no issues there either.
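A minimal sketch of that startup-config idea, assuming a JSON file on the flash filesystem; the file name, keys, and pin numbers here are made up for illustration:

```python
import json

# hypothetical startup config: pin definitions and display selection,
# persisted on the flash filesystem that MicroPython exposes
CONFIG_PATH = "boot_config.json"  # on an MCU this lives on the flash partition

DEFAULTS = {"display": "ili9341", "bus": "spi",
            "pins": {"sck": 18, "mosi": 23, "dc": 2}}

def load_config(path=CONFIG_PATH):
    # on first boot the file does not exist yet -> fall back to defaults
    try:
        with open(path) as f:
            return json.load(f)
    except OSError:
        return dict(DEFAULTS)

def save_config(cfg, path=CONFIG_PATH):
    with open(path, "w") as f:
        json.dump(cfg, f)

cfg = load_config()
cfg["display"] = "st7789"   # user re-selects the display at runtime
save_config(cfg)
print(load_config()["display"])  # → st7789
```

The same `open`/`json` calls work unchanged under CPython and MicroPython, which is the point being made above.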


we are attempting to do real vector maps (prototype decoder with nanopb), that’s why I am looking for vector graphics support - not sure yet though if we are pushing the envelope of what is doable in cycles and memory usage with MVTs

so using Micropython would require bindings for both nanopb and ThorVG

looks like nanopb is not available for Micropython so I’d have to look for another protobuf decoder

raster maps are straightforward, and using protomaps as-is is definitely doable - that would obviate the need to invent yet another scheme to transmogrify raster maps into some xyz file tree on an SD card with a gazillion files; drop in a single stock pmtiles file, done

btw mbtiles works as well on an esp32 but pmtiles is just easier to use, faster and does not require full-blown sqlite3 support
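For context, the xyz file tree, mbtiles and pmtiles all address tiles by the same z/x/y scheme; this is the standard Web Mercator slippy-tile formula for finding which tile covers a given point:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    # standard Web Mercator slippy-tile math: which z/x/y tile covers a point
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return zoom, x, y

# Munich city centre at zoom 12
print(latlon_to_tile(48.137, 11.575, 12))  # → (12, 2179, 1421)
```

The same triple is what gets looked up in a pmtiles or mbtiles container instead of a directory tree.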

for the other use case - elevation model lookup - I needed lossless compression and looked into webp

I found webp to be significantly faster to decode, with smaller file sizes compared to PNGs, and the webp code built as-is on the ESP32s - I really recommend adopting webp as a compression option

have you come across LovyanGFX ?

it does board and display autodetection for quite a range of displays, at least in the Espressif universe, but some ARM platforms as well. It works nicely - and faster - for me as a replacement for TFT_eSPI under LVGL

it is a tad rough on documentation and examples but worth a look

it’s also the upstream for the M5Stack M5Unified/M5GFX libraries

Yes I have, and it will not allow hot plugging because it is configured at compile time, not at runtime. You have to define macros for the display you want to use. The firmware gets etched in stone when using that.

In the version of the binding I have written, the display drivers are separated from the bus drivers. The bus drivers are written in C and get compiled into MicroPython. The display drivers are written in Python, so you can change them any time the program is running, and you can use different busses any time the program is running. Anything that needs to be done with speed is done in C code, things like swapping the bytes around for each pixel when using RGB565 and SPI. That takes a bit of time, so it is best to squeeze out every bit of performance there because it does impact the frame rate. My version also uses less flash space because it doesn’t expose large chunks of an MCU’s SDK to the Python side of things, most of which never gets used. Too much additional code gets compiled when it is not needed.
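The byte swap mentioned above, sketched in pure Python; on an MCU this is exactly the part you would push down into C or viper for frame-rate reasons:

```python
def swap_rgb565_bytes(buf: bytearray) -> bytearray:
    # swap the two bytes of every RGB565 pixel in place:
    # little-endian framebuffer -> the big-endian byte order
    # that many SPI panels expect
    for i in range(0, len(buf), 2):
        buf[i], buf[i + 1] = buf[i + 1], buf[i]
    return buf

# two pixels: 0xF800 (red) and 0x07E0 (green), little-endian in memory
fb = bytearray([0x00, 0xF8, 0xE0, 0x07])
print(swap_rgb565_bytes(fb).hex())  # → f80007e0
```

A per-pixel Python loop like this is far too slow for a live framebuffer, which is why the binding keeps it in C.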

As for any special libraries you want to run that are written in C code: so long as they are written in C99 and not C++ or newer versions of the C syntax, I can get them to run inside of MicroPython pretty easily. The code generation script used for LVGL works for other C libraries as well. Even if the bridge code needs to be written manually, so long as it’s not a giant-sized API it would be pretty simple to do.

All I am doing right now is directly porting your app to Python. I am not changing anything about how it works or functions; I am just doing a direct port of the javascript code. I do understand that the code will not work as-is because there is no DOM and things like that. I can mimic the DOM very easily by using an XML parser. Depending on how large the XML data is, we can opt for a lightweight one that I have written for Python or something a little more robust that is available and written in C. Duplicating the DOM functions and properties that you need is pretty straightforward to do.

I had actually written such a thing for Python for use with flask: it allowed a user to code in Python using the javascript DOM API. I did some pretty cool manipulation of flask to build a document tree in a virtual space. So when a user would, say, click on a button, an event would bubble to the server side over a websocket connection, and the server would generate all of the HTML code and the path to that generated code. The webpage would only reside in memory, and the path to the generated html document was completely random, using GUIDs. Once the page loaded, the path to the page would get scrapped, along with the HTML source for the page. What I did was essentially create the ability to do server-side processing of javascript instead of it being client side. This allows only a boilerplate set of client-side code to be sent to the client, removing the need to transfer a crapload of additional javascript code to cover all of the different browser types and things like that.
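A DOM shim of the kind described above can be sketched on top of the stdlib ElementTree; the class and method names here are illustrative, not from any existing binding:

```python
import xml.etree.ElementTree as ET

class MiniDOM:
    # thin DOM-flavoured wrapper over ElementTree -- a sketch, not a full DOM
    def __init__(self, markup):
        self.root = ET.fromstring(markup)

    def get_element_by_id(self, el_id):
        # XPath attribute predicate mirrors document.getElementById
        return self.root.find(".//*[@id='%s']" % el_id)

    def get_elements_by_tag_name(self, tag):
        return self.root.findall(".//" + tag)

doc = MiniDOM("<body><div id='map'><span>hello</span></div></body>")
print(doc.get_element_by_id("map").tag)           # → div
print(len(doc.get_elements_by_tag_name("span")))  # → 1
```

A real shim would also need events and mutation, but lookups like these cover a lot of what ported browser code touches first.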

Also, having the code generated at time of use, with no static files at all and randomly generated paths, means there is no real way to hack it, because the paths don’t exist. If a computer tried to connect to a path that didn’t exist, the IP would be banned. Because all of the javascript processing was server side, all that got sent over was mouse movements and mouse clicks. The server would figure out what actually got clicked on and handle it in the manner it needed to be handled.

This is where I am at.

still to do
48 files
3957 lines of code

done
37 files
4269 lines of code

I wish there was a javascript to python code generator that was worth a spit. I have found one that actually works pretty decently, but it is limited to 10 runs a day of up to 4000 characters per run unless you pay for it. It’s not terribly expensive if the yearly plan is used, and it supports a bunch of languages. I have to check how well it works with other languages. It might be worth paying for if it does as well as the javascript to python does.

Whatever you decide you want to use to handle the image processing end of things it can be done.

OH!!! I did want to let you know that if you do decide to go the MicroPython route you won’t have to worry about having the javascript version anymore.

Look at this.
https://sim.lvgl.io/v7/micropython/ports/javascript/bundle_out/index.html

That is MicroPython with LVGL running over HTML

It becomes an all in one solution for ya.

So, as an example of what I am talking about when I say runtime vs compile time: take that link I gave you for running MicroPython in a webpage. There is a code editor. If you edit the code and press the run button, only the code in that script gets reloaded; the MicroPython firmware doesn’t need to be recompiled. By placing the display drivers in Python code, it is just a matter of uploading a different driver as a plain text file to the MCU. Or, if it is known that different displays might be attached, there could be a user setting to select the display being used (holding a button down for a length of time, then pressing it a specific number of times to select the display), and when the unit powers on, the Python source file that has the driver for that display would be loaded.
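The runtime-driver idea can be sketched in plain CPython; the driver file name and its contents are stand-ins, but the mechanism, a display driver as an importable plain-text file chosen at boot, is the same:

```python
import importlib
import os
import sys
import tempfile

# stand-in "display driver" written as plain Python source, the way a driver
# file would sit on the MCU's filesystem (name and contents are illustrative)
driver_src = (
    "WIDTH = 320\n"
    "HEIGHT = 240\n"
    "def init(bus):\n"
    "    return 'ili9341 up on %r' % (bus,)\n"
)

tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "driver_ili9341.py"), "w") as f:
    f.write(driver_src)

sys.path.insert(0, tmpdir)

# "hot plugging": pick a driver module by name at runtime -- swapping displays
# means uploading a different .py file, not recompiling and reflashing firmware
drv = importlib.import_module("driver_ili9341")
print(drv.WIDTH, drv.HEIGHT)  # → 320 240
```

On the MCU the module name would come from the stored user setting rather than being hard-coded.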


Here is the nanopb interface so it will work in MicroPython

mp_pb.c.txt (77.1 KB)

easy peasy


whoa whoa, that’s something I got to try! I did let Petteri know, he might be interested for his nanopb project

I guess I need to dig more into the MicroPython building business

so far I held back a bit due to uneven support for WiFi and BLE, but I guess I need to give it more energy

thanks!

Michael

you’re talking about Leaflet.js, right? that’s quite an offer…

the actual goal is to replicate in LVGL the combination of Leaflet with the protomaps-leaflet plugin as outlined here (and raster maps as well as a byproduct)

The code in Leaflet is actually pretty simple. It’s not that complex. With some code style changes it would be easy to write something that would read and parse the js code.

But…

Now think about this. If leaflet could run on a microcontroller using LVGL, and it could load maps from say an SDCard if there was no WiFi available, or download maps in advance and store them on an SDCard, that would be pretty cool, yes?

Now what if the exact same application that runs on an MCU could also be used on a website? You could still have the javascript API to interact with it and control what it does. The javascript API could be written to attach to the console of the application, to pass commands into it and also to collect information from the application.

The core program would be identical whether it is running on an MCU or on a website. Only one code base to maintain. That’s what MicroPython with LVGL can give you.

You just saw how long it took me to download nanopb, look over the header files to see if there would be any issue with them being read by the code generator in the binding, and run the script to generate the code. 15 minutes. And I had to use the bathroom during that time as well. LOL…

A lot of MCUs have some kind of connectivity built into them, and if it’s not built in, there are other ways to get the data to the software: USB or SDCard. Your map data is PNG images, so it’s nothing too off the wall as far as processing goes. Vector maps would be better because zooming would be cleaner and also faster to do.

ok, slept over it

The MicroPython route is interesting, in particular as it might make experimenting with map styling a lot easier

my problem at hand is to create an MVT decoder and display a map on a bitmap with acceptable style, and that I’ll try with the ThorVG extension and plan C/C++ for now

everything else will come in a second step - too many unknown territories for my paygrade :wink:

are you aware of any slippy-tiles LVGL widget somewhere onto which that decoder could be retrofitted?

I’ll try to get the hang of MP in parallel

If I build MP today, is that at LVGL/master, and does it include ThorVG?

Michael

OK, you need to ease up on some of the acronyms. Too much google searching!!! LOL

The current design of the MicroPython binding will not read C++ code. ThorVG, being newly added, doesn’t have an LVGL C API to access its bits and pieces. That portion would need to be hand written, unfortunately. I don’t believe it would be too difficult to create C wrapper functions to handle accessing the different components in ThorVG.

I can understand the use of ThorVG for the maps themselves, but the pieces added by leaflet I would imagine do not have to be rendered using ThorVG. Those pieces could be rendered by LVGL, no?

I have not really seen the full gamut of what leaflet does, nor do I fully understand how it works. From looking at the code it looks like you are rendering on top of a png image (the map) and adjusting the locations of those renderings when the map is moved or zoomed in on. Am I incorrect in my assessment of what is happening?

I’m not good at JS/HTML ff
I understand the Canvas2D API paints on a bitmap which can be slipped under a window; moving beyond a map tile’s range causes the neighbouring tiles to be requested
that’s where my Leaflet fu ends

-m

It sounds like you are on par with where I am.

A nice thing about LVGL is it has screen capture ability built in. So when something like a map tile gets processed by ThorVG et al. and then passed into an image object in LVGL, you can take a screenshot of what that image object is rendering. The data will be in the form of a single-dimension uint8_t array filled with raw RGB data. That can be passed into an attached PNG encoder or bitmap encoder to produce the data you are wanting.
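That raw-RGB-to-PNG step can be sketched with nothing but the stdlib; this is a minimal, unoptimized encoder (no scanline filtering beyond the mandatory filter byte), written for illustration rather than taken from any LVGL binding:

```python
import struct
import zlib

def rgb888_to_png(buf, width, height):
    # wrap raw row-major RGB888 bytes (e.g. an LVGL snapshot) into a PNG stream
    def chunk(tag, data):
        body = tag + data
        return (struct.pack(">I", len(data)) + body
                + struct.pack(">I", zlib.crc32(body)))

    # PNG requires a filter byte (0 = no filter) in front of every scanline
    raw = b"".join(
        b"\x00" + buf[y * width * 3:(y + 1) * width * 3] for y in range(height)
    )
    # IHDR: width, height, 8 bits/channel, color type 2 (truecolor RGB)
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)
    return (b"\x89PNG\r\n\x1a\n"
            + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw))
            + chunk(b"IEND", b""))

# a 2x2 test image: red, green / blue, white
pixels = bytes([255, 0, 0, 0, 255, 0, 0, 0, 255, 255, 255, 255])
png = rgb888_to_png(pixels, 2, 2)
print(png[:4])  # → b'\x89PNG'
```

A real deployment would more likely hand the buffer to lodepng or similar on the C side, but this shows there is no magic in the container format itself.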

I am not that strong in HTML/JS but I do enough to get myself into trouble that’s for sure.