Thoughts on MicroPython support

I want to share some work I’ve been doing on MicroPython support. I take a different approach from the official MicroPython binding: I hand-craft a more Python-friendly API instead of generating code from the C API. Porting the whole API is still a work in progress, but I have a few examples written.

Here is the first “getting started” example from LVGL in 2 lines!

lvgl.display.screen.bg_color = 0x003a57
lvgl.Label(text="Hello world", text_color=0xffffff, align=lvgl.ALIGN_CENTER)
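
For comparison, here is roughly what the same thing looks like with the current auto-generated binding (a sketch from memory of the v9 examples, so the exact calls may differ slightly):

import lvgl as lv

scr = lv.screen_active()
scr.set_style_bg_color(lv.color_hex(0x003a57), 0)

label = lv.label(scr)
label.set_text("Hello world")
label.set_style_text_color(lv.color_hex(0xffffff), 0)
label.align(lv.ALIGN.CENTER, 0, 0)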

You can check out some other examples here, and download it and try it out for yourself if you’re interested.

I’m really interested in your feedback. Do you like this approach? Would you use it? Thanks.

Where is the code for what you are doing? Showing just the examples doesn’t help us understand the mechanics you are using, or whether it will be a better solution than what is currently being used.

Well, as I see it, this will be a HUGE task…

  1. First of all, implementing/wrapping the entire LVGL API as an MP module that works with MicroPython, and also exposing enums, structs, etc. to MicroPython code (see the rough count sketch after this list)…
  2. Second: LVGL is continuously changing: function names, their parameters, enums, and sometimes the architecture itself (the last big change was using draw_buf “everywhere”), so manually maintaining all of these changes in the C files in your repo will also be a continuous task.
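
To give a feel for the scale of step 1, here is a rough host-side count of the public prototypes that would each need a hand-written wrapper (the lvgl/src path is an assumption about where the checkout lives, and the regex is only approximate):

import re
import pathlib

# Count lv_* function prototypes across the LVGL headers (very rough).
proto = re.compile(r"^\s*[\w\s\*]+\blv_\w+\s*\(", re.MULTILINE)
count = sum(len(proto.findall(p.read_text(errors="ignore")))
            for p in pathlib.Path("lvgl/src").rglob("*.h"))
print(count, "lv_* prototypes that would each need a hand-written wrapper")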

That’s why the original lv_binding_micropython repo uses auto-generated code.

So I think that without some kind of automated scripts / auto-generated code, all the above-mentioned steps will be difficult and time-consuming to do manually… that’s my opinion, sorry.
Even on other programming platforms we use auto-generated code; it makes life easier and saves a lot of developer time.
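
For a sense of how such generation works, here is a toy sketch only (nothing like the real gen_mpy.py, which also handles structs, enums, callbacks, and much more): it parses one C prototype with pycparser and prints a placeholder for the wrapper it would emit.

from pycparser import c_parser, c_ast

# pycparser needs unknown typedefs declared before it can parse the prototype.
src = "typedef int lv_obj_t;\nvoid lv_obj_set_width(lv_obj_t *obj, int width);"
ast = c_parser.CParser().parse(src)

for node in ast.ext:
    if isinstance(node, c_ast.Decl) and isinstance(node.type, c_ast.FuncDecl):
        params = [p.name for p in node.type.args.params]
        print("would emit MicroPython wrapper for",
              node.name + "(" + ", ".join(params) + ")")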

I don’t imagine the API is going to change the way it did between version 8 and version 9. Going through that once is enough for most people. While the generated code is nice to have, it almost makes things harder to maintain because of the cryptic script that generates it; you would need some knowledge of pycparser to understand why I say that. The other thing is that the code generation has to do things in a more universal way, which causes more overhead than really needs to exist. Plus you don’t get to take advantage of dual-core MCUs. MicroPython is written to use only a single core, and when running LVGL it currently ends up bound to the same core of the processor, which slows things down quite a bit. What @gneverov has done allows LVGL to run on the second core, making things faster.
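
To make the dual-core point concrete, here is a minimal sketch of the general idea only (this is not how @gneverov’s port is implemented, and lvgl.task_handler() is an assumed name for “run LVGL’s timers/redraws once”): push the LVGL loop onto a second thread, which on a port like rp2 runs on the other core, leaving the first core free for application code.

import _thread
import time
import lvgl

# Real code would need this lock around every LVGL call made from other threads.
lvgl_lock = _thread.allocate_lock()

def gui_loop():
    while True:
        lvgl_lock.acquire()
        try:
            lvgl.task_handler()  # assumed API: equivalent of lv_timer_handler() in C
        finally:
            lvgl_lock.release()
        time.sleep_ms(5)

_thread.start_new_thread(gui_loop, ())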

While this is not done currently, something that should be considered is separating things into their own modules. This would reduce the memory footprint so that only the parts that are actually being used get loaded. I am talking about all of the boilerplate code needed for the bridge between LVGL and MicroPython: it’s something like 80K lines of code that currently loads in one shot, including things a user may never use at all. Separating things into different modules would reduce the import time and also reduce RAM use. That said, more memory would be consumed if every single module were imported, but the chances of that happening are extremely rare.
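
A rough way to see the RAM side of this on a device today is to check the free heap before and after the monolithic import (gc.mem_free() is MicroPython-specific):

import gc

gc.collect()
before = gc.mem_free()

import lvgl  # with a monolithic binding, all the glue is pulled in here

gc.collect()
after = gc.mem_free()
print("importing lvgl consumed", before - after, "bytes of heap")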


There is always a cost to doing something the “easy” way and that cost comes as increased resource use and reduced performance.