Starting a discussion about code generation for the micropython bindings

Hi

I would like to start a discussion about the way the code for the micropython bindings is generated.

The reason I would like to do that is that I’m in the middle of generating bindings for cpython, and I would like the interfaces to be 100% the same if possible (and I think it should be possible).

I noticed that I have replicated a lot of work that is also done during the code generation for the micropython bindings.

One of the biggest challenges is parsing the C source code and extracting the information needed to generate the bindings.
I looked at the tool used to generate the micropython bindings, and unfortunately the extraction/information-gathering part is partly done during the code generation itself.
So it is currently not easy to just reuse the extraction and branch off into a different code generation system for cpython.

What I would like to see is the code generation split into two different tools:

  1. Parse the lvgl sources, gather all the information about objects/structs/global functions, and store it in an intermediate file
  2. Use that “database” generated by the first tool to generate the binding code

We could also use this “database” to generate bindings for other languages (C++, Rust, …) and ensure that way that the API is as similar as possible across all languages.
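To make the idea concrete, here is a rough sketch of what one entry in such an intermediate “database” could look like. All field names here are invented for illustration; the real schema would be whatever the parser tool defines.

```python
import json

# Hypothetical intermediate database produced by tool 1 (the parser).
# Every language-specific generator (tool 2) would only read this file.
database = {
    "objects": {
        "lv_slider": {
            "create": "lv_slider_create",
            "inherits": "lv_bar",
            "methods": [
                {
                    "c_name": "lv_slider_set_value",
                    "args": [
                        {"name": "obj", "type": "lv_obj_t *"},
                        {"name": "value", "type": "int32_t"},
                        {"name": "anim", "type": "lv_anim_enable_t"},
                    ],
                    "returns": "void",
                },
            ],
        }
    },
    "enums": {
        "lv_anim_enable_t": ["LV_ANIM_OFF", "LV_ANIM_ON"],
    },
}

# Tool 1 dumps the database to disk ...
with open("lvgl_api.json", "w") as f:
    json.dump(database, f, indent=2)

# ... and tool 2 (per target language) reads it back.
with open("lvgl_api.json") as f:
    reloaded = json.load(f)

assert reloaded["objects"]["lv_slider"]["create"] == "lv_slider_create"
```

Because the file is plain JSON, a C++ or Rust generator could consume exactly the same data without touching the parser.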

Another topic would be to enhance the interface, for example by adding default values for parameters in languages that support them (e.g. the user_data parameter for event bindings).
Unfortunately I have not found a way to annotate that in the lvgl source files and have a parser extract the information directly from the source. But maybe somebody has an idea how to achieve that.
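As a sketch of what such a default could buy callers, here is a toy stand-in for a generated CPython binding. The class and method names mirror the LVGL style, but they are hypothetical, not the real binding:

```python
# Toy stand-in for a generated widget class, to show how a default
# value for user_data could look in Python. Nothing here is the real
# LVGL API; it only models the calling convention.
class Button:
    def __init__(self):
        self._callbacks = []

    def add_event_cb(self, callback, event_code, user_data=None):
        # user_data defaults to None, so Python callers may omit it
        # even though the underlying C function always takes it.
        self._callbacks.append((callback, event_code, user_data))

    def fire(self, event_code):
        for cb, code, user_data in self._callbacks:
            if code == event_code:
                cb(event_code, user_data)

btn = Button()
seen = []
# The caller does not have to pass user_data at all:
btn.add_event_cb(lambda code, data: seen.append((code, data)), "CLICKED")
btn.fire("CLICKED")
assert seen == [("CLICKED", None)]
```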

Also, I think we should discuss what an object-oriented API should look like.
For example, the micropython bindings expose the color creation functions as global functions:

lv.color_hex (0x123)

In my binding generation these functions would be class functions of the color class:

lv.color.hex (0x123)

I’m not sure which is the better approach, but it definitely needs discussion.
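Side by side, the two layouts could look like this toy sketch (the Color class here is a stand-in, not the real binding):

```python
# Two possible namespace layouts for the same functionality.
class Color:
    def __init__(self, full):
        self.full = full

    @classmethod
    def hex(cls, value):
        # Class-method style: lv.color.hex(0x123)
        return cls(value & 0xFFFFFF)

def color_hex(value):
    # Global-function style: lv.color_hex(0x123)
    return Color(value & 0xFFFFFF)

# Both produce the same result; the question is only where the
# function lives in the namespace.
assert Color.hex(0x123).full == color_hex(0x123).full == 0x123
```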

I’m also convinced that once we split the parsing and the code generation, it will be easier to find people willing to assist with the code generation part. Currently everything is so deeply interconnected that people get scared to touch the code.

So, what are your thoughts on that?
I’m looking forward to your feedback…

Thanks a lot in advance,
martin

There is a cpython binding already, and you do not want to model it after the micropython binding either. The reason is that you would have to write documentation for the entire thing, because its API is different from what LVGL has.

I have seen those bindings (you already mentioned them once). But they are a 1:1 wrapper of the C functions.
I would really like to have cpython bindings with user-defined classes (in C) that use the same API as the micropython bindings.

The reason for that is that I’m working on a project where we have two displays (a 3.5″ one driven by an ESP32s3 and a 7″ one driven by an iMX8Nano).

Also, I think there would be benefits if the bindings used the same API, and I see no reason why that should not be possible (my bindings already have 95% the same API).

If there is no interest from the LVGL team, that’s fine with me; then I will develop my bindings just for us…

Martin

I have attempted to do that at one point in time as well. I was never able to get an answer out of the person who wrote the binding script for micropython as to how it decides the API layout. I don’t think the guy who wrote it actually knows, to be honest with ya. As soon as you think you have figured out how something is done, you end up finding some portion of the code that doesn’t conform to it. There are still problems with how it generates the API even to this day. It’s very inconsistent in how it works, and small changes in LVGL break it.

Go for it if you are able to figure it out. There is no one who knows how the thing actually works or how it goes about its decision making.

I just realized that I missed the important part (matching all the functions/enums/… to the classes) because that is part of the generation process.
I will move that part out of the generation process into the parsing process later today.

So, I have moved the code to assign functions and methods from the code generation script to the parser script and updated the repository:

I also committed an output file of the script (cache.json) to make it easier to check the result.

OK so you did what I did here…

I am waiting for the PR to be merged into LVGL.

It also includes all of the macros as well as all of the documentation in LVGL.

How did you tackle the forward declarations of structures? How did you tackle the enumerations not aligning with the types that are actually used when setting an enum on a structure field or passing it as a parameter to a function? The enums are not the same type as what gets passed to a function or set on a field most of the time.

What you might be better off doing is modifying the gen_mpy script in the micropython binding so the JSON output from that script actually contains a correct mapping of LVGL object names to micropython object names. You can remove all of the code generation portions of the script and have it just dump the mappings, so that way you know what the structure is.

I am all for making a cpython binding for LVGL. I can tell you the issues I ran into, and maybe you might have ideas for solutions to the problems, if that is something you would be interested in doing.

I had already started to write a script that would allow using the micropython API in the binding I have written…

The develop branch of the binding I wrote uses ctypes, where the master branch uses cffi directly. ctypes is also built on libffi, but it exposes it in a more pythonic manner.

Oh, I did want to mention: this is what I know about how the API of the micropython binding is determined.

All widgets have a create function: lv_slider_create, lv_label_create… etc. When the binding script runs, it locates those functions and removes the “lv_” prefix and the “_create” suffix, and that becomes the name of the class it makes. These functions also MUST have a return type that matches the first parameter type, and the function must only have a single parameter.

For functions to be added to classes as methods, the function name must begin with lv_{class_name}_, where class_name is the name of the Python class that has been created. The type of the first parameter must match the return type of the create function that was used to identify the class.

Enumerations are matched to classes by using the enum item names, not the type name of the enum. The LV_ prefix is stripped off and the name is then lowercased. If the enum item name begins with the class name, then it gets added to the class. I have never managed to figure out how the binding script determines whether or not to use an underscore in the class name of the enumeration.

In the case of lv_color_hex: since the first parameter is a uint32_t, it is not a part of the “color” class, which is lv_color_t.
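The three rules described above can be sketched as code. This is a toy model of the heuristics with invented data structures, not the real gen_mpy.py logic, but it shows why lv_color_hex falls through the cracks:

```python
# Toy model of the micropython binding's matching heuristics.
functions = [
    # (name, return_type, [param_types])
    ("lv_slider_create", "lv_obj_t *", ["lv_obj_t *"]),
    ("lv_slider_set_value", "void", ["lv_obj_t *", "int32_t", "int"]),
    ("lv_color_hex", "lv_color_t", ["uint32_t"]),
]
enum_items = ["LV_SLIDER_MODE_NORMAL", "LV_SLIDER_MODE_RANGE"]

# Rule 1: a widget class comes from lv_<name>_create with exactly one
# parameter whose type equals the return type.
classes = {}
for name, ret, params in functions:
    if (name.startswith("lv_") and name.endswith("_create")
            and len(params) == 1 and params[0] == ret):
        classes[name[len("lv_"):-len("_create")]] = ret

# Rule 2: lv_<class>_* functions whose first parameter type matches the
# class's create return type become methods of that class.
methods = {cls: [] for cls in classes}
for name, ret, params in functions:
    for cls, obj_type in classes.items():
        prefix = f"lv_{cls}_"
        if (name.startswith(prefix) and not name.endswith("_create")
                and params and params[0] == obj_type):
            methods[cls].append(name[len(prefix):])

# Rule 3: enum items are matched by item-name prefix, not by enum type.
enums = {cls: [] for cls in classes}
for item in enum_items:
    stripped = item[len("LV_"):].lower()
    for cls in classes:
        if stripped.startswith(cls + "_"):
            enums[cls].append(stripped[len(cls) + 1:].upper())

assert list(classes) == ["slider"]
assert methods["slider"] == ["set_value"]
assert enums["slider"] == ["MODE_NORMAL", "MODE_RANGE"]
# lv_color_hex matched no class: its first parameter is uint32_t,
# not lv_color_t, so it stays a global function.
```

Under these rules, anything that takes a plain scalar as its first argument can never become a method, no matter how the function is named.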