A randomized JSON-based UI generator to synthesize LVGL data for ML training

Hello,

To create multiple user interfaces for my machine learning project, either randomly or from design files, I have created a UI generator.
It synthesizes user interfaces that I use to train a machine learning model on classifying LVGL widget types and localizing them via bounding boxes.

It is still very much a work in progress, but I thought I might share it here for people to see. In particular, the UiLoader class might come in handy if you want to quickly draft up a UI design using JSON.

This is by no means perfect or even particularly good; it is just something I built to synthesize somewhat realistic designs/layouts in a simplified manner.
It is also solely focused on the visual aspect, as nothing functional is needed to create a screenshot.

Here is the generator project:
(It only runs via the Linux port of lv_micropython, and I do not intend to support other ports.)

The random mode using the RandomUI class is still a work in progress. It can only place widgets randomly using absolute coordinates and doesn’t randomize style properties (e.g. colors, sizes, etc.).
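To illustrate what the random mode boils down to, here is a minimal, purely illustrative sketch. It is not the actual RandomUI code and leaves out all LVGL calls; the widget names and screen size are assumptions, and it only shows the "absolute placement plus bounding-box label" idea:

import random

# Hypothetical sketch of random absolute placement; no actual LVGL widgets
# are created here, only (type, bounding box) entries are produced.
WIDGET_TYPES = ["arc", "bar", "button", "checkbox", "label", "slider", "switch"]
SCREEN_W, SCREEN_H = 480, 320

def random_layout(count=8, seed=None):
    rng = random.Random(seed)
    layout = []
    for _ in range(count):
        w = rng.randint(40, 160)
        h = rng.randint(20, 80)
        x = rng.randint(0, SCREEN_W - w)   # absolute position, no layout manager
        y = rng.randint(0, SCREEN_H - h)
        layout.append({"type": rng.choice(WIDGET_TYPES),
                       "bbox": (x, y, w, h)})  # x, y, width, height in pixels
    return layout

for entry in random_layout(seed=42):
    print(entry)

The actual generator of course creates real LVGL widgets at those positions and renders a screenshot; this only mimics the label side of it.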

I might update this post at some point if it has gotten better.

Here are a few screenshots of what it created:
(Sources for them are in the designs folder, but I was too lazy to label which image belongs to which design.)


Also as a note:
Yes, I did create most of these JSON files by instructing ChatGPT. That AI might be able to use what I’ve built here, but it certainly isn’t a great designer.
If I had written them myself, they’d probably look more refined, but I didn’t have that much time to play around, as I needed to focus on the ML aspect of my project.

The following widget types are currently implemented for the UiLoader (JSON):

  • arc
  • bar
  • button
  • buttonmatrix
  • calendar
  • checkbox
  • dropdown
  • label
  • led
  • roller
  • scale
  • slider
  • spinbox
  • switch
  • table
  • textarea (not quite refined yet)

I can’t really provide instructions on how to use it right now, as my time is currently quite limited, but I might in the future.

In general, styles are applied via the set functions, so you’ll need to use the correct property name (i.e. spelled the same way as it appears in the function name).
Also, you can currently only apply one style per widget, but I might make that a list in the future; it seems like an easy fix.
Not all properties are properly supported yet, and there’s a bit of special handling involved (for example, parsing the hex color from a string, which only happens if the property name contains the word 'color').
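To make that concrete, here is a rough sketch of the lookup idea. It is not the actual UiLoader code; the lv.style_t / add_style / color_hex names are the usual LVGL MicroPython bindings, and everything else is made up for illustration:

import lvgl as lv

def apply_style_props(widget, props):
    # Rough illustrative sketch, not the real UiLoader implementation.
    style = lv.style_t()
    style.init()
    for name, value in props.items():
        # Special handling: hex strings are only parsed when the property
        # name contains the word 'color'.
        if "color" in name and isinstance(value, str):
            value = lv.color_hex(int(value.lstrip("#"), 16))
        setter = getattr(style, "set_" + name, None)  # e.g. set_bg_color, set_radius
        if setter is None:
            print("unsupported style property:", name)
            continue
        setter(value)
    widget.add_style(style, 0)  # 0 = default part/state selector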

For the options of specific widgets, I’d recommend checking out the corresponding "create_" function, which reads the JSON element of that widget.
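For a feel of what a design file could look like, here is a purely hypothetical snippet written from the rules above. Every key name is an assumption made for illustration; the real schema lives in the designs folder and in the create_ functions:

import json

# Hypothetical design tree; style property names mirror LVGL setter names,
# each widget carries at most one "style" object, and 'color' values are hex strings.
design = {
    "root": {
        "children": [
            {
                "type": "label",
                "text": "Hello LVGL",
                "width": 120,   # width/height directly on the widget object
                "height": 30,
                "style": {
                    "text_color": "#FF0000",  # parsed because the name contains 'color'
                    "radius": 4,
                },
            },
            {"type": "slider", "width": 200, "height": 20},
        ],
    },
}

with open("my_design.json", "w") as f:
    json.dump(design, f, indent=2)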

That’s all I can say for now; it is quite a raw project, to say the least, so do not expect much from it.

If you are curious about my ML project - here is the root of my evil:

The project’s README has gotten an extensive facelift; check out the whole project here:


Wow, how could I miss this topic!?

Do you have a video to see it in action?


I could make one, but for the most part you can take a look at the datasets it produced.

It isn’t that exciting yet, since the product of the thesis is pretty raw.

I’ve uploaded two main datasets that it created:

DESIGN dataset: (meaning the output was based on JSON files of the design mode)

RANDOM dataset: (meaning the output was based on the random mode)

You can also check out the diagrams I’ve made in the doc:

They’re also incorporated in the README of the actual main project of the thesis:

The design mode of the generator is probably best illustrated in this widget_showcase.json:

Which produces this output:

But yeah… it still has some issues, which are highlighted in the README (mostly due to memory allocation and possibly race conditions).


I should mention that widget_showcase.json is outdated in terms of the schema. I have since made width and height properties of the widget JSON object itself, since ChatGPT really had trouble consistently applying width and height.
But the generator does not really care that much, since it looks up the setters of any style properties it finds anyway, so the file still works fine.
So you’ll find that the schema is actually a lot stricter than what the code requires.

It is hard to find a balance between what works for humans and what I needed to modify so that ChatGPT would adhere to the rules and not ruin my bank account with unusable GPT responses.

I also haven’t gotten around to removing the JSON examples from previous generator versions, which no longer work, from the repository.

Thanks, I see it now. That’s amazing! :rocket:

I’ve shared it internally because this cool AI support could be really interesting for us in the future.


Well thank you, I feel honored ^^

I hope to be able to improve it in the future; it’s something I am hoping to expand on at my workplace, because we’re actually dealing with rather critical UIs (nurse call systems) and reliability is key.

My paper is finally done.

It can be viewed here:

Gawd, I am glad I survived the process ^^


Congratulations! :tada:

I’ve cloned lvgl-ui-detector but got this error on poetry install:

$ poetry install
Creating virtualenv ui-detector-b_Qs-a7s-py3.10 in /home/kisvegabor/.cache/pypoetry/virtualenvs
Installing dependencies from lock file

pyproject.toml changed significantly since poetry.lock was last generated. Run `poetry lock [--no-update]` to fix the lock file.

Huh, that is odd, but you should easily be able to fix it by running the suggested command poetry lock, which will overwrite the lock file contents with what the project expects.
After that, the poetry install command should work.

Usually this happens when I somehow miss properly updating the lock file with what I currently have in the virtual environment.

I can make a fix-up commit later on with the proper versions in the lock file.


EDIT:

Ah, I know why it happened. The poetry file in the MAIN repository, lvgl-ui-detector, is actually deprecated, and I haven’t gotten around to removing it yet. Originally all the ML code was in the main project, but it was causing issues with the whole poetry structure, so I moved it into a submodule.

There are basically two submodules:

  • lvgl_ui_generator_v2, which contains a poetry file to provide some simple build mechanisms via invoke
  • ui-detector, which contains the main code of the ML project for generating, training, etc.

Both of these submodules have their own poetry file and their own environment, and need to be initialized in their respective folders.

TL;DR

cd ui-detector
poetry install

or for the generator project:

cd lvgl_ui_generator_v2
poetry install

I would also recommend enabling local creation of the .venv so you can easily find the folder if you need to delete it:
poetry config virtualenvs.in-project true
(otherwise it’s in some cache folder of your home directory, as seen in your output, and I personally find that very distracting)

I have now also added API documentation for the two main projects:

At some point I’ll give it a professional look, but for now this will suffice.

Thank you, but still not working :frowning:

lvgl-ui-detector/lvgl_ui_generator_v2 $ (main) poetry install
The currently activated Python version 3.10.12 is not supported by the project (>=3.11,<3.12).
Trying to find and use a compatible version. 

Poetry was unable to find a compatible version. If you have one, you can explicitly use it via the "env use" command.

Well… that’s unfortunate, but I admit I always forget to set the required Python version to a range instead of limiting it to the one I currently use. You have 3.10 installed, but I mainly work with 3.11 nowadays.

If you open pyproject.toml, you can just change the Python version from ^3.11 to >=3.9. It should work.

Thank you!
Now I’m having an issue with the lock file, and running poetry lock asks for keys that I don’t have :frowning:

Anyway, I was just curious to see it in practice. I won’t bother you with making it work on my machine :slight_smile:

Weird, it’s beyond me why that happens. There’s no configuration in the project that would include any kind of keys or private repositories.

To be honest, the involvement of poetry is meant for ease of use in the end; I’m a bit shocked at how non-descriptive these errors can get on someone else’s PC.

If you do not want to bother with all of this, you can always just manually look at the included pyproject.toml of either of the two submodules:

And then just install the dependencies yourself in a Python virtual environment.

I can 100% guarantee that the code works with the versions written in those project files. I wanted to include a container in both projects to ease execution and development, but it became too much of a hassle to deal with while writing the paper.
I’ll note that for the future though.

The project is gonna collect some dust for a bit now, since we’re still far from using it for testing of our new product line. Once testing (i.e. me) gets more involved with that, I will make further updates and changes to this repository.

I’d assume this to happen sometime at the end of this year, but there’s no guarantee.


Thanks! In the future this kind of tool might be interesting for us too, so I’ll try it again when we get there.


Yea for sure, but I assure you, you can get started right away, because it ALL boils down to the dataset.

So the more LVGL user interface data one can gather from the community, create manually, or generate in simulated environments, the better.
In the paper, I used from 150 up to 500 screenshots, and performance was pretty good on the validation set.
It is just that the model’s real-world performance (i.e. on real user interfaces with custom designs) was very bad, because the training data did not contain enough variety for proper detection of widgets in realistic environments where the visuals vary greatly.

All it needs are full screenshots of the windows or window sections, either labeled or not.
Preferably all of them are the same size, or at least square, but it’s always possible to pre-process the images for training.
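For reference, pre-processing to a uniform square can be as simple as padding and resizing, for example with Pillow. This is not part of the generator or detector code, just a generic sketch; any bounding-box labels would of course need to be shifted and scaled by the same padding and resize factors:

from PIL import Image

def letterbox_square(path, out_path, size=640, fill=(0, 0, 0)):
    # Pad the screenshot to a square canvas, then resize it to size x size.
    img = Image.open(path).convert("RGB")
    side = max(img.size)
    canvas = Image.new("RGB", (side, side), fill)
    # Center the original image on the square canvas.
    canvas.paste(img, ((side - img.width) // 2, (side - img.height) // 2))
    canvas.resize((size, size)).save(out_path)

letterbox_square("screenshot.png", "screenshot_640.png")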
