To create multiple user interfaces for my machine learning project, either randomly or from design files, I have built a UI generator.
The generator is used in my project to synthesize user interfaces for training a machine learning model on classifying LVGL widget types and localizing them via bounding boxes.
It is still very much a work in progress, but I thought I'd share it here for people to see. The UiLoader class in particular might come in handy if you want to quickly draft a UI design using JSON.
This is by no means perfect, or even good at all; it is just something I built to synthesize somewhat realistic designs/layouts in a simplified manner.
It is also focused solely on the visual aspect, since nothing functional is needed for me to create a screenshot.
Here is the generator project: (it only runs via the Linux port of lv_micropython, and I do not intend to support other ports)
The random mode using the RandomUI class is still a work in progress. It can only place widgets randomly using absolute coordinates, and it doesn't randomize style properties (i.e. colors, sizes, etc.).
I might update this post at some point if it has gotten better.
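To show what I mean by absolute placement, here is a minimal illustration of the idea (my own sketch, not the RandomUI code); set_pos, get_width and get_height are the standard LVGL object API:

```python
import random

def place_randomly(widget, hor_res: int, ver_res: int):
    # Draw an absolute position that keeps the widget inside the screen,
    # with no layout or alignment involved.
    x = random.randint(0, max(0, hor_res - widget.get_width()))
    y = random.randint(0, max(0, ver_res - widget.get_height()))
    widget.set_pos(x, y)
```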
Here are a few screenshots of what it created (the source designs are in the designs folder, but I was too lazy to label which image belongs to which design):
Also as a note:
Yes, I did create most of these JSON files by instructing ChatGPT. That AI might be able to use what I've built here, but it certainly isn't a great designer.
If I had written them myself, they'd probably look more refined, but I didn't have that much time to play around, as I need to focus on the ML aspect of my project.
The following widget types are currently implemented for the UiLoader (JSON); a rough sketch of a design file follows the list:
arc
bar
button
buttonmatrix
calendar
checkbox
dropdown
label
led
roller
scale
slider
spinbox
switch
table
textarea (not quite refined yet)
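To give a feel for the format, here is a deliberately hypothetical sketch of a design (shown as a Python dict mirroring the JSON). Every key name below (the root key, the widget fields, the style property names) is illustrative only; the actual schema is defined by the files in the designs folder and the create_* functions:

```python
# Hypothetical design sketch -- key names are illustrative, not the real schema.
design = {
    "widgets": [
        {
            "type": "button",           # one of the widget types listed above
            "width": 120,               # width/height sit on the widget object itself
            "height": 40,
            "style": {                  # one style per widget (see the note below)
                "bg_color": "#1E90FF",  # parsed as hex because the name contains "color"
                "radius": 8,
            },
        },
        {
            "type": "label",
            "width": 200,
            "height": 24,
            "style": {"text_color": "#FFFFFF"},
        },
    ],
}
```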
I can't really provide instructions on how to use it, as my time is currently quite limited, but I might in the future.
In general, styles are applied via the set functions, and you'll need to use the correct property name (i.e. spelled the same way as it appears in the function name).
Also, you can currently only apply one style per widget, but I might turn that into a list in the future; it seems like an easy fix.
Not all properties are properly supported yet, and there's a bit of special handling involved (for example, parsing the hex color from a string, which only works if the property name contains the word "color").
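To illustrate the mechanism (not the generator's actual code), a style applier along these lines would match the behavior described above; the function is my own sketch, and the exact binding names and the lv.PART.MAIN selector depend on your LVGL version:

```python
import lvgl as lv  # lv_micropython binding

def apply_style(widget, style: dict):
    # Look up the widget's matching set_style_* function for each property.
    for prop, value in style.items():
        setter = getattr(widget, "set_style_" + prop, None)
        if setter is None:
            continue  # this widget has no setter for the property
        # Special handling: hex strings are only converted to colors
        # when the property name contains the word "color".
        if "color" in prop and isinstance(value, str):
            value = lv.color_hex(int(value.lstrip("#"), 16))
        setter(value, lv.PART.MAIN)
```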
For the options of specific widgets, I'd recommend checking out the corresponding "create_" function, which reads the element for that widget type.
That's all I can say for now; it is quite a raw project, to say the least, so do not expect much from it.
If you are curious about my ML project, here is the root of my evil:
But yeah… it still has some issues, which are highlighted in the README (mostly due to memory allocation and possibly race conditions).
I should mention that widget_showcase.json is outdated in terms of the schema. I have made width and height properties of the widget JSON object itself, since ChatGPT really had trouble applying width and height consistently.
But the generator does not care much, since it looks up setters for whatever style properties it finds anyway, so the file still works fine.
So you'll find that the schema is actually a lot stricter than what the code requires.
It is hard to find a balance between what works for humans and what I needed to modify so that ChatGPT would adhere to the rules and not ruin my bank account with unusable GPT responses.
I also haven't gotten around to deleting JSON examples from previous generator versions, which no longer work.
I hope to improve it in the future; it's something I am hoping to expand on at my workplace, because we're actually dealing with rather critical UIs (nurse call systems) where reliability is key.
I've cloned lvgl-ui-detector but got this error on poetry install:
$ poetry install
Creating virtualenv ui-detector-b_Qs-a7s-py3.10 in /home/kisvegabor/.cache/pypoetry/virtualenvs
Installing dependencies from lock file
pyproject.toml changed significantly since poetry.lock was last generated. Run `poetry lock [--no-update]` to fix the lock file.
Huh, that is odd, but you should be able to fix it easily by running the suggested command, poetry lock, which will overwrite the lock file contents with what the project expects.
After that, the poetry install command should work.
Usually this happens when I somehow miss updating the lock file properly with what I currently have in the virtual environment.
I can push a fix commit later with the proper versions in the lock file.
EDIT:
Ah, I know why it happened. The poetry file in the MAIN repository, lvgl-ui-detector, is actually deprecated, and I haven't gotten around to removing it yet. Originally all the ML code was in the main project, but it was causing issues with the whole poetry structure, so I moved it into a submodule.
There are basically two submodules: lvgl_ui_generator_v2, which contains a poetry file to provide some simple build mechanisms via invoke,
and ui-detector, which contains the main code for the ML project (generating, training, etc.).
Both submodules have their own poetry file and their own environment, and need to be initialized in their respective folders.
TL;DR
cd ui-detector
poetry install
or for the generator project:
cd lvgl_ui_generator_v2
poetry install
I would also recommend enabling local creation of the .venv so the folder is easy to find and delete: poetry config virtualenvs.in-project true (otherwise it ends up in some cache folder of your home directory, as seen in your output, which I personally find very distracting).
lvgl-ui-detector/lvgl_ui_generator_v2 $ (main) poetry install
The currently activated Python version 3.10.12 is not supported by the project (>=3.11,<3.12).
Trying to find and use a compatible version.
Poetry was unable to find a compatible version. If you have one, you can explicitly use it via the "env use" command.
Well… that's unfortunate, but I admit I always forget to set the required Python version to a range instead of limiting it to the one I currently use. You have 3.10 installed, but I mainly work with 3.11 nowadays.
If you open the pyproject.toml, you can just change the Python version ^3.11 to >=3.9. It should work.
Weird; why that happens is beyond me. There's no configuration in the project that would involve any kind of keys or private repositories.
To be honest, poetry is involved purely for ease of use in the end; I'm a bit shocked at how non-descriptive its errors can get on someone else's PC.
If you do not want to bother with all of this, you can always look at the pyproject.toml included in either of the two submodules:
And then just install the dependencies yourself in a Python virtual environment.
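For reference, the manual route would look roughly like this (shown for the generator submodule, which wants Python 3.11 per the error above; fill in the dependency list from its pyproject.toml):
cd lvgl_ui_generator_v2
python3.11 -m venv .venv
source .venv/bin/activate
pip install <dependencies from pyproject.toml>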
I can 100% guarantee that the code works with the versions written in those project files. I wanted to include a container in both projects to ease execution and development, but it became too much of a hassle to deal with during paper development. I'll note that down for the future, though.
The project is going to collect some dust for a bit now, since we're still far from using it for testing of our new product line. Once testing (me) gets more involved with that, I will make further updates and changes to this repository.
I'd expect that to happen sometime toward the end of this year, but there's no guarantee.
Yeah, for sure. But I assure you, you can get started right away, because it ALL boils down to the dataset.
So, the more LVGL user-interface data one can gather from the community, create manually, or generate in simulated environments, the better.
In the paper, I used from 150 up to 500 screenshots, and performance on the validation set was pretty good.
It is just that the model's real-world performance (i.e. on real user interfaces with custom designs) was very bad, because the training data did not contain enough information for proper detection of widgets in realistic environments where the visualization varies greatly.
All it needs is full screenshots of windows or window sections, either labeled or not.
Preferably all of them are the same size, or at least square, but it is always possible to pre-process the images for training.
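If it helps, here is one way such pre-processing could look. This is my own minimal sketch using Pillow (not part of the project), and the 640-pixel target size is an arbitrary choice:

```python
from PIL import Image

def to_square(path: str, size: int = 640) -> Image.Image:
    """Pad a screenshot to a square canvas, then resize it to a fixed size."""
    img = Image.open(path).convert("RGB")
    side = max(img.size)
    # Center the screenshot on a square black canvas to keep its aspect ratio.
    canvas = Image.new("RGB", (side, side), (0, 0, 0))
    canvas.paste(img, ((side - img.width) // 2, (side - img.height) // 2))
    return canvas.resize((size, size), Image.Resampling.BILINEAR)
```

Note that if the screenshots come with bounding-box labels, the boxes need the same offset and scaling applied.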