Build your UI with AI

Recently I got an invitation to OpenAI Codex, an AI tool that learns from open-source code on GitHub and is capable of generating new code… I can't stop testing it!
One of the things I tried was generating UIs directly from a description, and it worked! It works for Python wxWidgets, PyQt, and even for LVGL. Not always, of course, but simple examples work. I don't know why, but it generates the v7 API…
So given the prompt:

# This example shows a window with 5 elements:
# - A dropdown list, named "com_port", with different COM ports.
# - A dropdown list, named "test", with different tests.
# - A button, named "exec_one" (with same text), to execute only one test.
# - A button, named "exec_all"(with same text), to execute all tests.
# - A label, named "result"(with same text) with the test/s result.
import lvgl as lv
scr = lv.obj()

Generates:
(in the generated code, I added the button labels myself, because Codex tried to set them with button.set_text, which doesn't exist for LVGL buttons)
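
Something in the spirit of what it generated, with my label fix folded in (this is a reconstruction from memory, not the exact Codex output, and it assumes a display driver is already registered, e.g. the SDL simulator):

```python
import lvgl as lv

lv.init()
scr = lv.obj()

# Dropdown list with COM ports
com_port = lv.dropdown(scr)
com_port.set_options("COM1\nCOM2\nCOM3\nCOM4")
com_port.set_pos(10, 10)

# Dropdown list with tests
test = lv.dropdown(scr)
test.set_options("Test 1\nTest 2\nTest 3")
test.set_pos(10, 60)

# Buttons: lv.btn has no set_text(), so the text goes on a child label
exec_one = lv.btn(scr)
exec_one.set_pos(10, 110)
exec_one_label = lv.label(exec_one)
exec_one_label.set_text("exec_one")

exec_all = lv.btn(scr)
exec_all.set_pos(10, 160)
exec_all_label = lv.label(exec_all)
exec_all_label.set_text("exec_all")

# Label for the test result
result = lv.label(scr)
result.set_text("result")
result.set_pos(10, 210)

lv.scr_load(scr)
```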

If I change the import to “import wx”, it generates a complete, running wxWidgets app:
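
The attached file is longer, but a minimal wxPython sketch of the same five-element window gives the flavor (illustrative, not the attached code verbatim):

```python
import wx

class TestRunnerFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Test runner")
        panel = wx.Panel(self)
        sizer = wx.BoxSizer(wx.VERTICAL)

        self.com_port = wx.Choice(panel, choices=["COM1", "COM2", "COM3"])
        self.test = wx.Choice(panel, choices=["Test 1", "Test 2", "Test 3"])
        self.exec_one = wx.Button(panel, label="exec_one")
        self.exec_all = wx.Button(panel, label="exec_all")
        self.result = wx.StaticText(panel, label="result")

        for widget in (self.com_port, self.test,
                       self.exec_one, self.exec_all, self.result):
            sizer.Add(widget, 0, wx.ALL | wx.EXPAND, 5)
        panel.SetSizer(sizer)
        sizer.Fit(self)

        self.exec_one.Bind(wx.EVT_BUTTON, self.on_exec_one)

    def on_exec_one(self, event):
        # Python 3 print call; the raw Codex output used Python 2 statements
        print("Running:", self.test.GetStringSelection())
        self.result.SetLabel("done")

if __name__ == "__main__":
    app = wx.App()
    TestRunnerFrame().Show()
    app.MainLoop()
```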

Importing PyQt5 also works.
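
Again as a sketch of the shape of that output (illustrative, reusing the widget names from my prompt):

```python
import sys
from PyQt5.QtWidgets import (QApplication, QComboBox, QLabel,
                             QPushButton, QVBoxLayout, QWidget)

app = QApplication(sys.argv)
win = QWidget()
win.setWindowTitle("Test runner")
layout = QVBoxLayout(win)

com_port = QComboBox()
com_port.addItems(["COM1", "COM2", "COM3"])
test = QComboBox()
test.addItems(["Test 1", "Test 2", "Test 3"])
exec_one = QPushButton("exec_one")
exec_all = QPushButton("exec_all")
result = QLabel("result")

for widget in (com_port, test, exec_one, exec_all, result):
    layout.addWidget(widget)

# Stub handler: just report which test would have run
exec_one.clicked.connect(
    lambda: result.setText("ran: " + test.currentText()))

win.show()
sys.exit(app.exec_())
```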

The LVGL code can be read in the screenshot, but the wx and Qt versions are a little longer. I think people might be curious to read them too, so I attached them here. These examples weren't modified by me (except that in the wx example it generated Python 2 print statements, so I added parentheses).
UI Codex.zip (2.5 KB)


Interesting! I personally suspect the reason it uses the v7 API is that it is learning from code examples on GitHub, and there are probably more of those for v7 than v8.
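
For anyone wondering what "the v7 API" means in practice, here are two of the calls that changed between v7 and v8 (from memory, so double-check against the docs):

```python
import lvgl as lv

lv.init()  # assumes a display driver is registered

def event_cb(*args):
    pass

# v7 style, which Codex tends to generate:
btn = lv.btn(lv.scr_act())
btn.align(None, lv.ALIGN.CENTER, 0, 0)  # v7: align() takes a base object; None = parent
btn.set_event_cb(event_cb)              # v7: a single callback per object

# The v8 equivalents of those two calls would be:
# btn.align(lv.ALIGN.CENTER, 0, 0)                    # v8: always relative to the parent
# btn.add_event_cb(event_cb, lv.EVENT.CLICKED, None)  # v8: multiple callbacks with filters
```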

Yes, that's probably it. The wxWidgets examples it learned from may also be older, which would explain the Python 2 print statements.
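
For completeness, that fix is just the Python 2 → 3 print change (the message text here is only an example):

```python
# Codex output (Python 2 statement syntax):
#     print "All tests passed"
# After adding parentheses it is valid Python 3:
print("All tests passed")
```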

OMG! That's incredible. I just discovered OpenAI Codex last week and have been dreaming about how it could work with LVGL. I never thought it would be so simple.

Thank you very much for sharing! I've also applied for the beta test.

Glad to hear that OpenAI Codex works with LVGL.

It probably learns not only from GitHub but also from pages indexed by search engines like Google, and there the landing result still always points to v7.


And I'm still here three years later. The problem is still not fully solved, but it has progressed a lot.

I came back because now you can sketch a UI with OpenAI's new image generator and obtain something very professional-looking:
[image: AI-generated UI mock]

Then you can pass that image directly to Claude and get a well-written ~300 lines of UI initialization code (you may need to fix a few API calls).

Claude will lay out the code for all the elements, but it makes some mistakes, so you can manually adjust the design in the simulator.
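
To give a concrete idea, here is a tiny, purely illustrative fragment of the kind of code that comes back, with the sort of rename you typically fix by hand (the widget names and layout are my own example, not actual Claude output; check the exact names against your LVGL version):

```python
import lvgl as lv

lv.init()  # assumes a display driver is registered
scr = lv.obj()

# A container with a flex column layout, typical of generated v9-style code
panel = lv.obj(scr)
panel.set_size(240, 320)
panel.set_flex_flow(lv.FLEX_FLOW.COLUMN)

title = lv.label(panel)
title.set_text("Thermostat")

# A common manual fix: models often emit the pre-v9 name lv.btn,
# which LVGL v9 renamed to lv.button
btn = lv.button(panel)   # was generated as: lv.btn(panel)
btn_label = lv.label(btn)
btn_label.set_text("Set")

lv.screen_load(scr)      # was generated as: lv.scr_load(scr)
```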

Some other mocks you can easily generate:

[images: more AI-generated UI mocks]

That’s awesome! It’s great to see how AI tools like OpenAI Codex can be so useful in quickly generating code from simple descriptions. The fact that it can generate UI components for different frameworks like wxWidgets, PyQt, and LVGL directly from descriptions is impressive. It’s fascinating to see how the AI adapts to various libraries and generates code that can run without much manual intervention.

I’ve also had some success with AI tools for coding, and while they are great for generating simple UI layouts, I do notice that the more complex the request, the more tweaks you need to make. For example, sometimes the generated code uses older versions of APIs, like you mentioned with the v7 API in LVGL. That’s something to watch out for, especially when libraries evolve quickly.

It’s also a fun experiment to see how it generates code for different frameworks just by switching imports. I imagine the PyQt5 example you got was particularly interesting since that’s one of the more feature-rich UI frameworks.

It’s exciting to think about the future possibilities for developers who can use these tools to speed up repetitive tasks, like setting up UI elements, while still having the flexibility to tweak the generated code for more complex features.

Thanks for sharing this, it definitely gives me some inspiration to experiment more with AI-assisted coding!

@Ovestint1964 did you generate your reply with AI because of a language barrier?