This is not an LVGL project, but I'm posting it here because it may interest LVGL users who want to add an AI assistant to their IoT projects (ESP32, Pico W…).
The idea is to connect to an OpenAI service and create a "fake conversation" in which you give your HAL to the assistant, and it helps the user program your device. Here is a demo on a PC:
Each user input is sent to OpenAI, and the response is then passed to the MicroPython interpreter.
It handles standard hardware without problems (Pin, ADC, Timer…) and libraries like network, socket, framebuf, uasyncio… and you can also add examples for your own custom API.
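A minimal sketch of how such a "fake conversation" prompt can be assembled: the HAL examples and the `build_prompt` helper below are my own illustrative assumptions, not the exact prompt used in the demo. The idea is to prepend your device API examples as context, then the conversation so far, then the new request as a comment for the model to complete.

```python
# Hypothetical HAL context: a few worked examples of the device API
# (these lines are assumptions for illustration, not the real prompt).
HAL_CONTEXT = """# MicroPython device API examples
# Turn the on-board LED on:
from machine import Pin
led = Pin(2, Pin.OUT)
led.on()
"""

def build_prompt(history, user_input):
    # Prepend the HAL examples, then the conversation so far,
    # then the new request as a comment the model should complete.
    return HAL_CONTEXT + "\n".join(history) + "\n# " + user_input + "\n"

prompt = build_prompt([], "blink the LED twice")
```

Each turn of the conversation appends the user's request plus the model's generated code to `history`, so the model keeps the full context of what it has already written.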
It "almost" works with LVGL, but most of the time it fails because the model targets a different LVGL API version than the one you are using.
Here is a similar example that creates a playable Pong game every minute, using the MicroPython framebuf module.
https://youtu.be/TgRffMAOubQ
The code to make an OpenAI request is based on MicroPython urequests (it requires your own API key).
import urequests as requests
import ujson as json

def codex(key, prompt, max_tokens, stop):
    # Send a completion request to the OpenAI API.
    url = "https://api.openai.com/v1/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + key
    }
    data = {
        "model": "code-davinci-002",
        "prompt": prompt,
        "temperature": 0.5,
        "max_tokens": max_tokens,
        "top_p": 1,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.25,
        "stop": [stop]
    }
    resp = requests.post(url, headers=headers, data=json.dumps(data))
    if resp.status_code != 200:
        # Bail out early: the body will not contain "choices" on error.
        print("resp.status_code != 200", resp.status_code)
        resp.close()
        return None, None
    choice = resp.json()["choices"][0]
    resp.close()  # free the socket (important on small boards)
    text = choice["text"]
    finish_reason = choice["finish_reason"]
    return text, finish_reason
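To close the loop, the text returned by `codex` has to be handed to the MicroPython interpreter. A minimal sketch of that step, with a stubbed response string standing in for the real network call (so no API key is needed to try it); running generated code in its own namespace keeps it from clobbering the host program's globals, though it is not a real sandbox:

```python
def run_generated(code_text):
    # Execute model-generated code in a fresh namespace so it cannot
    # overwrite the host program's globals. Note: exec() offers no real
    # isolation; the generated code can still touch the hardware.
    ns = {}
    exec(code_text, ns)
    return ns

# Stub standing in for: text, _ = codex(key, prompt, 256, stop)
generated = "result = 2 + 3"
ns = run_generated(generated)
print(ns["result"])  # → 5
```

In the real assistant you would also append `generated` to the conversation history before the next request, and wrap the `exec` in a try/except so a bad completion doesn't crash the device.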