I was trying to access the buf_act inside the ILI9XXX driver in order to generate a screenshot. This is a raw memory Blob (basically a void*). Can I dereference this from MPY? How? Can I e.g. cast this to some memory access object?
Yes you can.
You can use the __dereference__ member function of Blob (or any struct) to get a memoryview to the data which the Blob points to. __dereference__ receives an optional parameter for the size (in bytes) of the data.
Other C-pointer related things you can do are casting (with the __cast__ member function, or with a struct constructor) and using the C_Pointer struct to cast to some native types, handle double pointers, etc.
Here is an example.
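As a plain-CPython illustration of what you get back: __dereference__(size) yields a memoryview-like object over the raw bytes, so everything below (done here with an ordinary bytearray standing in for the screen buffer, since lvgl itself isn't needed for the demonstration) applies to it as well.

```python
# An ordinary bytearray stands in for the 240x320 RGB565 screen buffer;
# memoryview(buf) behaves like what Blob.__dereference__(size) returns.
buf = bytearray(240 * 320 * 2)   # fake full-screen RGB565 buffer
mv = memoryview(buf)             # zero-copy view into the raw bytes

mv[0:2] = b'\xf8\x00'            # write one red pixel (big-endian RGB565)
first_pixel = (mv[0] << 8) | mv[1]
print(hex(first_pixel), len(mv))
```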
Thanks. Indeed, this gives me access to the whole 150k screen buffer:
screendata = lv.scr_act().get_disp().get_buf().buf_act.__dereference__(320*240*2)
This can then directly be sent e.g. through a socket like so:
mysocket.send(lv.scr_act().get_disp().get_buf().buf_act.__dereference__(320*240*2))
The resulting content can then be converted to a png on a linux PC:
ffmpeg -vcodec rawvideo -f rawvideo -pix_fmt rgb565 -s 240x320 -i screen.raw -f image2 -vcodec png screenshot.png
But … this is not a snapshot of the entire screen. And indeed, imho lvgl only updates portions and sends these to the screen. Does LVGL have a representation of the entire current screen? Or can I ask it to create one?
Have you seen this blog post?
It looks like what is missing is something like:
lv.scr_act().invalidate()
lv.refr_now(lv.disp_get_default())
No, I haven’t seen that. Thanks for the pointer. There’s probably some more magic required, as this blog post catches the subsequent flush events. Simply reading from the buffer does not work as expected: the buffer only contains fractions of the screen contents. I’ll try to catch the flush events as well.
Hmm … I can hook into lv.scr_act().get_disp().driver.flush_cb with my own callback. When I do a full refresh as you suggested beforehand, I see four calls updating 19200 pixels each, i.e. 1/4 of the screen per call. This by itself makes sense … but it happens with the ili9XXX.py driver, which imho is supposed to always allocate and use a full screen buffer.
Why do you think so?
The default value of factor is 4, so it should allocate a buffer of 1/4 the size of the screen, not the full screen.
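The arithmetic matches the four flush calls observed above; a quick sanity check for a 240x320 display with 2 bytes per RGB565 pixel and factor=4:

```python
# Buffer sizing for the ili9XXX driver with factor=4 on a 240x320 display.
WIDTH, HEIGHT, BYTES_PER_PIXEL, FACTOR = 240, 320, 2, 4

pixels_per_buffer = WIDTH * HEIGHT // FACTOR            # pixels per flush
bytes_per_buffer = pixels_per_buffer * BYTES_PER_PIXEL  # bytes per flush
flushes_per_frame = FACTOR                              # flushes to cover the screen

print(pixels_per_buffer, bytes_per_buffer, flushes_per_frame)
```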
Argh … ‘//’ is not a comment delimiter in Python
Yes, indeed, you are right. But I am running into various small other issues. E.g. this:
orig = lv.scr_act().get_disp().driver.flush_cb
lv.scr_act().get_disp().driver.flush_cb = my_flush
lv.scr_act().invalidate()
lv.refr_now(lv.disp_get_default())
lv.scr_act().get_disp().driver.flush_cb = orig
does install my own callback, and it is supposed to restore the old one afterwards. Still, I get output from my own callback, and the restored original callback is never called again.
Some print’ing tells me that orig becomes a “Blob”, which is expected for the hybrid driver, and when I restore it, lv.scr_act().get_disp().driver.flush_cb also becomes a Blob again. Still, the replacement callback keeps being called …
Try to use the latest version of lv_micropython, where function pointers are supported.
You would be able to call the original flush_cb from MicroPython (it’s no longer a “Blob”), so you can create a delegate function that captures the data and calls the original flush_cb before it returns.
Ah, indeed, now they are <function>s. But something is broken (or I don’t understand it). This code:
orig = lv.scr_act().get_disp().driver.flush_cb
lv.scr_act().get_disp().driver.flush_cb = orig
crashes with “Backtrace: 0x00005f7f:0x3ffbbbc0 |<-CORRUPTED” on the next attempt to update the screen …
Edit: Also calling the original callback from within my own python one doesn’t work as expected. Instead my own python function is called recursively until the stack overflows.
Strange. This is working for me, at least in the simulator:
orig_flush = lv.disp_get_default().driver.flush_cb
def my_flush(drv, area, buf):
    print('Update %d bytes' % area.get_size())
    orig_flush(drv, area, buf)
lv.disp_get_default().driver.flush_cb = my_flush
On ESP32 with lv_micropython fresh from git:
import lvgl as lv
from ili9XXX import ili9341
disp = ili9341(miso=19, mosi=23, clk=18, cs=5, dc=32, rst=27, spihost=1, power=-1, backlight=33, backlight_on=1, mhz=80, factor=4, hybrid=True)
lv.init()
scr = lv.obj()
btn = lv.btn(scr)
btn.align(lv.scr_act(), lv.ALIGN.CENTER, 0, 0)
label = lv.label(btn)
label.set_text("Button")
lv.scr_load(scr)
orig_flush = lv.disp_get_default().driver.flush_cb
def my_flush(drv, area, buf):
    print('Update %d bytes' % area.get_size())
    orig_flush(drv, area, buf)
lv.disp_get_default().driver.flush_cb = my_flush
Results in:
$ ampy run simple.py
ILI9341 initialization completed
Enable backlight
Double buffer
Update 19200 bytes
Update 19200 bytes
Update 19200 bytes
...
Update 19200 bytes
Update 19200 bytes
Update 19200 bytes
Update 19200 bytes
Traceback (most recent call last):
File "<stdin>", line 19, in my_flush
File "<stdin>", line 19, in my_flush
File "<stdin>", line 19, in my_flush
File "<stdin>", line 19, in my_flush
...
File "<stdin>", line 19, in my_flush
File "<stdin>", line 18, in my_flush
RuntimeError: maximum recursion depth exceeded
Yes, I confirm there is a problem.
It seems to be related to the way user_data is used in ili9XXX.
As a workaround, I can suggest calling esp.ili9xxx_flush directly instead of the original flush_cb. This would work for the “hybrid” mode, which is the default.
import espidf as esp
...
def my_flush(drv, area, buf):
    print('Update %d bytes' % area.get_size())
    esp.ili9xxx_flush(drv, area, buf)

drv = lv.disp_get_default().driver
drv.flush_cb = my_flush
Thanks again. Indeed, that seems to work, and it can also be used to restore the original pointer. A single complete frame can thus be caught with this:
def my_flush(drv, area, buf):
    print("Area:", area.x1, area.x2, area.y1, area.y2)
    espidf.ili9xxx_flush(drv, area, buf)
    # drv.flush_ready() # use this if you don't need the screen update itself

lv.scr_act().get_disp().driver.flush_cb = my_flush
lv.scr_act().invalidate()
lv.refr_now(lv.disp_get_default())
lv.scr_act().get_disp().driver.flush_cb = espidf.ili9xxx_flush
The result is the screen memory, exactly once:
Area: 0 239 0 79
Area: 0 239 80 159
Area: 0 239 160 239
Area: 0 239 240 319
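As a sanity check (plain Python, using the area coordinates printed above): the four strips are each 240x80 pixels and together cover the full 240x320 screen exactly once.

```python
# The four flushed areas from the log above (x1, x2, y1, y2, inclusive).
areas = [(0, 239, 0, 79), (0, 239, 80, 159),
         (0, 239, 160, 239), (0, 239, 240, 319)]

total_pixels = sum((x2 - x1 + 1) * (y2 - y1 + 1)
                   for x1, x2, y1, y2 in areas)
print(total_pixels)  # should equal 240 * 320, i.e. exactly one full frame
```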
Now let’s see how fast this is and if I can build some kind of low FPS remote live view from this.
BTW: The correct conversion command for this to png is:
ffmpeg -vcodec rawvideo -f rawvideo -pix_fmt rgb565be -s 240x320 -i screen.raw -f image2 -vcodec png screenshot.png
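If you'd rather skip ffmpeg, the pixel conversion itself is a few lines of plain Python (encoding the result as PNG then needs e.g. Pillow, which is not shown here). This assumes big-endian RGB565, matching the command above; the function name is just for illustration.

```python
def rgb565be_to_rgb888(raw):
    """Convert big-endian RGB565 bytes to flat [r, g, b, r, g, b, ...] bytes."""
    out = bytearray()
    for i in range(0, len(raw), 2):
        pixel = (raw[i] << 8) | raw[i + 1]
        out.append((pixel >> 8) & 0xF8)   # red: top 5 bits
        out.append((pixel >> 3) & 0xFC)   # green: middle 6 bits
        out.append((pixel << 3) & 0xF8)   # blue: bottom 5 bits
    return bytes(out)

print(rgb565be_to_rgb888(b'\xf8\x00'))   # pure red -> b'\xf8\x00\x00'
```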
And this is javascript that can display the raw screen data:
// Screenshot is downloaded from /screen and will be displayed in a canvas like this:
// <canvas id="cv" width="240" height="320" style="border:1px solid black;"></canvas>
var request = new XMLHttpRequest();
request.onreadystatechange = function() {
console.log("REQ", request.readyState, request.status);
if (request.readyState == 4) {
if(request.status == 200) {
var pixelArray = new Uint8Array(request.response);
var canvas = document.getElementById("cv");
var ctx = canvas.getContext("2d");
var imageData = ctx.createImageData(canvas.width, canvas.height);
for(var i = 0; i < pixelArray.length / 2; i++) {
// read two bytes into one big endian uint16
var pixel = (pixelArray[2*i]<<8)+pixelArray[2*i+1];
// convert rgb565 to rgba32
imageData.data[4*i+0] = (pixel >> 8) & 0xf8;
imageData.data[4*i+1] = (pixel >> 3) & 0xfc;
imageData.data[4*i+2] = (pixel << 3) & 0xf8;
imageData.data[4*i+3] = 0xff;
}
ctx.putImageData(imageData, 0, 0);
} else {
// screenshot download failed ...
}
}
};
// Send request with data
request.open("GET", "screen", true);
request.responseType = "arraybuffer";
request.setRequestHeader("Cache-Control", "no-cache");
request.send( null );
Very nice!
How fast does it work?
Haven’t measured this, but a single screen feels quite instant. I haven’t tried live video yet. For that I’d transfer the flush events as they are and only request a full screen once at the start of the transmission. LVGL’s way of updating only parts of the screen should fit live remote video quite nicely. Also, for me one of the biggest bottlenecks in HTTP seems to be the ESP32 parsing the request header: socket.readline() seems to be very slow. Unfortunately, reading bigger chunks at once results in the last incomplete chunk being lost, as that read runs into a timeout and thus throws an exception. I could also try to read from an ongoing connection, but that would probably need WebSockets, which I haven’t implemented yet.
Live video definitely needs some further thinking …
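A sketch of one way around the readline() bottleneck, assuming a CPython-style socket API (MicroPython's socket module differs in details, so treat the names as an assumption): read in larger chunks and stop as soon as the blank line terminating the header arrives, instead of relying on a read timeout.

```python
import socket

def read_request_header(conn, chunk_size=512, max_len=4096):
    """Read an HTTP request header in larger chunks, stopping at the
    blank line that terminates it instead of waiting for a timeout."""
    data = b""
    while b"\r\n\r\n" not in data and len(data) < max_len:
        chunk = conn.recv(chunk_size)
        if not chunk:              # peer closed the connection
            break
        data += chunk
    header, _, rest = data.partition(b"\r\n\r\n")
    return header, rest            # rest = any body bytes already received

# Demo with a local socket pair standing in for a client connection.
a, b = socket.socketpair()
a.sendall(b"GET /screen HTTP/1.1\r\nHost: esp32\r\n\r\n")
header, rest = read_request_header(b)
print(header.split(b"\r\n")[0])
```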
Ok, couldn’t resist. Just brute-forcing full-screen updates at a 100 ms interval results in ~1 FPS. Getting rid of the slow header parsing and transmitting only the updated regions could make this quite usable.
I wonder if you could use a separate port and run a “dumb” HTTP server that always serves the screen data. WebSockets are another option like you mentioned (and probably the more standard approach).
Let us know how this project comes along - I’m very interested in how LVGL can be combined with web things, since I work with both on a regular basis.