How to dereference a Blob?

Try using the latest version of lv_micropython, where function pointers are supported.
You would be able to call the original flush_cb from Micropython (it’s no longer a “Blob”), so you can create a delegate function that captures the data and calls the original flush_cb before it returns.

Ah, indeed, they are <function>s now. But something is broken (or I don’t understand it). This code:

orig = lv.scr_act().get_disp().driver.flush_cb
lv.scr_act().get_disp().driver.flush_cb = orig

crashes with “Backtrace: 0x00005f7f:0x3ffbbbc0 |<-CORRUPTED” on the next attempt to update the screen …

Edit: Calling the original callback from within my own Python one doesn’t work as expected either. Instead my own Python function is called recursively until the stack overflows.

Strange. This is working for me, at least in the simulator:

orig_flush = lv.disp_get_default().driver.flush_cb

def my_flush(drv, area, buf):
    print('Update %d bytes' % area.get_size())
    orig_flush(drv, area, buf)

lv.disp_get_default().driver.flush_cb = my_flush

On ESP32 with lv_micropython fresh from git:

import lvgl as lv
from ili9XXX import ili9341

disp = ili9341(miso=19, mosi=23, clk=18, cs=5, dc=32, rst=27, spihost=1, power=-1, backlight=33, backlight_on=1, mhz=80, factor=4, hybrid=True)

lv.init()
scr = lv.obj()
btn = lv.btn(scr)
btn.align(lv.scr_act(), lv.ALIGN.CENTER, 0, 0)
label = lv.label(btn)
label.set_text("Button")
lv.scr_load(scr)

orig_flush = lv.disp_get_default().driver.flush_cb

def my_flush(drv, area, buf):
    print('Update %d bytes' % area.get_size())
    orig_flush(drv, area, buf)

lv.disp_get_default().driver.flush_cb = my_flush

Results in:

$ ampy run simple.py
ILI9341 initialization completed
Enable backlight
Double buffer
Update 19200 bytes
Update 19200 bytes
Update 19200 bytes
...
Update 19200 bytes
Update 19200 bytes
Update 19200 bytes
Update 19200 bytes
Traceback (most recent call last):
  File "<stdin>", line 19, in my_flush
  File "<stdin>", line 19, in my_flush
  File "<stdin>", line 19, in my_flush
  File "<stdin>", line 19, in my_flush
...
  File "<stdin>", line 19, in my_flush
  File "<stdin>", line 18, in my_flush
RuntimeError: maximum recursion depth exceeded

Yes, I confirm there is a problem.
It seems to be related to the way user_data is used in ili9XXX: presumably the flush_cb you read back still dispatches through user_data, which by then points at the new Python callback, hence the recursion.

As a workaround, I can suggest calling esp.ili9xxx_flush directly instead of the original flush_cb. This works for “hybrid” mode, which is the default.

import espidf as esp

...

def my_flush(drv, area, buf):
    print('Update %d bytes' % area.get_size())
    esp.ili9xxx_flush(drv, area, buf)

drv = lv.disp_get_default().driver
drv.flush_cb = my_flush

Thanks again. Indeed, that seems to work, and it can also be used to restore the original pointer. A single complete frame can thus be caught with this:

import espidf

def my_flush(drv, area, buf):
    print("Area:", area.x1, area.x2, area.y1, area.y2)
    espidf.ili9xxx_flush(drv, area, buf)
    # drv.flush_ready()   # use this if you don't need the screen update itself

lv.scr_act().get_disp().driver.flush_cb = my_flush
lv.scr_act().invalidate()
lv.refr_now(lv.disp_get_default())
lv.scr_act().get_disp().driver.flush_cb = espidf.ili9xxx_flush

The result is the screen memory, exactly once:

Area: 0 239 0 79
Area: 0 239 80 159
Area: 0 239 160 239
Area: 0 239 240 319

Now let’s see how fast this is and if I can build some kind of low FPS remote live view from this.
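
For reference, a sketch of how each flushed area could be appended to a screen.raw file. It assumes the binding’s __dereference__() helper on the buffer Blob and lv.color_t.__SIZE__ for the bytes per pixel (2 for RGB565); dump_flush is just an illustrative name:

# Illustrative sketch: append each flushed area to screen.raw.
# Assumes the binding exposes __dereference__() on the buffer Blob
# and lv.color_t.__SIZE__ (2 bytes per pixel for RGB565).
f = open('screen.raw', 'wb')

def dump_flush(drv, area, buf):
    nbytes = area.get_size() * lv.color_t.__SIZE__
    f.write(bytes(buf.__dereference__(nbytes)))  # copy out the pixel data
    espidf.ili9xxx_flush(drv, area, buf)         # still update the panel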

BTW: The correct command to convert this to PNG is:

ffmpeg -vcodec rawvideo -f rawvideo -pix_fmt rgb565be -s 240x320 -i screen.raw -f image2 -vcodec png screenshot.png

And this is JavaScript that can display the raw screen data:

// Screenshot is downloaded from /screen and will be displayed in a canvas like this:
// <canvas id="cv" width="240" height="320" style="border:1px solid black;"></canvas>

var request = new XMLHttpRequest();
request.onreadystatechange = function() {
    console.log("REQ", request.readyState, request.status);
    if (request.readyState == 4) {
        if (request.status == 200) {
            var pixelArray = new Uint8Array(request.response);
            var canvas = document.getElementById("cv");
            var ctx = canvas.getContext("2d");
            var imageData = ctx.createImageData(canvas.width, canvas.height);

            // two bytes per pixel, so iterate over half the byte count
            for (var i = 0; i < pixelArray.length / 2; i++) {
                // read two bytes into one big-endian uint16
                var pixel = (pixelArray[2*i] << 8) + pixelArray[2*i+1];
                // convert rgb565 to rgba32
                imageData.data[4*i+0] = (pixel >> 8) & 0xf8;
                imageData.data[4*i+1] = (pixel >> 3) & 0xfc;
                imageData.data[4*i+2] = (pixel << 3) & 0xf8;
                imageData.data[4*i+3] = 0xff;
            }
            ctx.putImageData(imageData, 0, 0);
        } else {
            // screenshot download failed ...
        }
    }
};
// Send the request
request.open("GET", "screen", true);
request.responseType = "arraybuffer";
request.setRequestHeader("Cache-Control", "no-cache");
request.send(null);

Very nice!

How fast does it work?

Haven’t measured this. But for a single screen it feels quite instant. Haven’t tried to do live video yet. For that I’d transfer the flush events as they are and only request a full screen once at the start of transmission. LVGL’s way of updating only parts of the screen should fit some live remote video quite nicely. Also, for me one of the biggest bottlenecks in HTTP seems to be the ESP32 parsing the request header. socket.readline() seems to be very slow. Unfortunately, reading bigger chunks at once results in the last incomplete chunk being lost, as that read runs into a timeout and thus throws an exception. I could also try to read from an ongoing connection, but that would probably need WebSockets, which I haven’t implemented yet.
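
One way around the lost chunk could be to accumulate into a buffer and treat the timeout as the end of the header. Just a sketch, not tested:

import socket

def read_header(conn, timeout_s=0.5):
    # Read the request header in larger chunks; a timeout just means
    # the client has sent everything it is going to send for now.
    conn.settimeout(timeout_s)
    data = b''
    while b'\r\n\r\n' not in data:
        try:
            chunk = conn.recv(256)
            if not chunk:       # peer closed the connection
                break
            data += chunk
        except OSError:         # timeout: keep what arrived so far
            break
    return data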

Live video definitely needs some further thinking …

OK, couldn’t resist. Just brute-forcing full-screen updates at a 100 ms interval results in ~1 FPS. Getting rid of the slow header parsing and transmitting only the updated regions could make this quite usable.
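
Collecting those regions could look roughly like this sketch (it reuses the __dereference__ assumption from the earlier sketch):

# Queue each updated region so the server can send deltas instead of
# full frames; a client then blits each rectangle into its canvas.
pending = []

def region_flush(drv, area, buf):
    nbytes = area.get_size() * lv.color_t.__SIZE__
    pending.append((area.x1, area.y1, area.x2, area.y2,
                    bytes(buf.__dereference__(nbytes))))
    espidf.ili9xxx_flush(drv, area, buf)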

I wonder if you could use a separate port and run a “dumb” HTTP server that always serves the screen data. WebSockets are another option, as you mentioned (and probably the more standard approach).
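
The “dumb” server could be as simple as this sketch, where get_frame() is a hypothetical function returning the current raw frame:

import socket

def serve_raw(get_frame, port=8081):
    # 'Dumb' server: every connection gets the raw frame, no parsing.
    s = socket.socket()
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('0.0.0.0', port))
    s.listen(1)
    while True:
        conn, _ = s.accept()
        conn.send(b'HTTP/1.0 200 OK\r\n'
                  b'Content-Type: application/octet-stream\r\n\r\n')
        conn.send(get_frame())
        conn.close()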

Let us know how this project comes along - I’m very interested in how LVGL can be combined with web things, since I work with both on a regular basis. :slightly_smiling_face:

You could use HTTP directly with the espidf module. It’s more efficient because everything is implemented in C (with a Micropython API).
While only an HTTP client is currently supported, I believe it should be pretty easy to add an HTTP server.
HTTP/2 is also supported (but not documented yet).

Another option is to avoid communicating with the web client directly.
Instead, the device could interact with some service on a remote server over a TCP socket (or UDP, or any other way), and the service would push the data to the web client.
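
On the device side that push could be as simple as this sketch (relay.example.com and the framing are made up):

import socket, struct

relay = socket.socket()
relay.connect(('relay.example.com', 9000))  # hypothetical relay service

def push_region(x1, y1, x2, y2, pixels):
    # Tiny made-up framing: 4 coordinates plus payload length, then pixels.
    relay.send(struct.pack('<4HI', x1, y1, x2, y2, len(pixels)))
    relay.send(pixels)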

For preservation I’ve added the screenshot feature to https://github.com/harbaum/LittleBlockly

I need to figure out where that API is and how it works.

Nah … that requires some separate service which no one will maintain, and in a few months the devices would stop working.

WebSockets and the like might also be a little heavy for my simple mpy web server, as they require ssl/md5 and such. I think using the built-in web server may be the way to go.

I fixed the problem on the latest version so the workaround is no longer needed.
Now this should work:

def my_flush(drv, area, buf):
    print('Update %d bytes' % area.get_size())
    global orig_flush
    orig_flush(drv, area, buf)

drv = lv.disp_get_default().driver
orig_flush = drv.flush_cb
drv.flush_cb = my_flush

It’s useful if we want a platform-agnostic “print screen”; it should now work with any display driver.
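
With the fix in place, a platform-agnostic screenshot could be sketched like this, again assuming the binding’s __dereference__() helper and lv.color_t.__SIZE__:

def screenshot(path='screen.raw'):
    # Wrap the current flush_cb, force a full refresh, then restore it.
    drv = lv.disp_get_default().driver
    orig = drv.flush_cb
    f = open(path, 'wb')

    def capture(d, area, buf):
        nbytes = area.get_size() * lv.color_t.__SIZE__
        f.write(bytes(buf.__dereference__(nbytes)))
        orig(d, area, buf)

    drv.flush_cb = capture
    lv.scr_act().invalidate()
    lv.refr_now(lv.disp_get_default())
    drv.flush_cb = orig
    f.close()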

Does MicroPython offer a way to generate image files (PNG, etc.)?

We are using the binding script to generate a Micropython API for the lodepng library, which provides encoding/decoding functions for PNG.
For now I removed the encoding part to save some program RAM, since we are mostly decoding PNG files, but it’s very easy to add it back if we ever want to.

The question is which path the image takes from there. If you want it to become a PNG, you likely want to export it from the target device, and there are two main paths: SD card and web browser.

In the case of the web browser, there’s IMHO no reason to do the PNG encoding on the target device. Instead you can download the raw image like I do now and then use JavaScript on the browser side to convert it to PNG, as explained at https://stackoverflow.com/questions/923885/capture-html-canvas-as-gif-jpg-png-pdf.

If you download to SD card, then yes, an easy-to-use image format may be useful if you don’t want to run conversion tools on the PC. But maybe it’s easier to use a simple uncompressed format like TGA.
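
For the TGA route, a raw RGB565 frame could be wrapped in a minimal uncompressed 24-bit TGA roughly like this (a sketch; the per-pixel loop is slow):

import struct

def raw565_to_tga(raw, w, h, path='screen.tga'):
    with open(path, 'wb') as f:
        # 18-byte header: type 2 (uncompressed true color), 24 bpp,
        # descriptor 0x20 = origin at top-left.
        f.write(struct.pack('<BBBHHBHHHHBB',
                            0, 0, 2, 0, 0, 0, 0, 0, w, h, 24, 0x20))
        for i in range(0, len(raw), 2):
            p = (raw[i] << 8) | raw[i + 1]      # big-endian RGB565
            f.write(bytes(((p << 3) & 0xF8,     # blue  (TGA stores BGR)
                           (p >> 3) & 0xFC,     # green
                           (p >> 8) & 0xF8)))   # red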

I tried something similar. After reading the first line with the GET request, I simply stopped reading/parsing further header fields and served the page immediately. Interestingly, browsers don’t like that: if you close the connection after sending the reply and before they have had a chance to send all their data, they complain.

Interesting. What if you let the browser send the remaining data but ignore it? Or does that slow things down to the point where it doesn’t help?