Is the rp2 port dysfunctional at runtime?

After a few weeks of not syncing, I fetched the latest lv_micropython master and built it via make -C ports/rp2 USER_C_MODULES=../../lib/lv_bindings/bindings.cmake -j16 all (the only change was setting the color depth to 16 with swapped bits in lv_conf.h). The build runs fine and I upload firmware.uf2 to the board, but running even the most trivial code like:

import lvgl as lv
lv.init()
scr=lv.obj()

will freeze the board.

It also happened, with a slightly out-of-date (and now unreproducible) build, that I got a memory allocation error for 21232342323 bytes (or something similar).

Is anyone actively running the rp2 port? No issues?

The best way to dive into this problem would be with a debugger… but I don’t have one for the RP2040. I checked the changes in my lvgl clone and compared them with the current lvgl repo, but I don’t know where this problem comes from.

Before getting a correct firmware I had some problems with the Arabic fonts, so I removed them from my firmware; with them included I could still call lv.obj(), and the problem only appeared during text rendering. I also saw some crazy memory allocations, but finally understood that the Thonny IDE was causing them.
You can find the posts in the topic “Build LVGL for Raspberry Pi Pico” (the 1-May message).

Also, I think LVGL can report more information if you enable the log and trace features in the lv_conf.h file.

@eudoxos

Did you initialize some display? Or are you running this without any display driver?
Without any display driver I think LVGL crashes when you try to create LVGL objects.
If you already initialized some display, do you reach its callbacks? Do you get anything on the display?
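
For reference, the usual order is to register a (possibly dummy) display driver before creating any object. A minimal sketch, assuming the v7-era MicroPython bindings used in this thread (names such as lv.disp_buf_t and lv.disp_drv_t differ between LVGL versions, and my_flush_cb is a placeholder for your driver's flush callback):

import lvgl as lv

lv.init()

buf = bytearray(320 * 10 * 2)            # one 10-line draw buffer at 16-bit colour depth
disp_buf = lv.disp_buf_t()
disp_buf.init(buf, None, len(buf) // 2)  # size is given in pixels, not bytes

disp_drv = lv.disp_drv_t()
disp_drv.init()
disp_drv.buffer = disp_buf
disp_drv.flush_cb = my_flush_cb          # placeholder: your driver's flush callback
disp_drv.register()

scr = lv.obj()                           # only create objects after a display is registered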

Other ideas:

  • Did you try upstream Micropython v1.17 (without LVGL)?
  • Did you try latest LVGL on rp2 without Micropython?
  • Can you connect a debugger to your rp2 board?

I checked @jgpeiro’s older posts; it does not look like the same issue I have.

@amirgon, uPy works fine (with LVGL built in, but not running LVGL code), and testing the display driver(s) is not a problem (see https://github.com/eudoxos/pico-st7789-driver-lvgl/issues/1 for screenshots), but instantiating an lvgl object fails. I did not try LVGL on rp2 without uPy (no time for that at the moment), and I have no debugger (I could use another Pico as the probe, but no time for that either).

Okay, so after updates, recompilation and such, it seems LVGL objects are actually constructed fine, but the freeze happens after flushing the first LCD tile (via the driver callback). The disp_drv_flush_cb done line, after which nothing else appears, comes from this line. That is an indication something might be up in the LCD driver.

lv.init()
disp_buf_t.init()
driver
lv.obj()
... okay!
.
disp_drv_flush_cb 0 319 0 63
disp_drv_flush_cb done

Why don’t you always call lv_disp_flush_ready(disp_drv)?
Are you sure about conditioning it on disp_drv.flush_is_last()?

Thanks for reviewing the code! I tried to follow the docs here: Display interface — LVGL documentation, though I must have misunderstood them. I thought flush_ready was only to be called once all dirty tiles had been refreshed.

Without the flush_is_last conditional, I don’t get the “freeze” anymore :slight_smile: I get a green screen, but that looks much more tractable. If I need something, I will post here again.
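
For the record, the shape of a flush callback that signals completion after every tile looks roughly like this (a sketch only; spi and set_window are placeholders for my driver's SPI object and address-window helper, and the factor of 2 assumes the 16-bit colour depth configured above):

import lvgl as lv

def my_flush_cb(disp_drv, area, color_p):
    # number of bytes in this tile, assuming 16-bit colour depth
    size = (area.x2 - area.x1 + 1) * (area.y2 - area.y1 + 1) * 2
    set_window(area.x1, area.y1, area.x2, area.y2)  # placeholder: set the LCD address window
    spi.write(color_p.__dereference__(size))        # push the rendered tile over SPI
    lv.disp_flush_ready(disp_drv)                   # always call it, for every tile, not only the last one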

Working nicely now. Any advice on where to find the lv_utils module? I tried import lv_utils as in uasyncio_example1.py, but the module cannot be found. Does it need to be ported, or somehow enabled? I grepped the sources and did not find much.

I would like to use this construct instead of a hand-written loop:

lv_utils.event_loop(refresh_cb=lv.task_handler,asynchronous=True)
uasyncio.Loop.run_forever()

Never mind, I just figured out it does not get baked into the fw and I need to copy it by hand. Please correct me if I am wrong.

So the display is working!
Very nice! :clap:

It should be a frozen module.
You can add a softlink under modules/, or add it in the manifest file.

Thanks for the hints. include runs the file through the Python interpreter (apparently), while freeze needs an entire directory. I will copy by hand for now. (Perhaps a symlink in git would be a good way to go for the LVGL uPy repo, but AFAIK symlinks are still handled in funny ways on Windows when checked out from git, so I am not sure.)

What would be the next step to properly integrate the driver into the LVGL Micropython repo? I would actually like someone to review and stress-test it a bit.

One more technical thing. The Pico port’s SPI uses DMA automatically for all transfers greater than 20b. @jgpeiro’s explicit DMA has the advantage of doing the transfer in the background. Another option for backgrounding the SPI transfer would be to use the second core (which is not used for anything else) to run a DMA thread, which would just do the transfer or wait for the next data. Does that sound like something worth considering? Is it more complex than it sounds? The advantage would be that any multi-core platform supported by Micropython could run it.

That’s not scalable. Every time we update lv_utils.py in the future you’ll have to copy it again.
We are using symlinks on esp32 and freeze on stm32; both work fine. You can freeze individual files, not only directories; see how this is done on the stm32.
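
For example, a hypothetical manifest.py fragment along those lines (the path to lv_utils.py is an assumption; check how the stm32 port's manifest actually does it):

# freeze a single file from the bindings (path is an assumption, adjust to your tree)
freeze("$(MPY_DIR)/lib/lv_bindings/lib", "lv_utils.py")
# or freeze everything under the port's modules/ directory
freeze("$(PORT_DIR)/modules")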

Copying files manually is never a good solution.

Create the driver under driver/rp2, test it, document it and freeze it on lv_micropython.
This would require first a PR on lv_binding_micropython, then a PR on lv_micropython.

An MCU core is used for running logic, arithmetic, data processing etc.
DMA is a hardware engine that copies memory without CPU intervention, freeing the CPU for other tasks.
Usually initiating DMA requires only a very quick setup to tell the DMA engine what memory to move where (to the SPI peripheral in this case); afterwards the transfer is done in the background without CPU intervention.

So unless I’m missing something here, it doesn’t really make sense to “run the DMA on the other core”. The DMA setup is very short and could be done on the same core LVGL and Micropython are running on. The transfer itself should not affect the core at all. Almost any modern MCU supports DMA, so why waste the other core on doing that work?


I was not entirely clear. Regular machine.SPI.write on the rp2 port will use DMA for the data transfer, but the call is blocking and does not return to Python until the transfer itself is done (dma_channel_wait_for_finish_blocking). If this were running in another thread, it would not matter (or if machine.SPI.write were async :wink: ).

Using DMA explicitly as @jgpeiro did does the transfer properly in the background, of course, but it comes at the cost of (admirable) low-level, platform-specific code.

I see lv_utils is symlinked in the esp32 port; I will do it like that.

That’s a shame, probably worth a PR on upstream Micropython.

Anyway, you can create a thread in Micropython and block it on the same core, no need to take over the other core.

Another option is to create a C display driver, where you could use non-blocking DMA.
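
For the first option, a minimal sketch of off-loading the blocking machine.SPI.write() to a Python thread (names are illustrative, there is no double buffering, and calling lv.disp_flush_ready() from another thread may need extra care in a real driver):

import _thread
import lvgl as lv

class ThreadedFlusher:
    def __init__(self, spi):
        self.spi = spi
        self.pending = None                    # (disp_drv, buffer) waiting to be sent
        self.lock = _thread.allocate_lock()
        self.lock.acquire()                    # worker sleeps until the first tile arrives
        _thread.start_new_thread(self._worker, ())

    def _worker(self):
        while True:
            self.lock.acquire()                # block here instead of in the LVGL loop
            drv, data = self.pending
            self.spi.write(data)               # the blocking DMA transfer runs in this thread
            lv.disp_flush_ready(drv)           # report the tile as flushed

    def flush_cb(self, disp_drv, area, color_p):
        size = (area.x2 - area.x1 + 1) * (area.y2 - area.y1 + 1) * 2   # 16-bit colour assumed
        self.pending = (disp_drv, color_p.__dereference__(size))
        self.lock.release()                    # wake the worker and return immediately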

The driver itself is platform-agnostic; only the DMA class is port-specific (and optional). So it might go to driver/generic, right?

Yes. If all modules used in your driver are platform-independent (they can have platform-specific implementations, but must have a platform-independent API), then it can definitely go to driver/generic.