Understanding LVGL's rendering process

I have a library I built which unfortunately overlaps LVGL somewhat in functionality. Originally it started as just a stateless graphics library for IoT, which made it quite different from LVGL's stateful graphics with its controls and widgets. However, Espressif (ESP32) changed their LCD APIs such that, at the end of the day, stateless graphics libraries don't work very well by themselves (long story), so I had to add a stateful user-interface library on top.

To that end, I have dirty rectangles that trigger controls to redraw when a portion of the screen needs to be refreshed, like LVGL does.

I currently have a naive algorithm for this, and I'd like to improve it by subdividing my screen into transfer regions more efficiently, so that controls are redrawn as little as possible. I've tentatively settled on a k-d tree approach, and I'm wondering whether LVGL uses a similar mechanism for subdividing and sorting rectangles.

I saw that it sorts, I think? But I'm getting lost in the code. Can someone give me a high-level explanation of how it works, just so I know whether or not I'm on the right track?

Hi, I'm very interested in your idea of using a k-d tree for screen segmentation to reduce redrawn pixels, but I don't understand how it would work. Can you tell me more about it?