I am just getting started with LVGL and I’m facing what is probably a simple issue. I am using a custom TI AM335x processor platform with a custom display board. The AM335x is running Linux 4.1.6 (based on the TI SDK) with a Buildroot-generated filesystem. The display board receives run-length encoded pixel data from the AM335x over USB, decodes it, and writes the pixel data to the screen (480x272, 16bpp).
I’ve managed to get the fbdev demo running, but I’m experiencing some tearing and some strange artifacts, which I believe are due to overloading my USB bus.
My driver keeps a shadow buffer that it compares against the current framebuffer to determine the ‘damaged’ area that must be redrawn.
I am not sure how I am supposed to notify the driver that all of the little changes are done and it can redraw the screen. I modified the fbdev_flush function in lv_drivers/display/fbdev.c to call an ioctl in my driver that checks the damaged area and redraws, but this seems to get called every 35-40 ms regardless of what I set the LV_DISP_DEF_REFR_PERIOD define to (currently 100).
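For reference, here is roughly what my modified flush looks like. The ioctl request number, the rectangle struct, and the driver interface are all specific to my own setup, so treat them as placeholders:

```c
/* Sketch of my modified fbdev_flush(); MYDRV_IOC_UPDATE and struct
 * damage_rect come from my kernel driver, not from lv_drivers. */
#include <sys/ioctl.h>
#include "lvgl/lvgl.h"

struct damage_rect { int x1, y1, x2, y2; };
#define MYDRV_IOC_UPDATE _IOW('d', 1, struct damage_rect)

void fbdev_flush(lv_disp_drv_t * drv, const lv_area_t * area, lv_color_t * color_p)
{
    /* ...unchanged lv_drivers code that copies color_p into the
     * mmap'd framebuffer for the given area... */

    /* Notify my driver of the damaged rectangle (fbfd is the
     * framebuffer fd already opened in fbdev_init()). */
    struct damage_rect r = { area->x1, area->y1, area->x2, area->y2 };
    ioctl(fbfd, MYDRV_IOC_UPDATE, &r);

    lv_disp_flush_ready(drv);   /* tell LVGL this flush is complete */
}
```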
My specific question is: how do I slow down the actual refresh rate to keep my driver from being overloaded? I’ve read the Drawing documentation, and it suggests that all of these small modifications should happen in the local draw buffers and only be pushed into the mmap’d memory once the refresh period is reached.
What do I need to do to prevent this from being called so often? Am I completely misunderstanding how this all works?
Ultimately, I can get the output to look correct if I force my Linux driver to redraw the entire screen every time I call my update ioctl, but as expected this causes tearing. I think some of this will be a non-issue since I don’t expect to need any animation, but I would like to better understand how this all works together so I feel more comfortable with what I am doing.
Thanks in advance, and sorry for the wall of text!
If I have set LV_DISP_DEF_REFR_PERIOD to 100, would you expect my screen to update any faster than 10 Hz? From what I can tell I am seeing closer to 25-30 Hz, which is causing some issues with my system.
I had modified my driver to only redraw the damaged rectangles as sent from the flush function, but this was causing all kinds of unexpected behavior due to the frequency of the writes.
I’ve been trying to verify that I am not doing that. To reiterate, I am using the fbdev demo application, and I haven’t modified any of the source aside from the flush in fbdev.c (to add my ioctl), main.c (to add a second buffer), and lv_conf.h.
I’ve just now stored the return pointer from lv_disp_drv_register, and if I print disp->refr_task->period it tells me 200.
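In case it’s useful, this is how I’m reading it (poking at v6 internals, so this may not be a stable interface):

```c
/* Store the display handle returned at registration time... */
lv_disp_t * disp = lv_disp_drv_register(&disp_drv);

/* ...and inspect the internal refresh task's period. */
printf("refresh period: %u ms\n", disp->refr_task->period);   /* prints 200 */
```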
Thanks for testing this for me. That looks like what I would expect to see. I snapped the time at the end of fbdev init to use as a reference point, and all of these prints below are the timestamp at the end of the flush() with reference to the fbdev_init timestamp. Please note that my refresh period is 200 now.
This is the demo application. The screens stay active for about 3 seconds each: from 0.01 - 3.05 it’s the Write screen with the cursor blinking, from 3.05 - 3.30 it transitions to the List screen, from 6.13 - 6.35 it transitions to Chart, and from 9.18 - 9.78 it transitions back to Write, where the cursor blinks again.
Does that make sense? I am making an assumption that you are familiar with this demo, but I can take a video of it if that helps.
Sorry, I’ll try to rephrase my issue and set aside the what-I’ve-trieds for a moment.
I have a display board with a Cortex-M4 micro on it that receives run-length encoded pixel data from my main processor via USB. Originally, my Linux driver was written to scan the framebuffer, determine the damaged area, and update the display processor with the new rectangle’s pixel data. Since LVGL already does this (as confirmed above), I rewrote the driver to accept the x/y start/end coordinates from the ioctl call and run-length encode that region without checking for damage.
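For context, the encoding step itself is trivial; this sketch shows the general idea, although my actual wire format differs:

```c
/* Minimal run-length encoder for 16bpp pixels: each run is emitted as
 * a count byte (1..255) followed by the pixel value, little-endian.
 * 'out' must hold up to 3 bytes per input pixel in the worst case. */
#include <stdint.h>
#include <stddef.h>

size_t rle_encode16(const uint16_t *px, size_t n, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        uint16_t v = px[i];
        size_t run = 1;
        while (i + run < n && px[i + run] == v && run < 255) run++;
        out[o++] = (uint8_t)run;
        out[o++] = (uint8_t)(v & 0xFF);
        out[o++] = (uint8_t)(v >> 8);
        i += run;
    }
    return o;   /* number of encoded bytes written */
}
```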
That rewrite caused all kinds of artifacts on the screen, and my assumption was that my driver was trying to write too much data to the micro in too short a period, though looking at these timestamps I’m not so sure that’s the case. Regardless, if I change my driver to ignore the damage coordinates and simply re-encode the entire screen every time, I am able to get the demo to play on my system.
As of right now I am merely trying to determine how LV_DISP_DEF_REFR_PERIOD affects writes to the framebuffer. It does not appear to influence how quickly the animation is flushed out.
I’ve changed the timestamps to show the time difference from the previous call, in milliseconds. It’s more readable for me:
404 < Blinking cursor
200 < Animation to another tab
2828 < Wait on the other tab
28 < Animation to another tab
2829 < Wait on the other tab
24 < Animation to the Write tab
404 < Cursor blink
11 < Animation to another tab
2619 < Wait on the other tab
28 < Animation to another tab
What is the size of your display buffer? (In lv_disp_buf_init())
Does this imply that my ioctl should be moved out of my flush() call and into disp_drv->monitor_cb() instead (something like the sketch below)? I don’t think the demo explicitly sets this, so my guess is that it’s currently a NULL pointer.
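In other words, something like this, where my_update_ioctl() is a hypothetical wrapper around my driver’s ioctl:

```c
/* Hypothetical: notify my driver once per refresh cycle, after all of
 * the individual rectangle flushes, instead of once per flush. */
static void my_monitor_cb(lv_disp_drv_t * drv, uint32_t time_ms, uint32_t px)
{
    (void)drv; (void)time_ms; (void)px;
    my_update_ioctl();   /* kick one USB update for the whole cycle */
}

/* ...during setup, before lv_disp_drv_register(): */
disp_drv.monitor_cb = my_monitor_cb;
```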
@embeddedt this seems to have solved the mystery. Thank you so much! I’m finally getting a handle on how all of this works in my system.
I think I will leave everything as-is and just let my driver redraw the full frame every refresh period. I don’t have any need for animation, so a fast frame rate isn’t really an issue, and I have plenty of CPU MIPS to work with.
I’m struggling with getting a number of buttons onto the screen at once, but that’s a story for another post…
If you have enough RAM, you can make that display buffer larger (which will improve performance). There is no enforced maximum size (although making it bigger than the display has no benefit).
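For example (v6 API; the full-frame size here is just for illustration):

```c
/* A full-frame draw buffer for a 480x272 16bpp display (~255 KiB).
 * Anything up to the display size is fine; larger only wastes RAM. */
static lv_disp_buf_t disp_buf;
static lv_color_t buf[480 * 272];

lv_disp_buf_init(&disp_buf, buf, NULL, 480 * 272);
disp_drv.buffer = &disp_buf;
```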
I think, if you really needed the efficiency later on, you could either take the union of the areas passed to disp_flush or keep a list of them, and then use that information in monitor_cb to determine the damaged area (see the sketch below). But if performance is already acceptable for you, don’t worry about it.
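Roughly like this (an untested sketch, using a single bounding rectangle rather than a list):

```c
#include <stdbool.h>
#include "lvgl/lvgl.h"

/* Accumulate the union of all areas flushed during one refresh cycle,
 * then consume it once in monitor_cb. */
static lv_area_t damage;
static bool damage_valid = false;

static void my_flush_cb(lv_disp_drv_t * drv, const lv_area_t * area, lv_color_t * color_p)
{
    /* ...copy color_p into the framebuffer as usual... */

    if (!damage_valid) {
        damage = *area;
        damage_valid = true;
    } else {
        if (area->x1 < damage.x1) damage.x1 = area->x1;
        if (area->y1 < damage.y1) damage.y1 = area->y1;
        if (area->x2 > damage.x2) damage.x2 = area->x2;
        if (area->y2 > damage.y2) damage.y2 = area->y2;
    }
    lv_disp_flush_ready(drv);
}

static void my_monitor_cb(lv_disp_drv_t * drv, uint32_t time_ms, uint32_t px)
{
    (void)drv; (void)time_ms; (void)px;
    if (damage_valid) {
        /* send only the pixels inside 'damage' over USB here */
        damage_valid = false;
    }
}
```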
I’m keeping this in mind already, but I appreciate you confirming my thoughts. I’m running this on a 1 GHz machine and our application is not very CPU-intensive, so I should have plenty of processing power. The optimizer in me is having a fit, but my pragmatic side is telling me to just move on.
My display buffers already match the size of my screen, so I should be all set there.