LVGL too slow to be usable

As the problem forum appears to be deprecated, I hope this is the right place to ask for help with issues.

I am using an i.MX RT processor running at 500 MHz with 2D GPU support. With a fairly complicated static screen, my LVGL call rate (and hence update rate) is 1 ms.
However, whenever something is happening on the screen, be it an animation, an update to a single button, or even the performance monitor updating in the corner, CPU usage goes up to 99% and the update rate drops to 3 frames per second, with the LVGL call rate nearing 300 ms. This is unusable for me, and the disparity is so large that I suspect I am doing something wrong, especially as this is my first LVGL project.
I create the screen with GUI Guider and run it on FreeRTOS with the "standard" GPU and LCD setup. It is a custom LCD with 800 x 1280 pixels. My theory is that some cache or working buffer is too small, as the slowdown only happens with a complicated screen (15 buttons and a few other widget types).
Is there anything obvious I should be doing? I have double buffered the screen (in fact triple buffered, as it is a rotated screen). What frame rates are others seeing with similar setups?
As it is a custom screen, is there anything I could be doing wrong to cause this?
I can provide the code if it helps, but I don't think I am doing anything out of the ordinary.

Any help appreciated.

Many Thanks

It seems your plan to use such a big screen resolution underestimates the hardware requirements.
Triple buffering at 16-bit RGB for that resolution requires about 6 MB of RAM; do you have that much? Are you rotating in software?
I won't even go into the other numbers, such as the required memory bandwidth and so on.
LVGL is a memory-based GUI, and providing an optimal flush method is your job.
Try reading up on this, or use the prepared code.
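To spell out the arithmetic behind that 6 MB figure (assuming RGB565, i.e. 2 bytes per pixel, on the 800 x 1280 panel from the first post):

```c
#include <stdint.h>

/* Bytes needed for one frame buffer of the given geometry. */
uint32_t frame_buffer_bytes(uint32_t width, uint32_t height, uint32_t bytes_per_pixel)
{
    return width * height * bytes_per_pixel;
}

/* 800 x 1280 @ 2 bytes/pixel = 2 048 000 bytes (~1.95 MiB) per buffer,
 * so triple buffering needs 3 x 2 048 000 = 6 144 000 bytes (~5.86 MiB). */
```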


I have 32 MB available and the rotation is done in hardware; as I said, I use the 2D GPU present on the silicon.
I am using as much of the prepared code as possible, including the defines from the NXP SDK.
The flush function also comes from the NXP SDK code.
I am happy to read anything presented to me, but I do not know where to look to find what is slowing everything down. Writing a single number to a button or label slows things from 100 fps to 3 fps; surely this cannot be expected?
Sorry for my ignorance, but I have spent a long time experimenting with the various options in the config file with no improvement.

Many Thanks

NXP — LVGL documentation
I hope you are following the steps from here.



Thank you for your continued patience.
I have now tried on a standard NXP EVK with the NXP SDK, running both LVGL 8.2 and 8.3.
It has full hardware GPU support, with both the PXP and VGLite running, and the flush is done by a hardware blit in the PXP. I am using the prewritten software with a GUI Guider screen, so all the setup is as NXP expects.
With a few widgets on screen the update rate is very high (faster than a 1 ms tick). But when I test by updating the contents of a single label, with 20 widgets on screen, the update time is around 300 ms. Is there a memory area or similar I can increase in size that would help?
What is causing a 500 MHz processor with two 2D GPUs and plenty of memory to spend 100% of its time updating the text on a single label?
The flush is hardware assisted and is the same flush as with the small number of widgets. Surely, when only a single widget is updated, only that widget should be marked dirty and redrawn?
I currently cannot use any animation features, and I have to manage updates very carefully to get any performance at all.
The disparity is so bad that I must be doing something silly, but I cannot find it, as I am following the NXP guidelines.

Many Thanks

I can tell you the problem is not going to be in the LVGL code; it has to be in the user code. LVGL is able to update really fast. You might possibly have to tweak the refresh timer, which defaults to 33 milliseconds.

This is a speed test I just did a day or 2 ago. It’s on a desktop PC but it is using a single core of the processor. I did this test only to get a benchmark of the internal rendering and flushing of the display buffer to the screen.

There is a lot being updated in each frame, causing every arc to be redrawn every frame, and it is still able to reach 180 FPS, which is really good. Have you tried removing the RTOS? From my understanding an RTOS complicates things considerably. I am not saying to remove it permanently, only to isolate where the problem is coming from. Remove the parts of your program that are not essential to running LVGL.
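For reference, the refresh timer mentioned above is a compile-time option in lv_conf.h. A sketch of the relevant lines (option names as in the LVGL v8.x template, where the default period is 30 ms, matching the roughly 33 ms / 30 FPS figure above):

```c
/* lv_conf.h (LVGL v8.x) — the period of LVGL's internal refresh timer.
 * Lowering it only raises the ceiling on the frame rate; it does not
 * help if rendering or flushing is the actual bottleneck. */
#define LV_DISP_DEF_REFR_PERIOD 16    /* ms, i.e. at most ~60 FPS */
#define LV_INDEV_DEF_READ_PERIOD 16   /* input device polling period, ms */
```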

Hi, as Kevin said, it is not LVGL being too slow: I managed 30 FPS on a 480 x 800 screen with a 120 MHz CPU, without GPU support. Although it has to be said that the screen resolution greatly impacts performance.

Anyway, are you using everything as supplied by NXP? Considering they are a partner of LVGL, it should all work properly. Perhaps try contacting them? I dare not doubt their ability to write flush() functions, but could you share the display driver setup and flush function? I suspect the issue might be that LVGL is doing a full screen refresh, which is very demanding on a screen with such a large resolution.


The issue is so stark that I think it is probably me, or something wrong in the setup; I just don't know where to look. I have tried changing the refresh rate to no avail.

I am using all NXP code, starting from their SDK example and just adding a screen made with GUI Guider. I attach the flush function and the display driver setup.

Thank you for any help, I will follow up with NXP as well.

Many Thanks

static void DEMO_FlushDisplay(lv_disp_drv_t *disp_drv, const lv_area_t *area, lv_color_t *color_p)
{
#if DEMO_USE_ROTATE
    /*
     * Work flow:
     * 1. Wait for the available inactive frame buffer to draw.
     * 2. Draw the rotated frame to the inactive buffer.
     * 3. Pass the inactive buffer to the LCD controller to show.
     */
    static bool firstFlush = true;

    /* Only wait for the first time. */
    if (firstFlush)
    {
        firstFlush = false;
    }
    else
    {
        /* Wait for frame buffer. (Wait body elided in the original post.) */
    }

    /* Copy buffer. */
    void *inactiveFrameBuffer = s_inactiveFrameBuffer;

#if LV_USE_GPU_NXP_PXP /* Use PXP to rotate the panel. */
    lv_area_t dest_area = {
        .x1 = 0,
        .y1 = 0,
        /* .x2 and .y2 elided in the original post */
    };

    lv_gpu_nxp_pxp_blit(((lv_color_t *)inactiveFrameBuffer), &dest_area, DEMO_BUFFER_WIDTH, color_p, area,
                        LV_OPA_COVER /* , trailing arguments elided in the original post */);
#else /* Use CPU to rotate the panel. */
    for (uint32_t y = 0; y < LVGL_BUFFER_HEIGHT; y++)
    {
        for (uint32_t x = 0; x < LVGL_BUFFER_WIDTH; x++)
        {
            ((lv_color_t *)inactiveFrameBuffer)[(DEMO_BUFFER_HEIGHT - x) * DEMO_BUFFER_WIDTH + y] =
                color_p[y * LVGL_BUFFER_WIDTH + x];
        }
    }
#endif

    g_dc.ops->setFrameBuffer(&g_dc, 0, inactiveFrameBuffer);

    /* Inform the graphics library that you are ready with the flushing. */
    lv_disp_flush_ready(disp_drv);

#else /* DEMO_USE_ROTATE */

    g_dc.ops->setFrameBuffer(&g_dc, 0, (void *)color_p);

    /* Inform the graphics library that you are ready with the flushing. */
    lv_disp_flush_ready(disp_drv);

#endif /* DEMO_USE_ROTATE */
}

void lv_port_disp_init(void)
{
    static lv_disp_draw_buf_t disp_buf;

#if DEMO_USE_ROTATE
    memset(s_frameBuffer, 0, sizeof(s_frameBuffer));
    memset(s_lvglBuffer, 0, sizeof(s_lvglBuffer));
    lv_disp_draw_buf_init(&disp_buf, s_lvglBuffer[0], NULL, DEMO_BUFFER_WIDTH * DEMO_BUFFER_HEIGHT);
#else
    memset(s_frameBuffer, 0, sizeof(s_frameBuffer));
    lv_disp_draw_buf_init(&disp_buf, s_frameBuffer[0], s_frameBuffer[1], DEMO_BUFFER_WIDTH * DEMO_BUFFER_HEIGHT);
#endif

status_t status;
dc_fb_info_t fbInfo;

/* Initialize GPU. */

/*-------------------------
 * Initialize your display
 *-------------------------*/

status = g_dc.ops->init(&g_dc);
if (kStatus_Success != status)
{
    /* Error handling elided in the original post. */
    return;
}

g_dc.ops->getLayerDefaultConfig(&g_dc, 0, &fbInfo);
fbInfo.pixelFormat = DEMO_BUFFER_PIXEL_FORMAT;
fbInfo.width       = DEMO_BUFFER_WIDTH;
fbInfo.height      = DEMO_BUFFER_HEIGHT;
fbInfo.startX      = DEMO_BUFFER_START_X;
fbInfo.startY      = DEMO_BUFFER_START_Y;
fbInfo.strideBytes = DEMO_BUFFER_STRIDE_BYTE;
g_dc.ops->setLayerConfig(&g_dc, 0, &fbInfo);

g_dc.ops->setCallback(&g_dc, 0, DEMO_BufferSwitchOffCallback, NULL);

#if defined(SDK_OS_FREE_RTOS)
s_transferDone = xSemaphoreCreateBinary();
if (NULL == s_transferDone)
{
    PRINTF("Frame semaphore create failed\r\n");
}
#else
s_transferDone = false;
#endif

/* s_frameBuffer[1] is first shown in the panel, s_frameBuffer[0] is inactive. */
s_inactiveFrameBuffer = (void *)s_frameBuffer[0];

/* lvgl starts render in frame buffer 0, so show frame buffer 1 first. */
g_dc.ops->setFrameBuffer(&g_dc, 0, (void *)s_frameBuffer[1]);

/* Wait for the frame buffer to be sent to the display controller's video memory. */
if ((g_dc.ops->getProperty(&g_dc) & kDC_FB_ReserveFrameBuffer) == 0)
{
    /* Wait body elided in the original post. */
}
g_dc.ops->enableLayer(&g_dc, 0);

/* Register the display in LittlevGL. */

static lv_disp_drv_t disp_drv; /*Descriptor of a display driver*/
lv_disp_drv_init(&disp_drv);   /*Basic initialization*/

/*Set up the functions to access to your display*/

/*Set the resolution of the display*/
disp_drv.hor_res = LVGL_BUFFER_WIDTH;
disp_drv.ver_res = LVGL_BUFFER_HEIGHT;

/*Used to copy the buffer's content to the display*/
disp_drv.flush_cb = DEMO_FlushDisplay;

disp_drv.clean_dcache_cb = DEMO_CleanInvalidateCache;

/*Set a display buffer*/
disp_drv.draw_buf = &disp_buf;

/* Partial refresh */
disp_drv.full_refresh = 1;

/* Finally register the driver. */
lv_disp_drv_register(&disp_drv);

if (vg_lite_init(64, 64) != VG_LITE_SUCCESS)
{
    PRINTF("VGLite init error. STOP.");
}
}

Hello again,
as I suspected, it seems full_refresh is enabled.
The option disp_drv.full_refresh in your setup should be set to 0. With full refresh enabled, the entire screen is redrawn every time even a small part of it changes. A button is a small dirty area, but the entire screen with multiple objects is not.

Try changing it to 0 and see what happens.
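To put rough numbers on the difference (the 200 x 50 px label size is a made-up example; the 800 x 1280 panel is from this thread):

```c
#include <stdint.h>

/* Pixels LVGL must render and flush for an update region. */
uint32_t area_pixels(uint32_t w, uint32_t h)
{
    return w * h;
}

/* Full refresh:    800 x 1280 = 1 024 000 pixels every frame.
 * Partial refresh: 200 x 50   =    10 000 pixels for the label alone,
 * i.e. about 100x less rendering and flushing work per update. */
```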


Thanks for the help; it is a tale of two halves.

Changing to partial refresh gets the speed back, but unfortunately it corrupts the screen so it cannot be read.

As full refresh is the default in the NXP SDK, I am wondering whether the GPU requires a full refresh to function correctly.

I have contacted NXP so will hopefully get the answer. In the meantime thank you very much for your help, it was something simple, and I am hopefully closer to a working system.

Many Thanks

I have been in extensive discussion with NXP and will report back when progress is made, just in case this discussion appears in a search sometime in the future.

OK, I need to know some specifics about the screen.

What type of bus are you using (SPI, I8080, I2C or RGB)?
If the display is RGB or I8080 how many data lanes are you using?
Model of the display driver IC?
What is the resolution of the display?
What is the color depth of the display (24 bit, 16 bit or 8 bit)?
Does the display have GRAM available?

I need all of those questions answered in order to sort the problem out. If the display is using an RGB bus, it more than likely has no GRAM, which is the reason for having to do a full refresh. This is a challenging type of bus connection to deal with, and it is going to run slower in most cases because the MCU also needs to do other things besides writing the frame buffer data.

With a display that has GRAM, the display itself handles writing data to the screen. Such displays typically have an internal frame buffer that allows this to happen; all the MCU does is update the frame buffer in the display's memory. It can do this for the whole frame, but it can also do it in much smaller pieces, so only areas that have changed need to be written. This is a much better design for MCUs, given the small amount of resources available.
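To illustrate the GRAM point with numbers: the bus cost of an update to a GRAM display is roughly proportional to the window you write, plus a few command bytes, whereas a no-GRAM RGB panel needs the full frame streamed continuously. A rough model (the 11-byte command overhead is a placeholder for a typical set-column / set-page / memory-write sequence, not a specific controller's figure):

```c
#include <stdint.h>

/* Approximate bus bytes to push a w x h window at 16 bpp into display GRAM:
 * pixel payload plus a nominal command/address overhead. */
uint32_t gram_update_bytes(uint32_t w, uint32_t h, uint32_t cmd_overhead)
{
    return w * h * 2u + cmd_overhead;
}

/* A 200 x 50 dirty area costs ~20 011 bytes on the bus,
 * against ~2 048 011 bytes for a full 800 x 1280 frame. */
```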

Hello spot,

Did you find a solution to this problem?

I'm also at the same point: with full_refresh = 0 I see the two buffers alternating, and one still holds the previous image, but FPS increases substantially. I'm using a MIPI-to-LVDS bridge, so I don't know for sure whether I can do partial refresh or not.
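For anyone hitting the same artifact: with full_refresh = 0 and two full-screen buffers, LVGL renders each dirty area into only one buffer, so the other buffer keeps showing the previous frame in that region. A common cure is to copy every flushed area into the other buffer before the next render starts. A minimal pure-C model of that copy (the buffer names, tiny 8 x 8 size, and 16-bit colour depth are placeholders, not the NXP driver's actual symbols):

```c
#include <stdint.h>
#include <string.h>

#define HOR 8   /* tiny stand-in for the panel width  */
#define VER 8   /* tiny stand-in for the panel height */

uint16_t buf_a[VER][HOR];  /* buffer just flushed to the panel */
uint16_t buf_b[VER][HOR];  /* buffer the next render will use  */

/* Copy a dirty rectangle (x1..x2, y1..y2, inclusive) row by row from the
 * just-flushed buffer into the other one, so both stay in sync. */
void sync_area(uint16_t dst[VER][HOR], uint16_t src[VER][HOR],
               int x1, int y1, int x2, int y2)
{
    for (int y = y1; y <= y2; y++)
    {
        memcpy(&dst[y][x1], &src[y][x1], (size_t)(x2 - x1 + 1) * sizeof(uint16_t));
    }
}
```

In a real flush_cb this copy would run after presenting the frame, using the area coordinates LVGL passes in.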