Questions about defining objects and their effect on memory

Hello LVGL Team, @kisvegabor, @pete-pjb , @embeddedt

I would be grateful if you could answer my questions

Question 1 :

I would like to know what the best, most optimized way to define objects is:

  • In a structure: as in the code generated by GUI-Guider?
  • As global variables: as in the code generated by SquareLine?
  • As local variables? (I assume this is not a good way because the objects can't be accessed from other functions.)

Can someone explain the differences between these ways of defining objects, and their effects?

Question 2: The same question, applied to styles?

Question 3 :

If I want to make a GUI with many objects (for example: a Tabview with 5 tabs, each tab containing a Tabview with 5 tabs, each of whose tabs contains another Tabview with 5 tabs, so that in total we have 31 Tabviews, each holding many objects: buttons, images, switches, …)

what is the effect on memory and CPU usage if I build it all on the same screen?
Is it preferable to do it on one screen, or to define a separate screen for each view?

Thank you in advance

Cordially,

Marouane

Hi @Zebra067 ,

Assuming your application is ‘typical’, with a fixed set of screens/tabs/interface parts that doesn’t need to change much at runtime, these would be my recommendations:

Answer 1:
If you want full control over your objects throughout the lifecycle of your application, you are correct in saying your objects need to be declared globally in some way, as opposed to locally: either statically in one module/file, for control within that particular module, or globally, for reference and control from other modules/files within your project.

Whether you use a static/global variable for each item or group them into a static/global structure is entirely up to you. It is unlikely to have any effect on the quantity of memory used; at the very worst it might add a few bytes of padding on some architectures if structures are used, but as most objects are pointers, which won’t get padded out in a structure, this is also pretty unlikely.
Personally I don’t use any GUI design programs and just plot things out manually, as I am quite old and learnt programming long before WYSIWYG; in fact I learnt programming before computers could do graphics lol :slight_smile: I have created projects with individual global objects for all components and also with my own global structures. Having done both, I prefer the structures approach, as the structure can be built in the same logical order as the layout of the real GUI, which I find easier to remember.

The key thing is that either way your memory requirements and performance are not particularly affected by this decision, so just go with whatever you like best!
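As a minimal sketch of the structure approach (all names here are hypothetical, not from any generator; lv_obj_t is forward-declared only so the snippet compiles on its own, it normally comes from lvgl.h):

```c
#include <stddef.h>

/* Normally provided by lvgl.h; forward-declared here only so this
 * sketch is self-contained. */
struct _lv_obj_t;
typedef struct _lv_obj_t lv_obj_t;

/* Hypothetical UI structure, grouped in the same logical order as the
 * on-screen layout: a screen holding a tabview holding a settings tab. */
typedef struct {
    lv_obj_t *screen;
    lv_obj_t *tabview;
    struct {
        lv_obj_t *tab;
        lv_obj_t *btn_ok;
        lv_obj_t *label_status;
    } settings;
} ui_t;

static ui_t ui;     /* one global instance; zero-initialised, so every
                     * handle starts out NULL until the UI is built */

/* Accessor other modules can call instead of extern'ing each handle. */
ui_t *ui_get(void)
{
    return &ui;
}
```

Other modules then reach any widget through `ui_get()->settings.btn_ok` rather than a separate extern per object.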

Answer 2:
To date I haven’t really used the style system. I have built only fixed GUI interfaces which are initialised once at start-up, and apart from switching between a dark and a light theme or altering the primary theme colour, I don’t change anything after that, so it seemed unnecessary to add a load of extra styles to my projects. I do, however, make the odd change to the local style properties of some objects, see here.
If I were to use styles in the future, I would again add them to my static/global structures, as they normally need to remain for the lifecycle of the application.

Answer 3:
Again, personally I have created multi-tabbed (tabviews within tabviews within tabviews…) GUIs with a number of status panes around them on some large screens (1440x900 & 1366x768), and my approach has always been to create and place all the objects at start-up, all on one screen, for simplicity (sometimes things are also hidden and only shown when needed). I have found this to work very well in my own applications, both with LVGL and other GUIs. In my opinion this should give the best performance, as you don’t have to create any of the objects during the operation of the application, albeit with a slightly increased boot time, which is normally quite acceptable for embedded systems, which usually boot orders of magnitude faster than your average desktop system…

You could break the application down into multiple screens and create and destroy them as required, but this would take extra processing time during the execution of the application, even though it would reduce the amount of memory needed at any point in time. In my experience it is best, if you have enough memory, to create everything you need at start-up, so the user is never waiting for objects to be created when they switch screens, if that makes sense?

Summary:
In my opinion, keep a global copy of all objects that require referencing. If you have enough memory to create all the objects at start-up, CPU usage should be reduced during the lifecycle of your application, at the expense of a slight increase in boot time; this should give the best performance for the user.

I hope that is helpful.

Kind Regards,

Pete

1 Like

Hi @pete-pjb,

Thank you for your response and for sharing your point of view, based on your programming experience.

My application will have a fixed set of objects and widgets, other parts of the configuration that the user can add later (which will create new objects), and some widgets that will be created when certain events happen.

So I imagine I’ll declare the fixed set of widgets as global structures, and create the temporary parts in event callbacks as local objects.

I noticed that the more objects I have to display on one screen, the more the CPU usage increases. (I assumed this was due to refreshing the objects and handling events for each of them.)
I also assume that tabviews increase the CPU usage further. (Maybe it’s due to their associated events and the horizontal scroll animation between pages.)
That’s why I asked whether breaking the screen down into multiple screens is an optimized solution.

NB: the screen used is 7" (800x480) and the development board is the IMXRT1060-EVKB

I sincerely appreciate your help .

Kind Regards

Marouane

1 Like

Hi Marouane,

I have seen this too… it will be true only if the objects are visible and are constantly being updated.

I have status screens and panels in my applications which have many data fields, updated by LVGL timers. I found that if I updated all the values all the time it caused a massive performance issue, because basically the whole screen has to be re-rendered if we update them all. If, however, I keep a static copy of each displayed value on exit from the timer callback function and compare it with the current value the next time the timer fires, I can choose to update a field only if it has changed. In some instances, depending on the application, it is also possible to have a set of flags to indicate that an update of a field on the screen is required. If that makes sense? In my case this has made a huge difference to the performance of the applications.

The key is to think about anything which causes an unnecessary redraw of a text field, label or widget, and restructure your code to eliminate it.

Here’s a quick, very simple example, where the update_screen() function would be called periodically by an lv_timer:

static lv_obj_t	*label_sys_temp;	// assigned once at start-up: label_sys_temp = lv_label_create(lv_scr_act());

void update_screen( void ) {

	static double	last_temp = 0;
	double			current_temp;
	char			buf[80];
	
	if( (current_temp = get_sys_temp()) != last_temp ) {					// This assumes get_sys_temp() is a non-blocking function which returns immediately!
		snprintf( buf, sizeof(buf), "Temp: %5.2f C", current_temp );		// Saving these CPU cycles
		lv_label_set_text( label_sys_temp, buf );							// when no update is
		last_temp = current_temp;											// required means
	}																		// LVGL will not have to re-render label_sys_temp
}
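The flag-based variant mentioned above can be sketched like this (names are hypothetical; the lv_label_set_text() call is replaced by an internal buffer so the snippet runs on its own):

```c
#include <stdbool.h>
#include <stdio.h>

/* Dirty flag set by whatever produces new data; cleared by the GUI
 * timer once the label has been refreshed. */
static volatile bool g_temp_dirty = false;
static double        g_temp_value = 0.0;

/* Called from wherever a new reading arrives. */
void temp_set(double v)
{
    g_temp_value = v;
    g_temp_dirty = true;
}

/* Called from the LVGL timer. Returns the freshly formatted text when
 * a redraw is needed, or NULL when nothing changed, so LVGL does not
 * have to re-render anything. In the real GUI the returned string
 * would be passed to lv_label_set_text(). */
const char *temp_refresh_label(void)
{
    static char buf[32];

    if (!g_temp_dirty)
        return NULL;                /* no change: skip the redraw */

    g_temp_dirty = false;
    snprintf(buf, sizeof(buf), "Temp: %5.2f C", g_temp_value);
    return buf;
}
```

The advantage over the compare-on-read version is that the producer decides when a redraw is needed, so the timer callback does no comparison work at all in the idle case.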

NOTE: I just remembered I have used this invaluable option in lv_conf.h many times

/*1: Draw random colored rectangles over the redrawn areas*/
#define LV_USE_REFR_DEBUG 1

to help identify where the drawing time is being spent while developing my application. It slows everything down quite a lot, but it really highlights which fields/widgets are being redrawn the most, which helps decide how best to optimise the code.

Do you use any sort of RTOS or scheduler with LVGL in your applications? In my experience this makes it easier to structure the code so that performance can be enhanced, and also makes it possible to keep the GUI responsive under all circumstances with careful design of the processes and their execution priorities.
Also, in general (with or without a scheduler/RTOS), it is very important not to call functions which take long periods to execute from within the execution path of LVGL (i.e. from timers or event callbacks), as this will cause noticeable degradation of the GUI’s performance. The example above could also demonstrate this: in some systems get_sys_temp() might read some very slow hardware which takes a few seconds, holding up the screen for a while, which can be frustrating for the user. However, if an RTOS or scheduler is running, a low-priority supervisor process can talk to the hardware periodically and update a global variable containing the temperature, so that all get_sys_temp() does is return the current value of that variable, making for an instant response in the GUI.
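That supervisor pattern might look roughly like this sketch (under assumptions: the function names are made up, and the FreeRTOS task loop, vTaskDelay() and any locking are reduced to a plain function call so the snippet stands alone):

```c
/* Cached reading: written by a low-priority supervisor task, read by
 * the GUI. Kept simple here; on real hardware you would update it
 * atomically or guard it appropriately. */
static volatile double g_sys_temp = 0.0;

/* Hypothetical supervisor body. In a real system this is where the
 * slow sensor access lives, inside an RTOS task loop that sleeps
 * between polls. */
void supervisor_poll_once(double raw_reading_from_hw)
{
    /* the seconds-long hardware read would happen here, NOT in LVGL */
    g_sys_temp = raw_reading_from_hw;
}

/* Called from the LVGL timer: returns immediately, never blocks. */
double get_sys_temp(void)
{
    return g_sys_temp;
}
```

The GUI side stays identical to the earlier example; only the cost of get_sys_temp() changes, from a blocking hardware read to a single variable read.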

I hope that all makes sense, if you are aware of all of this info please just ignore :slight_smile:

Kind Regards,

Pete

1 Like

Hi Pete,

It’s actually my first experience with a GUI, so I didn’t really know how it works.

I didn’t program the code from scratch; I took the LVGL widgets demo application for the IMXRT1060-EVKB board, which already contained a file named lvgl_support.c (& .h) where the initialisation of LVGL is done (screen size, display_init, indev_init…)
lvgl_support.c (14.5 KB)
lvgl_support.h (1.1 KB)
lv_conf.h (18.6 KB)

If you could take a look at it,

For now I use just one task in my main.c file, used as shown here:

static void AppTask(void *param)
{

    lv_init(); //init of LVGL
    lv_port_disp_init();     // init of display (defined in lvgl_support.c)
    lv_port_indev_init();   // init of input device (defined in lvgl_support.c)

    tabvtest(&iydan);       //My application (iydan is my object structure)
 
    for (;;)
    {
        lv_task_handler();
        vTaskDelay(10);
    }
}

At first I thought LVGL worked that way and didn’t redraw everything, but after trying this I see that all widgets are redrawn all the time (even the transparent parts).

Your approach looks very useful and would make a huge difference to performance, as you said, but I don’t see where this function can be used. Could you specify where I should make modifications to get better performance?

Yes, I use FreeRTOS. I agree with the information you gave; in my application the MCU will have many tasks to do in parallel with the display, so I’m aware of the advantages of using an RTOS: it ensures the proper functioning of the tasks and lets us keep control of priorities and protect data with semaphores…

Thank you very much for your help, I really appreciate,

best regards,

Marouane

Hi Marouane,

Can you post a link to the demo you are using? I will take a look at how it is structured and see if I can offer you some advice on how to improve the performance.

Kind Regards,

Pete

1 Like

Hi Pete,

I can’t find a link to the demo I use (it’s in an SDK file that works in the MCUXpresso IDE).
I already uploaded the files in my earlier reply, and I can’t upload the whole project here.

here are demo widgets files

  • board folder: contains lvgl_support.c and lvgl_support.h, used to initialise the display and input devices
  • lv_examples: contains the widgets demo (and my old training example :face_with_hand_over_mouth: lol)
  • source: lvgl_demo_widgets.c (the main file) & lv_conf.h
  • touchpanel: touchpad drivers

I assume the modifications will be in the lvgl_support.c file.

NB: the LVGL version used is 8.0.2

I appreciate your help, thank you so much

kind regards,

Marouane

Hi Marouane,

I will download this and take a look at their implementation.

I have saved your uploaded files also so I can have a look at your code too.

This may take me a while so it might be a number of days before I can come back with a good answer!

Please bear with me. :grinning:

Kind Regards,

Pete

1 Like

Hi Pete,

Thank you so much,

Awaiting your advice; please accept my best regards.

Marouane

Hi Marouane,

I have now installed the NXP tools on my development platform and had a good look at how they have implemented LVGL with FreeRTOS, and it all looks okay :slight_smile: Unfortunately I don’t have any IMX hardware to play with, so I can’t run an IMX project here.

However I have built and run your application using the windows/eclipse simulator and it appears to be working well there.

With:

/*1: Draw random colored rectangles over the redrawn areas*/
#define LV_USE_REFR_DEBUG 1

enabled, when things are idle it is only updating the flashing cursor, which is just a tiny square and uses very little CPU time. If you move between tabviews you get a number of screen-buffer-sized rectangles drawn whilst the animation is taking place, and then it goes back to just refreshing the cursor. All is as expected, and you shouldn’t be having any CPU load issues etc.

Note you will see quite a number of coloured rectangles on the screen with LV_USE_REFR_DEBUG 1 enabled, but if they are not constantly changing colour they are not being updated; it is just showing the way the screen is rendered, as the rectangle colour changes on each render pass. The only rectangle that should be changing colour when the screen is idle is the one around the flashing cursor. I hope this makes sense…

If this is what you are seeing all is well and your approach so far is fine.

If however there are many rectangles changing colour all the time, something is likely wrong somewhere and we need to investigate further.

Let me know your thoughts.

Kind Regards,

Pete

1 Like

Hi Pete,

Your tests were so fast, thank you so much.

I tested my application (tabvtest) with the IMX hardware and with the Code::Blocks simulator, and it works well: the rectangles are not recoloured as long as I don’t touch the numpad.
But if I touch the numpad (any button except OKAY and Backspace) or the textarea, I see the cursor moving and many objects on screen changing colour.
I tried this with the widgets demo too,
and I found out that it is due to the focused state of the textarea:

lv_obj_add_state(text_pav_num, LV_STATE_FOCUSED);

Is it normal that many objects are re-rendered when a textarea is focused?
(note: I’m working with LVGL 8.0.2 in both the simulator and the hardware)

An off-topic question: may I ask how to use the simulator in Eclipse (Windows)? I use Code::Blocks, which is a very basic IDE. I followed the tutorial link without success, and since I saw that Eclipse is recommended more for Linux and Mac, I didn’t keep trying.

On my hardware the CPU load is about 30%, and there’s not much in my application. Do you have any advice for getting better performance?

Thank you so much Pete,

Best Regards

Marouane

Hi Marouane,

Is it possible for you to post a video so I can judge better what you are seeing please?

I set up the Eclipse simulator running in Windows a long time ago, so I don’t really remember it that well. Let me think about it and get back to you if I can remember! :slight_smile:

Kind Regards,

Pete

1 Like

Hi Marouane,

I have now rolled my simulator back to LVGL version 8.0 and tested your application, and there does appear to be some level of unexplained CPU utilisation.

Before we do anything else, I think it would be prudent to update everything to the latest available NXP versions. I would download the latest MCUXpresso IDE, log in to your NXP account, create a new SDK download, and import it afresh on your development system. This should give you the latest NXP LVGL and FreeRTOS ports; then add your code back to the new installation and re-test to see if it fixes the problems.

What do you think?

Cheers,

Pete

1 Like

Hi pete,

Here is a video using LV_USE_REFR_DEBUG 1; as you can see, when a textarea is focused many objects are re-rendered:
app_tabvtest_refresh_deb (1).zip (2.4 MB)

And here is a video of my CPU usage (again, when the textarea is focused the CPU usage increases):
app_vtest_cpu_load (1).zip (2.3 MB)

It’s okay for the Eclipse simulator, I’ll try to work on it later, thank you.

Oh, so there’s something going slow in version 8.0 :frowning:

Ah, I see there is a new MCUXpresso SDK and the LVGL version used is 8.2, so I’ll try to update MCUXpresso, download the new SDK, adapt it to my screen, and come back with better performance, I hope :smiley:

Thank you, I really appreciate your help

kind regards,

Marouane

1 Like

No problem you’re welcome Marouane,

Let me know how it goes :slight_smile:

Cheers,

Pete

1 Like

Hi pete,

I updated to the latest available NXP version of MCUXpresso (11.6) and the latest SDK available for my board (containing LVGL 8.2), and I can’t see any difference in performance :frowning: ; my application shows the same CPU load, around 30%.

Here, using the widgets demo, I noticed that if I disable the Pixel Processing Pipeline (PXP) in the lv_conf.h file, the CPU load is better:

/* 1: Use PXP for CPU off-load on NXP platforms */
#define LV_USE_GPU_NXP_PXP 0

  • With PXP: 10 FPS & 43% CPU load
  • Without PXP: 14 FPS & 31% CPU load

Here are videos showing the two cases on my screen: demo_widgets_cpu load.zip (1.7 MB)

Normally PXP is used to get better performance and a smaller memory footprint; maybe the configuration is not right.

Do you have any advices ?

Regarding the quote below: in LVGL 8.2, many objects are still re-rendered when a textarea is in the focused state.

Thank you for you help Pete,

Kind regards,

Marouane

Hi Marouane,

Thank you for all the info. Looking through everything, this issue appears to be similar to this one.

So I believe the NXP drivers for the IMXRT1060-EVKB board need to be updated to work differently from version 8 onwards… They currently seem to be configured with direct mode disabled and full refresh enabled, using full-screen double buffers. I would suggest they should run with full refresh disabled, direct mode enabled, and full-screen double buffering.

I have posted the code which I used to fix this issue on my own hardware platform after migrating from version 7 to version 8 of LVGL on the linked post, but there hasn’t been any response from @gianlucacornacchia to say whether it was useful or not.

Here is an example of a potentially modified lvgl_support.c (15.4 KB) from the latest MCUXpresso IDE v11.6.0_8187. (Please bear in mind I can’t test this, as I have no hardware, so hopefully it will just help get you on the right track to solve the issue!) The required changes are hopefully as follows:

void lv_port_disp_init(void)
{
    static lv_disp_draw_buf_t disp_buf;

    lv_disp_draw_buf_init(&disp_buf, s_frameBuffer[0], s_frameBuffer[1], LCD_WIDTH * LCD_HEIGHT);

    /*-------------------------
     * Initialize your display
     * -----------------------*/
    DEMO_InitLcd();

    /*-----------------------------------
     * Register the display in LittlevGL
     *----------------------------------*/

    lv_disp_drv_init(&disp_drv); /*Basic initialization*/

    /*Set up the functions to access to your display*/

    /*Set the resolution of the display*/
    disp_drv.hor_res = LCD_WIDTH;
    disp_drv.ver_res = LCD_HEIGHT;

    /*Used to copy the buffer's content to the display*/
    disp_drv.flush_cb = DEMO_FlushDisplay;

    disp_drv.wait_cb = DEMO_WaitFlush;

#if LV_USE_GPU_NXP_PXP
    disp_drv.clean_dcache_cb = DEMO_CleanInvalidateCache;
#endif

    /*Set a display buffer*/
    disp_drv.draw_buf = &disp_buf;

    /* Partial refresh */
    disp_drv.full_refresh = 0;		/* CHANGE 1 */
    disp_drv.direct_mode = 1;		/* CHANGE 2 */

    /*Finally register the driver*/
    lv_disp_drv_register(&disp_drv);
}

You will need a new function to update the second full-screen buffer when the buffers are flipped by the driver, something like this:

/*CHANGE 3*/
static void DEMO_UpdateDualBuffer( lv_disp_drv_t *disp_drv, const lv_area_t *area, lv_color_t *colour_p ) 
{

	lv_disp_t*	disp = _lv_refr_get_disp_refreshing();
	lv_coord_t	y, hres = lv_disp_get_hor_res(disp);
    uint16_t	a;
    lv_color_t	*buf_cpy;

    if( colour_p == disp_drv->draw_buf->buf1)
        buf_cpy = disp_drv->draw_buf->buf2;
    else
        buf_cpy = disp_drv->draw_buf->buf1;

    for(a = 0; a < disp->inv_p; a++) {
    	if(disp->inv_area_joined[a]) continue;  /* Only copy areas which aren't part of another area */
        lv_coord_t w = lv_area_get_width(&disp->inv_areas[a]);
        for(y = disp->inv_areas[a].y1; y <= disp->inv_areas[a].y2 && y < disp_drv->ver_res; y++) {
            memcpy(buf_cpy+(y * hres + disp->inv_areas[a].x1), colour_p+(y * hres + disp->inv_areas[a].x1), w * sizeof(lv_color_t));
        }
    }
}

The flush function will need to check and decide when to update the second buffer, maybe something like this:

static void DEMO_FlushDisplay(lv_disp_drv_t *disp_drv, const lv_area_t *area, lv_color_t *color_p)
{
#if defined(SDK_OS_FREE_RTOS)
    /*
     * Before new frame flushing, clear previous frame flush done status.
     */
    (void)xSemaphoreTake(s_frameSema, 0);
#endif
/*CHANGE 4*/
	if( lv_disp_flush_is_last( disp_drv ) ) {
		DCACHE_CleanInvalidateByRange((uint32_t)color_p, DEMO_FB_SIZE);
		ELCDIF_SetNextBufferAddr(LCDIF, (uint32_t)color_p);
		DEMO_UpdateDualBuffer(disp_drv, area, color_p);
	    s_framePending = true;
	}

}

I have no idea if this code will work, but please give it a try; hopefully, if it doesn’t, this is enough of a head start for you to be able to debug it on some real hardware :slight_smile:
A further enhancement, once this is working, would be to restructure the code so LVGL never gets blocked by the semaphore. Currently I don’t know how often, or for how long, the code actually blocks on the semaphore, as I have no hardware to test, and I don’t know enough about the NXP hardware to speculate at this time on the best way to achieve this. An early iteration of my own driver also used a semaphore, and it proved to introduce a delay in the execution path; ultimately I managed to engineer the semaphore out using a simple flag and not blocking execution.
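A non-blocking variant of that flag idea might look like this sketch (hypothetical names; the interrupt that would clear the flag when the LCD controller finishes a frame is simulated here by a plain function call so the snippet stands alone):

```c
#include <stdbool.h>

/* Set when a frame has been handed to the LCD controller; cleared by
 * the frame-done interrupt. volatile because both the ISR and the GUI
 * task touch it. */
static volatile bool s_frame_pending = false;

/* Would be called from the LCD frame-done ISR on real hardware. */
void frame_done_isr(void)
{
    s_frame_pending = false;
}

/* Called when LVGL flushes: only submit if the previous frame has
 * already gone out; otherwise return immediately instead of blocking. */
bool try_submit_frame(void)
{
    if (s_frame_pending)
        return false;       /* still busy: caller can retry next cycle */
    /* hand the buffer to the controller here */
    s_frame_pending = true;
    return true;
}
```

Because try_submit_frame() never waits, LVGL’s task keeps running even while the controller is still scanning out the previous frame.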

Let me know how you get on.

Kind Regards,

Pete

1 Like

Hi @Zebra067 ,

I have been thinking about this further, and I believe you might need to modify the flush function like this to avoid a potential deadlock in the driver (see the added else clause):

static void DEMO_FlushDisplay(lv_disp_drv_t *disp_drv, const lv_area_t *area, lv_color_t *color_p)
{
#if defined(SDK_OS_FREE_RTOS)
    /*
     * Before new frame flushing, clear previous frame flush done status.
     */
    (void)xSemaphoreTake(s_frameSema, 0);
#endif
/*CHANGE 4*/
	if( lv_disp_flush_is_last( disp_drv ) ) {
		DCACHE_CleanInvalidateByRange((uint32_t)color_p, DEMO_FB_SIZE);
		ELCDIF_SetNextBufferAddr(LCDIF, (uint32_t)color_p);
		DEMO_UpdateDualBuffer(disp_drv, area, color_p);
	    s_framePending = true;
	} else lv_disp_flush_ready( disp_drv );

} 

Cheers,

Pete

1 Like

Hi Pete,

Thank you so much for your support. I’m getting much better performance (33 FPS, 6% CPU load) than with the last configuration when the display shows something stable, without animation or changing parts of the screen.

In the LVGL version (8.0) used in the last MCUXpresso SDK there was no direct_mode option (it wasn’t defined in the _lv_disp_drv_t structure).

Thank you; the file doesn’t work as-is, but it’s very close and helps me a lot. That lvgl_support.c file is configured for the old version.
Here is the original new version of lvgl_support.c (14.1 KB) from the latest MCUXpresso IDE v11.6.0_8187:

  • there is no DEMO_WaitFlush function.
  • the implementation of DEMO_FlushDisplay differs.

I made the function like this:

static void DEMO_WaitFlush(lv_disp_drv_t *disp_drv)
{
#if defined(SDK_OS_FREE_RTOS)
    if (xSemaphoreTake(s_frameSema, portMAX_DELAY) == pdTRUE)
    {
        /* IMPORTANT!!!
         * Inform the graphics library that you are ready with the flushing*/
        lv_disp_flush_ready(disp_drv);
    }
    else
    {
        PRINTF("Display flush failed\r\n");
        assert(0);
    }
#else
    while (s_framePending)
    {
    }

    /* IMPORTANT!!!
     * Inform the graphics library that you are ready with the flushing*/
    lv_disp_flush_ready(disp_drv);
#endif
}

Here is the file modified with your suggestions: lvgl_support.c (16.4 KB). It works and performs well, but after a while (around 10 min) of displaying, the display freezes and the touchpad stops working (the MCU blocks somewhere and needs a reset).
Can you check whether the configuration is okay?

I’m very grateful for your help Pete,

Thank you so much,

Best regards,

Marouane

Hi Marouane ( @Zebra067 ),

I am glad you have seen a good improvement in performance :grinning: but it’s not good that you are experiencing a crash :thinking: It might be worth enabling asserts and logging for LVGL to help track down that issue.
Also, I don’t fully understand your response with regard to the versions of the files etc., and maybe this could also contribute to the crashing. Here is the SDK I generated on the NXP website a couple of days ago for the newly installed latest MCUXpresso IDE. Can you confirm this is the correct version for your hardware? I have imported the projects from this SDK and used the project named evkbmimxrt1060_lvgl_demo_widgets.

If you can post the link to your downloaded SDK from the NXP website, I can check that too, if that would be helpful.

Kind Regards,

Pete