We’re developing on a quad-core Cortex-A35 platform with a Mali-G31 GPU, using LVGL v9.3. However, at 720p the software-rendering frame rate is low. I’d like to know how much rendering performance improves by enabling LV_USE_DRAW_OPENGLES in LVGL, compared to CPU rendering.
From the code, it appears that OpenGLES only handles LV_DRAW_TASK_TYPE_FILL tasks without rounded corners or gradient (transition) colors, plus LV_DRAW_TASK_TYPE_LAYER. Overall it covers relatively few rendering steps. What are the future plans for OpenGLES rendering?
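The division of labor described above can be illustrated schematically. The enum and struct below are simplified stand-ins I made up for the example, not LVGL's actual `lv_draw_task_t` types:

```c
#include <stdbool.h>

/* Simplified stand-ins for LVGL's draw-task types; NOT the real
 * definitions, just enough to illustrate the split. */
typedef enum {
    DRAW_TASK_TYPE_FILL,
    DRAW_TASK_TYPE_LAYER,
    DRAW_TASK_TYPE_IMAGE,
    DRAW_TASK_TYPE_LABEL,
} draw_task_type_t;

typedef struct {
    draw_task_type_t type;
    int radius;          /* corner radius of a fill */
    bool has_gradient;   /* "transition colors" */
} draw_task_t;

/* A GPU draw unit claims only the tasks it can accelerate; everything
 * it rejects is rendered by the software unit instead. Per the
 * observation above: plain fills and layers go to the GPU, the rest
 * stays on the CPU. */
static bool gles_unit_accepts(const draw_task_t *t)
{
    switch(t->type) {
        case DRAW_TASK_TYPE_FILL:
            return t->radius == 0 && !t->has_gradient;
        case DRAW_TASK_TYPE_LAYER:
            return true;
        default:
            return false;
    }
}
```

So a rounded or gradient fill falls back to software, while a plain fill or a layer task is taken by the GPU unit.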
Hello, I’ve been doing a lot of work recently on OpenGLES rendering in LVGL, and it will be reflected in the next release soon. We have considerably improved performance. Here’s what you can expect from LVGL running OpenGLES with the DRM + EGL output driver.
My testing was done on a Raspberry Pi 3B driving an 800x480 HDMI display. I modified my LVGL configuration to allow up to 500 fps, or whatever the GPU can deliver, whichever is less.
Using the DRM driver with EGL, and LV_USE_DRAW_OPENGLES = 1
Average FPS: 389
and using the same driver but with LV_USE_DRAW_OPENGLES = 0
Average FPS: 309
If you were to use the default LVGL 30 fps timing settings, all benchmark tests run at a solid 30 fps using between 4% and 6% CPU.
Raising the frame rate to 60 does not significantly change those CPU usage numbers, and it also runs at a solid 60 fps.
Your figures will likely be somewhat lower since 720p is a larger display to render, but they should be comparable: well over 100 fps in both configurations, probably closer to 200 fps.
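For reference, the frame-rate cap I changed lives in lv_conf.h. The macro name below is from LVGL v9; the 2 ms value is just what I used to effectively uncap the GPU:

```c
/* lv_conf.h -- default screen refresh period, in milliseconds.
 * 33 ms ~= 30 FPS (the LVGL default), 16 ms ~= 60 FPS,
 * 2 ms ~= 500 FPS, i.e. effectively "as fast as the GPU can go". */
#define LV_DEF_REFR_PERIOD 2
```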
So you can expect drastically improved performance out of the OpenGLES rendering path in the latest version of LVGL, but make sure you’re using the new DRM + EGL fullscreen driver, as that’s the one I’ve optimized so far.
Before updating, I would suggest waiting another few days, maybe a week; a few finishing touches are being applied to the code now to make it easier to drop into existing builds.
Here is a sample I captured several days ago directly from the HDMI output of the RPi 3B using a capture card. This specific test was done at 720x480, and the code was optimized a bit more after this clip was recorded (notably, the widgets demo speed improved further; it still shows some slight issues in this clip).
Hi, is the OpenGL ES rendering optimization complete? Can I verify it using the master branch of GitHub - lvgl/lv_port_linux (LVGL configured to work with a standard Linux framebuffer)? Is it enough to configure only LV_USE_LINUX_DRM / LV_LINUX_DRM_USE_EGL / LV_USE_OPENGLES / LV_OPENGLES_API_EGL / LV_USE_DRAW_OPENGLES, or does it require a windowing environment like GLFW/Wayland?
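For clarity, this is the exact set of lv_conf.h options I mean (names as they appear in lv_port_linux; please correct me if any are wrong for the current master):

```c
/* lv_conf.h -- the options I enabled for the DRM + EGL path: */
#define LV_USE_LINUX_DRM       1
#define LV_LINUX_DRM_USE_EGL   1
#define LV_USE_OPENGLES        1
#define LV_OPENGLES_API_EGL    1
#define LV_USE_DRAW_OPENGLES   1
```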
Hi, I’m also trying hard to run an LVGL app on my STM32MP2 board with OpenGL ES rendering, but it fails continuously. LVGL v9.4 with the default configuration works perfectly with Wayland, SDL2, and DRM GBM, but with OpenGL ES it never works.
Can you please share your code and an example?
By default, Wayland and SDL use CPU-based rendering. I would also be happy to run LVGL on the STM32MP2 board with GPU rendering if possible. I have followed all of the official docs.
From looking at the OpenGLES code in LVGL, it looks like the only thing OpenGLES is being used for is writing the frame buffer data to the display. It is not being used to draw/render any of the primitives, so you will not get any real performance gain this way. OpenGLES is basically being used as a kind of image viewer; that is pretty much all that is being done with it. The benefits of OpenGLES, or any other kind of hardware graphics acceleration, come from using it to actually render the UI, not just to display a frame buffer. It is supposed to handle all of the math and iteration needed to set each individual pixel.

OpenGLES is specifically designed to handle 3D, and as such it deals in polygons: sets of points that make up an area. That area can be filled with a color, it can simply be lines between those points, or both. You can use it for 2D by always passing 0 for the Z axis, making sure the view is set so no perspective changes are applied to the pixel data, keeping the Y-axis rotation at 0, and setting the camera position so it is able to "see" the whole of the UI. You would then pass an array of triangles or quads, the points needed to instruct OpenGLES what areas to fill with color.
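A minimal sketch of that 2D-through-3D setup, without needing any GL context. The function names here are my own, just to show the math a flat-color shader would consume:

```c
#include <string.h>

/* Column-major 4x4 orthographic projection, equivalent to a glOrtho
 * with near = -1, far = 1: maps pixel coordinates (0..w, 0..h) into
 * GL clip space (-1..1), with y flipped so it grows downward like a UI.
 * No perspective, no rotation: exactly the "flat" view described above. */
static void ortho2d(float m[16], float w, float h)
{
    memset(m, 0, sizeof(float) * 16);
    m[0]  =  2.0f / w;
    m[5]  = -2.0f / h;
    m[10] = -1.0f;   /* z stays flat: everything sits at z = 0 */
    m[12] = -1.0f;
    m[13] =  1.0f;
    m[15] =  1.0f;
}

/* A filled rectangle expressed as two triangles, 6 vertices of (x, y).
 * This is the kind of array you would hand to
 * glDrawArrays(GL_TRIANGLES, 0, 6) with a solid-color fragment shader. */
static void rect_to_triangles(float v[12], float x, float y, float w, float h)
{
    const float x2 = x + w, y2 = y + h;
    const float quad[12] = {
        x, y,   x2, y,    x2, y2,   /* first triangle  */
        x, y,   x2, y2,   x,  y2    /* second triangle */
    };
    memcpy(v, quad, sizeof(quad));
}
```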
With that being said, I do not believe this can properly be accomplished with LVGL because of how LVGL is written. You would need the ability to pick and choose which "drawing" functions you want to be software and which are supplied by the rendering driver. Not all hardware acceleration is made the same, and a given accelerator may or may not be able to perform all of the things needed to render an LVGL UI. It is also quite possible that rendering something in hardware is slower than doing it in LVGL's software renderer.

Using OpenGLES as an example: rendering the corners of a rounded rectangle would probably be slower, because you have to calculate a lot of triangles to make the corner, the triangles cannot overlap, and they need to share their points with adjacent triangles. In cases like that, it would more than likely be faster to place each pixel in a buffer directly. Where OpenGLES is really going to shine is in caching the arrays of points for the different bits and pieces, so when areas need to be redrawn you don't have to perform the calculations again. OpenGLES also performs really well when rotating the view in 3D space, since all the math used to transform the points is done in hardware, but that is not something that would benefit LVGL, because LVGL is a 2D graphics framework.
So to sum up: you would need to redesign the current rendering/drawing system so that, when a driver is written for a hardware accelerator, the functions used for drawing/rendering the different primitives (rectangle, triangle, circle, line, arc, etc.) can be pointed at functions written in the driver, and LVGL will then use the hardware driver for those operations. This needs to be done in a manner that defaults to software; when hardware acceleration is used, the rendering/drawing functions the driver is able to support are supplied by the driver, and the ones it cannot support are still done in software.
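In code, the shape of that design might look something like this. A hypothetical sketch, not anything that exists in LVGL today:

```c
#include <stddef.h>

/* Hypothetical dispatch table: one function pointer per primitive. */
typedef struct {
    void (*fill_rect)(int x, int y, int w, int h, unsigned color);
    void (*draw_line)(int x1, int y1, int x2, int y2, unsigned color);
    void (*draw_arc)(int cx, int cy, int r, int a0, int a1, unsigned color);
} draw_ops_t;

/* Record which path ran, for illustration only. */
enum { OP_NONE, OP_SW_FILL, OP_SW_LINE, OP_SW_ARC, OP_HW_FILL };
static int last_op = OP_NONE;

/* Software fallbacks (real pixel-pushing bodies omitted). */
static void sw_fill_rect(int x, int y, int w, int h, unsigned c) { last_op = OP_SW_FILL; }
static void sw_draw_line(int x1, int y1, int x2, int y2, unsigned c) { last_op = OP_SW_LINE; }
static void sw_draw_arc(int cx, int cy, int r, int a0, int a1, unsigned c) { last_op = OP_SW_ARC; }

/* Everything defaults to software... */
static draw_ops_t ops = { sw_fill_rect, sw_draw_line, sw_draw_arc };

/* ...and a hardware driver overrides only the primitives it supports;
 * NULL entries keep the software implementation. */
static void draw_ops_install(const draw_ops_t *hw)
{
    if(hw->fill_rect) ops.fill_rect = hw->fill_rect;
    if(hw->draw_line) ops.draw_line = hw->draw_line;
    if(hw->draw_arc)  ops.draw_arc  = hw->draw_arc;
}

/* Example: a driver whose hardware can only accelerate rectangle fills. */
static void hw_fill_rect(int x, int y, int w, int h, unsigned c) { last_op = OP_HW_FILL; }
```

After `draw_ops_install(&(draw_ops_t){ hw_fill_rect, NULL, NULL })`, fills go through the hardware path while lines and arcs still fall back to software, which is exactly the mix-and-match behavior described above.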