How to use ARM Mali-400 GPU to accelerate LVGL image rendering

I have several products using LVGL. It’s an excellent graphical library. My platform uses an ARM Cortex-A53 running a Linux system. The SoC has a built-in ARM Mali-400 based GPU, which supports OpenGL ES 1.1 and 2.0 as well as OpenVG 1.1. I would like to use this GPU for graphics rendering acceleration to offload some of the CPU workload. However, I don’t intend to use the GPU for display output — I have dedicated hardware DMA handling that. I only want to use the GPU for rendering acceleration to speed up LVGL.

Does the current version of LVGL support GPU-based rendering acceleration using this GPU? Could you provide some suggestions on how to implement it?

@kdschlosser @kisvegabor @embeddedt

You misunderstand LVGL. Read OpenGL ES Display/Inputs Driver - LVGL 9.4 documentation

According to the documentation:

The OpenGL ES display/input driver offers support for simulating the LVGL display and keyboard/mouse inputs in a desktop window created via GLFW.
It is an alternative to Wayland, XCB, SDL or Qt.
The main purpose for this driver is for testing/debugging the LVGL application in an OpenGL simulation window.

It only simulates desktop use. My requirement is to directly accelerate LVGL in a Linux environment. Is this currently supported, similar to the VG‑Lite general GPU renderer?

@Marian_M

You answered your own question. When you are on Linux, why use LVGL? Build full X support and use that instead. And if you need low flash and RAM usage, forget about advanced GPU support in LVGL.

@william

To answer your question…

Yes, it can be done. OpenGL is usually thought of as a way to render graphics to a display, but what it actually does is render graphics to a buffer, and that buffer can then be written to the display. LVGL has a rendering driver built in that handles the rendering side without writing anything to a display. It is an OpenGL ES driver, so if your GPU can be driven through OpenGL ES, it should work out of the box; the only thing you need to do is turn on the OpenGL ES renderer in LVGL.
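As a minimal sketch, turning the renderer on in lv_conf.h looks something like this (the option names are the ones used later in this thread; check your LVGL version's lv_conf_template.h for the exact set):

```c
/* lv_conf.h (sketch): enable the OpenGL ES draw unit.
 * These option names are the ones discussed in this thread;
 * verify them against your LVGL version's lv_conf_template.h. */
#define LV_USE_OPENGLES      1   /* OpenGL ES driver support */
#define LV_USE_DRAW_OPENGLES 1   /* render with OpenGL ES */
```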

I can help you out with checking compatibility with the driver and getting it up and running if you like.

Could the LVGL team provide an official reference design that uses OpenGL ES to accelerate rendering?

I’m running LVGL on a Linux platform because LVGL is lighter and easier to customize. But I want to use OpenGL ES to speed up rendering. So I’m looking for LVGL support.

I think you are looking for this: OpenGL ES Display/Inputs Driver - LVGL 9.4 documentation

Thank you for your reply. However, the main purpose of this driver is to test/debug LVGL applications in an OpenGL simulation window.
My goal, on the other hand, is to run OpenGL on the SoC to accelerate LVGL text rendering and free up CPU processing resources. I’m not doing simulation — I need it to run in real time. I’m looking for any reference designs provided by LVGL in this regard. Has there been any update?

If you follow the link above, it enables not the GLFW driver but OpenGL-based rendering, which actually speeds up rendering.

This is the related chapter: OpenGL ES Display/Inputs Driver - LVGL 9.4 documentation

Thank you!
I will try it.

I have read the link you provided, but it is based on GLEW + GLFW for desktop display output.
I need a headless/offscreen rendering path.

Below is my application workflow:
1. The LVGL application generates buttons, images, widgets, etc.
2. LVGL calls the GPU rendering interface (the OpenGL ES API).
3. The GPU API layer (EGL + OpenGL ES, offscreen): EGL creates an offscreen context (pbuffer / surfaceless).
4. OpenGL ES draws all LVGL elements on the GPU.
5. DMA-BUF zero-copy: LVGL directly maps the GPU buffer.
6. The LVGL display driver refreshes the buffer content to the screen (SPI, RGB, MIPI, HDMI, etc.).

In this process, GLEW/GLFW are not needed.
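The offscreen EGL portion of the workflow above could be sketched roughly like this. This is a sketch only, assuming the Mali EGL stack can initialize its default display headlessly; on some stacks you would use EGL_KHR_surfaceless_context or EGL_EXT_platform_device instead of a pbuffer. Error handling is omitted:

```c
#include <EGL/egl.h>
#include <GLES2/gl2.h>

/* Sketch: create an offscreen OpenGL ES 2.0 context via a pbuffer
 * surface, so rendering happens without any window system. */
static EGLContext create_offscreen_context(EGLDisplay *out_dpy,
                                           EGLSurface *out_surf,
                                           int w, int h)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    /* Ask for a config usable with pbuffers and GLES 2.0 */
    const EGLint cfg_attr[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n_cfg = 0;
    eglChooseConfig(dpy, cfg_attr, &cfg, 1, &n_cfg);

    /* Offscreen surface sized to the LVGL resolution */
    const EGLint pbuf_attr[] = { EGL_WIDTH, w, EGL_HEIGHT, h, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbuf_attr);

    const EGLint ctx_attr[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attr);

    eglMakeCurrent(dpy, surf, surf, ctx);

    *out_dpy  = dpy;
    *out_surf = surf;
    return ctx;
}
```

With the context current, the GPU can render LVGL's draw operations; the finished frame can be read back with glReadPixels(), or shared zero-copy through DMA-BUF (e.g. the EGL_EXT_image_dma_buf_import extension) and handed to the display driver for scan-out.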

In LVGL source code, I see it includes:

#include <GL/glew.h>
#include <GLFW/glfw3.h>

Does LVGL support the application method I described?
If yes, could you please provide an example or explain the method?
@kisvegabor


As you can see in the image, even though I’ve enabled
LV_USE_DRAW_OPENGLES
the source code still calls:
#include <GL/glew.h>
#include <GLFW/glfw3.h>

Is it possible to use OpenGL ES without requiring GLEW or GLFW?
@kisvegabor

I enabled the following defines:

#define LV_USE_OPENGLES 1

#define LV_USE_DRAW_OPENGLES 1

However, I commented out the following source code call:

// #include <GLFW/glfw3.h>

Also, the following two files were not compiled into the dynamic library:

lv_glfw_window.c
lv_glfw_window.h
lv_glfw_window_private.h
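The kind of CMake exclusion described above might look like this (a sketch; LVGL_SOURCES and the path below are placeholders for whatever variable names and layout your build actually uses):

```cmake
# Sketch: keep the GLFW window driver out of the LVGL build.
# LVGL_SOURCES and the path are placeholders; adapt them to
# the variable names and layout of your own CMake setup.
list(REMOVE_ITEM LVGL_SOURCES
     ${LVGL_DIR}/src/drivers/glfw/lv_glfw_window.c)
```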

After modifying the CMake files, the build succeeded. However, when I copied the lvgl.so library to the device and ran it, the following error was reported:

[Error] (0.685, +1) execute_drawing: Asserted at expression: target_texture != 0 lv_draw_opengles.c:53

Compared to the version without OpenGL ES enabled, my LVGL application was not modified. As previously discussed, LVGL internally implements the OpenGL ES calls for GPU rendering, requiring no user modification, correct? In practice, though, it seems that OpenGL ES did not run.

Please help analyze this problem, thank you!
@kdschlosser @kisvegabor @Marian_M

Try asking ChatGPT and it will …

@william Here is the PR you’ve been waiting for: https://github.com/lvgl/lvgl/pull/8677

This abstraction has been in progress for 8 months. Please evaluate it. Right now I’m cleaning up lv_linux_drm.c. You should find that you can use it as flexibly as you describe.

The OpenGL ES display/input driver offers support for simulating the LVGL display and keyboard/mouse inputs in a desktop window created via GLFW.
It is an alternative to Wayland, XCB, SDL or Qt.
The main purpose for this driver is for testing/debugging the LVGL application in an OpenGL simulation window.

Let me remove that from the documentation. It is not true anymore, if ever it was.

Thank you for your reply.
I don’t use LV_USE_LINUX_DRM and haven’t enabled it. Can I still use OpenGL with the branch you provided? Could you provide some modification suggestions? Thank you very much!
@liamHowatt