What you describe is what 90% of the embedded platforms use (e.g. IMX, Zynq). The GPU renders memory-to-memory and isn’t really involved in moving pixels to the panel.
From your description, that looks suspiciously like a Xilinx (now AMD) MPSoC platform. Is that correct?
Anyway, the right thing to do here is to create a small Linux driver that implements modesetting for the DMA controller. Apart from the DRM glue, the driver just programs the DMA address, and that’s about it. Then everything falls into place. On the MPSoC, the MALI400 will then render directly into your DMA buffer. My advice is to use the open source “LIMA” driver and not mess with libmali. The open source driver supports EGL and OpenGL ES; combined, that lets you play gzdoom on the MPSoC using custom-built FPGA logic to output to an LVDS panel or HDMI plug.
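For illustration, a heavily abbreviated sketch of what such a driver's update path can look like, using the kernel's `drm_simple_display_pipe` helper. Treat this as a sketch under assumptions, not a drop-in: the helper and `drm_fb_dma_get_gem_addr` names are from recent mainline kernels, and `my_dma_*` / the `0x00` register offset are placeholders for whatever your FPGA DMA controller actually exposes.

```c
/* Sketch only: a minimal modesetting driver whose update hook just
 * reprograms the DMA controller's source address. MY_DMA_SRC and the
 * my_pipe MMIO layout are placeholders for your FPGA DMA registers. */
#include <drm/drm_simple_kms_helper.h>
#include <drm/drm_fb_dma_helper.h>
#include <drm/drm_gem_dma_helper.h>

struct my_pipe {
	struct drm_simple_display_pipe pipe;
	void __iomem *dma_regs;          /* your DMA controller's MMIO region */
};

static void my_pipe_update(struct drm_simple_display_pipe *pipe,
			   struct drm_plane_state *old_state)
{
	struct drm_plane_state *state = pipe->plane.state;
	struct my_pipe *mp = container_of(pipe, struct my_pipe, pipe);

	if (state->fb) {
		/* DMA address of the framebuffer that was just committed */
		dma_addr_t addr = drm_fb_dma_get_gem_addr(state->fb, state, 0);

		/* Placeholder: program the FPGA DMA's source-address register */
		writel(lower_32_bits(addr), mp->dma_regs + 0x00 /* MY_DMA_SRC */);
	}
}

static const struct drm_simple_display_pipe_funcs my_pipe_funcs = {
	.update = my_pipe_update,
	/* .enable / .disable would start and stop the DMA engine */
};
```

With this in place, userspace (Mesa/LIMA, LVGL, weston, anything) just sees a normal DRM device; page flips land in `my_pipe_update` and only a register write happens per frame.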
@william I just finished cleaning up lv_linux_drm.c in the PR. Luckily, you do not need to use DRM to use EGL. lv_linux_drm.c is a working reference application that shows how EGL is applied, and you can use the LVGL EGL API the same way in your custom scenario. At least, that’s what I am going for with this structure. Please let me know if it has any limitations for your scenario. Thanks.
You understood it correctly. I am using an MPSoC, and I need to use the GPU for accelerating rendering. However, I cannot use DRM because my display is not standard. Instead, I need to use the FPGA for output, and I have implemented a dedicated DMA to transfer LVGL’s rendered output to the screen. The FPGA handles ping-pong buffering and minimizes software copies as much as possible.
Now, I want to further reduce CPU usage by using the GPU for rendering (but without using the GPU for DRM output). That’s why I’m looking into an OpenGL ES solution. Based on your suggestion, instead of using libmali, I should use LIMA. But I am not familiar with LIMA. Compared with libmali, what kind of performance improvements does LIMA bring? Do you know if there are any open-source reference implementations of LIMA on MPSoC? Could you please explain this in more detail? Thanks!
My requirement is:
LVGL widgets → rendered via OpenGL ES → return the rendered memory pointer (compatible with the LVGL function lv_display_set_flush_cb) → software then writes the data into another memory region.
Actually, the more ideal pipeline would be:
LVGL widget tree → OpenGL ES (LIMA/libmali) → GPU framebuffer → FPGA DMA → Display.
Another thing I am not sure about: when using OpenGL ES with LVGL, is the rendered widget output visible to the user, or is it fully integrated inside LVGL, so that the user still just uses lv_display_set_flush_cb as usual?
Thank you very much for your update. I still have some confusion about using your branch of the code. Would it be possible for you to provide a reference example of rendering with OpenGL ES, so that I can better understand the workflow:
LVGL widgets → OpenGL ES rendering → return the rendered result → output back into LVGL (without using OpenGL to execute lv_display_set_flush_cb(disp, flush_cb))
Stop thinking “I cannot use DRM”. “Sending rendered output to a screen” is exactly what DRM is for; it doesn’t matter how that’s accomplished. I have a similar setup: my output goes to 4 different HDMI monitors over SERDES, hence 4 DMA controllers, and DRM works fine for me. If you have trained leprechauns arranging colored berries on a sheet, DRM is still the right interface.
Performance-wise, LIMA is usually identical to libmali. Because it’s standard and open source, it supports more standards and doesn’t require patches all over the place. To use it, just tick the LIMA driver box in the kernel configuration and compile Mesa with LIMA support. You can find a reference implementation on our GitHub (GitHub - topic-embedded-products/meta-topic: OpenEmbedded/Yocto layer for Topic products; contains the BSP for Miami boards and Florida carriers), where we enable it for the DisplayPort and a few FPGA-based solutions. Just look at last month’s commits; many of those were about getting the MALI400 to work properly.
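For reference, enabling it amounts to roughly this. The option names are taken from mainline Kconfig and Mesa's meson options; double-check them against your kernel and Mesa versions (in newer Mesa releases the kmsro helper is built automatically):

```shell
# Kernel: enable the LIMA DRM driver (Mali-400/450)
#   Device Drivers -> Graphics support -> DRM -> "LIMA (DRM support for ARM Mali 400/450 GPU)"
CONFIG_DRM_LIMA=y

# Mesa: build the lima gallium driver with GLES2 and EGL enabled
meson setup build \
    -Dgallium-drivers=lima,kmsro \
    -Degl=enabled -Dgles2=enabled
meson compile -C build
```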
Thanks for your reply. Could you share a reference design?
It’s a good idea to combine LIMA with DMA. If I use LIMA + OpenGL ES and integrate it with LVGL, wouldn’t the LVGL display callback function:
lv_display_set_flush_cb(disp, flush_cb);
simply specify the physical address of the DMA display buffer? I’d appreciate a simple reference design you could share. @milo
OpenGL texture data is not in the CPU’s address space, so you cannot read the pixel data directly in a flush callback. You must first read the texture pixels back into a CPU buffer — with glGetTexImage on desktop OpenGL, or glReadPixels on OpenGL ES (which has no glGetTexImage) — and then you can use them. This is typically done for screenshots or debugging; it is not a recommended way to send data to a display. You may well get higher performance from LVGL’s pure software renderer than from using the OpenGL driver this way.
If you wish to do it this way, though, your flush callback could look something like this.
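A hedged sketch, assuming the frame was rendered into a framebuffer object and using glReadPixels (available on OpenGL ES, unlike glGetTexImage). The callback signature and `lv_display_flush_ready` follow LVGL v9; `my_dma_dst` and `my_fbo` are placeholders for your mapped DMA memory and your FBO, and the include paths are assumptions about your tree layout:

```c
/* Sketch only: read the GPU-rendered frame back into CPU memory inside
 * the flush callback. Slow (GPU stall + full copy); see caveats above. */
#include <GLES2/gl2.h>
#include "lvgl/lvgl.h"

extern uint8_t *my_dma_dst;   /* placeholder: your mapped DMA memory */
extern GLuint my_fbo;         /* placeholder: FBO the frame rendered into */

static void flush_cb(lv_display_t *disp, const lv_area_t *area, uint8_t *px_map)
{
	int w = area->x2 - area->x1 + 1;
	int h = area->y2 - area->y1 + 1;

	glBindFramebuffer(GL_FRAMEBUFFER, my_fbo);
	/* Blocks until the GPU is done, then copies pixels to the CPU.
	 * Note: GL's origin is bottom-left, so rows may need flipping. */
	glReadPixels(area->x1, area->y1, w, h,
		     GL_RGBA, GL_UNSIGNED_BYTE, my_dma_dst);

	lv_display_flush_ready(disp);   /* tell LVGL the area is handled */
	(void)px_map;
}
```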
Hi william,
I have also been trying to use the GPU to accelerate LVGL. My hardware is an STM32MP2, a Linux-based system with OpenGL ES 2 support.
I have already successfully interfaced LVGL with Wayland, SDL, and DRM using the default examples provided by the LVGL repository, but all of these configurations are software rendering, I think.
When I activate LV_USE_OPENGLES and LV_USE_EGL in lv_conf.h, it compiles, but the application doesn’t run and nothing is displayed.
In short, I have failed to interface LVGL with the GPU (OpenGL ES), and I have not found any example or demo with this configuration.
Can you please give me an update on your progress? Have you managed to use the GPU on an MPU under embedded Linux?
Thank you.
Note: I also read some blogs (or maybe official documents) saying that LVGL does not yet support advanced GPU use on MPUs, only desktop simulation. I could be wrong!
I don’t use DRM; I just want to speed up LVGL rendering. My latest conclusion is that the current EGL is not suitable for my needs, so I’ve stopped pursuing it.
Hi @AndreCostaaa,
Thanks for your reply. I have tried the documentation you shared. There are two different configurations for DRM:
DRM with GBM: runs perfectly at 30 FPS with ~50% CPU load on average, as expected, since there is no hardware acceleration.
DRM with EGL: using the same main.c as for DRM, but this time nothing is displayed.
I get the errors “Failed to create egl context” and “failed to load egl”. When I change the color format in lv_opengles_egl.c from ARGB8888 to XRGB8888, these errors disappear.
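That symptom is consistent with eglChooseConfig finding no config with an alpha channel on that platform: ARGB8888 implies EGL_ALPHA_SIZE 8, while XRGB8888 does not. A small standalone probe using the plain EGL API (not the LVGL internals) can confirm this on the target:

```c
/* Probe: count EGL configs with and without an alpha channel.
 * If the alpha=8 variant reports zero, ARGB8888 cannot work on
 * this EGL stack and XRGB8888 is the right choice. */
#include <EGL/egl.h>
#include <stdio.h>

static EGLint count_configs(EGLDisplay dpy, EGLint alpha_bits)
{
	const EGLint attribs[] = {
		EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
		EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
		EGL_ALPHA_SIZE, alpha_bits,
		EGL_NONE
	};
	EGLint num = 0;

	/* Passing a NULL config array just asks for the matching count. */
	eglChooseConfig(dpy, attribs, NULL, 0, &num);
	return num;
}

int main(void)
{
	EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);

	eglInitialize(dpy, NULL, NULL);
	printf("alpha=8: %d configs, alpha=0: %d configs\n",
	       (int)count_configs(dpy, 8), (int)count_configs(dpy, 0));
	eglTerminate(dpy);
	return 0;
}
```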