glTF + 3D Support: Load your glTF 3D models directly into LVGL UIs
GStreamer Support: The standard Linux video player and camera interface
DRM + EGL Driver: Super high-performance Linux display driver
Arm NEON Optimization: Up to 33% faster software rendering
GPU updates
EVE GPU Renderer: Let EVE chips render the UI using SPI commands
ESP PPA: 30% faster rendering and 30% lower CPU usage on ESP32-P4
NemaVG: SVG support for ST’s NeoChrom and compatible GPUs
Unified VGLite Driver: A single driver for all platforms with a consistent feature set
Dave2D: Lower CPU usage on Renesas and Alif platforms
Other features
GIF Speed Up: Replaced with a version that’s 3x faster
FrogFS Support: Pack directory trees into a single “blob” and load them at runtime
Better Docs: Reorganized content, improved coverage, and a fresh new design: https://docs.lvgl.io/
v9.4 is already available as:
Arduino library
PlatformIO Labs
Espressif’s Component Registry
Arm’s CMSIS-Pack
And will be updated in Zephyr in a few days too
What’s next?
Of course, we’re already planning v9.5. Instead of huge new features, we’d like to focus on improving what we already have in LVGL:
Review, improve, and add even more documentation
Deprecate old features where there are better alternatives
Add more MPU drivers and 3D/OpenGL support
Improve charts
Improve the XML engine
And maybe finally add the long-promised blur support
Let us know how v9.4 works for you, what you think of the new features, and what you’d like to see in v9.5. Just comment here, open a GitHub issue, or write to us at lvgl@lvgl.io
DITHERING!!! Add dithering for RGB565… that is something that is really needed to make things look good. The ability to turn it on and off at run time would also be really useful, so a person can enable it when they know something is being drawn that needs it, and disable it again so performance isn’t impacted.
The one I implemented? I didn’t add any dithering code to LVGL. Maybe someone else did and I missed it?
It would be nice to be able to dither images and gradients alike on the fly. A UI may display images from unknown sources, where dithering them ahead of time isn’t possible. If dithering were built into LVGL it would be a whole lot easier than doing it with external code in the flush function (as in the example code I provided), because that dithering code uses a pixel’s x and y coordinates to determine the per-color offset factors. Handling that with partial buffers is more complex than doing it at the moment the image data is copied into the frame buffer. Doing it at the frame-buffer level also hurts performance, because the whole frame buffer has to be iterated over to perform the dither, and the entire buffer gets dithered instead of only the areas that actually need it. Take a gradient background with other objects on top of it: you end up dithering everything instead of just the background.
From a performance standpoint the best way is to have it built into LVGL. It’s not a large amount of code, and the memory use can be mitigated by allocating the color-offset arrays only while they are in use and freeing them afterwards. That is another reason why it should be a runtime option instead of a compile-time one.
I do remember an attempt to add dithering to LVGL, but last I knew it consumed more memory than the code example I had provided and it never really worked well. Maybe it has been fixed?
Well, I should have said “explored”. Anyway, I see your point. Maybe it’s simpler than it seems:
We call functions like lv_draw_sw_grad_radial_get_line to get a line from the gradient.
At the end of these functions the line is ready as a color array
From state->clip_area we always know the absolute coordinates of the gradient, so we can calculate the coordinates relative to the gradient itself, not to the screen.
The thing is, I believe some kind of dithering has already been added to the gradient code. It just doesn’t work as it should, so we really don’t want to add more dithering code on top of it, since that simply increases the workload. If the existing dithering is removed, I’d be able to help add a dithering algorithm that produces the right result.
As long as we can get coordinates relative to the gradient as a whole, and those can be passed in, we should be able to add the dithering. That said, if the returned gradient line ends up being used to pick and choose colors rather than being used whole, there is going to be a problem. I believe that is actually what happens currently, and it might be why the dithering doesn’t come out properly: the function that generated the gradient-line data has no idea where the line is being used or whether the colors are being selectively chosen. The other thing is that there is a hard limit on the number of possible shades from one color to the next, which I believe is 255; because of that you are going to get banding that I don’t believe dithering can filter out.
You can see that hard limit being used in each of the gradient types.
That hard limit is pretty restrictive for gradients, especially if colors are being cherry-picked out of the gradient and/or there are multiple stops with different colors in it.
Great news - especially about the 30% lower CPU usage on ESP32-P4.
I burnt my finger on the ESP32-P4 chip while running my XY-table driver. And the LM7805 regulator dropping 12 V to 5 V couldn’t cope even with a big heat sink; it had to be replaced by a buck converter.
I’m just hoping that the check box and slider switch will behave with the new version.
You do realize that supply is not big enough, right? It has a maximum output current of 1500 mA. The GPIOs alone on the P4 can consume that much, and each core needs 500 mA supplied to it. You should be using a 3000 mA supply as a minimum. You are running that regulator near its maximum almost all the time, which is why it’s getting as hot as it is. You’ve got to look at the electrical specs on these things. I have seen board manufacturers use more than one supply for the P4. It’s not a low-power SoC by any stretch, that’s for sure; not like the ESP32, where you can get away with a 500 mA supply.
There, and the tables on the pages after that. It states:
- I_output: cumulative IO output current, max 1500 mA
- I_VDD: current supplied to core, 0.5 A
- I_VFB: output current when VDDO_FLASH is powered by the Flash Voltage Regulator for 1.8 V flash, 50 mA
- I_OH: high-level source current (VDD1 = 3.3 V, VOH >= 2.64 V, PAD_DRIVER = 3), typ. 40 mA
- I_OL: low-level sink current (VDD1 = 3.3 V, VOL = 0.495 V, PAD_DRIVER = 3), typ. 28 mA
Current consumption (typ. / max, mA):
- WAITI (dual core in idle state): 35 / 65
- Dual-core while(1) loop operation: 80 / 103
- Single core running CoreMark instructions, the other core in idle state: 70 / 92
- Dual core running 32-bit data access instructions: 92 / 123
When you add up the values you get 2,895 mA.
and the regulator you are using…
It says output current up to 1.5 A.
Once you start connecting displays and other ICs, even a 3 A regulator is really not where it needs to be to run the P4 plus any additional components. I know the numbers for the P4 are maximums, but that is how the supply should be sized.
You should be able to “drive it like you stole it”, so to speak, and to do that you need the power available, and it has to be reliable power as well. Over-engineer and under-use, using quality components, and it will last a very long time with only passive cooling, which may not even be necessary.
We are thinking about creating higher-level components, just like the pullable header, with LVGL Pro. In connection with that, we are also working on a drag-and-drop interface for LVGL Pro, so you should be able to easily put together complex UIs from ready-to-use components.
It will take some time to get there, but we are on it.
Many thanks for your reply. I have a 3000 mA 12 V power supply. The ammeter was showing only a 600 mA drain @ 5 V running the P4 and the LCD display, so I used the LM7805. It was replaced by a buck converter and all is now well. The P4 gets warm, but not hot enough to need a heat sink. The two stepper motors on the XY table draw around 500 mA each @ 12 V, so all is within spec. The same app drives a star tracker when not in use as an XY table. The tracker uses a much smaller motor and has a much lower current drain. A 6 Ah 12 V battery lasts more than long enough for astrophotography.
So using a stronger power supply fixed the issue? Hopefully it did.
The current draw all depends on what is being used (processor, RAM, and GPIOs), so the best way to pick a supply is to go by the max-current tables and stay above them to ensure there won’t be an issue. That’s what I do in my designs. If the device will be used where passive cooling can’t be added (space constraints), I will typically pick a power supply rated for almost double the draw so it runs cooler. The supply will typically be larger, but not as large as a supply with a heat sink on it.