LVGL and 4-bit grayscale

Description

I’m using LVGL with Zephyr v2.7 to drive a 128x32 OLED display. The display controller (CH1120) uses 4-bit grayscale, so every byte contains two pixels (16 grayscale levels per pixel). I don’t expect the Zephyr / LVGL Kconfig flags to support this format out of the box, but I’m curious what the current_pixel_format and screen_info capability fields should be set to for the closest match. From there I can write an abstraction layer that converts the byte array generated by the LVGL code (in whatever RGB* or mono format it produces) into the byte array format the controller needs.
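For reference, here is roughly how I’m planning to fill in the capabilities callback of my driver. The MONO01 pixel format and the MONO_VTILED flag are just my best guess at the “closest match”, and the function name is mine, not anything from Zephyr:

```c
#include <string.h>
#include <drivers/display.h>

/* My current guess at the "closest match" capabilities for a 128x32
 * panel whose controller packs two 4-bit pixels per byte. MONO01 and
 * MONO_VTILED are assumptions, not values I have confirmed. */
static void ch1120_get_capabilities(const struct device *dev,
				    struct display_capabilities *caps)
{
	memset(caps, 0, sizeof(struct display_capabilities));
	caps->x_resolution = 128;
	caps->y_resolution = 32;
	caps->supported_pixel_formats = PIXEL_FORMAT_MONO01;
	caps->current_pixel_format = PIXEL_FORMAT_MONO01;
	/* Guessing SSD1306-style vertical tiling (each byte covers
	 * 8 vertical pixels); this may need to change. */
	caps->screen_info = SCREEN_INFO_MONO_VTILED;
}
```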

What MCU/Processor/Board and compiler are you using?

Custom hardware driven by an STM32L552 MCU, running Zephyr v2.7. The OLED controller is the CH1120 from Chip Wealth Technology.

What do you want to achieve?

Right now, just basic functionality. The OLED is a 128x32 monochrome display, but the CH1120 controller addresses it in 4-bit grayscale.

What have you tried so far?

The LVGL code compiles and runs on the hardware, but the OLED screen shows garbage.

More Info

I’ve created an lv_label and aligned the object. Every second, the text should update with an incrementing number (this follows the sample code; see the snippet below).
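My label setup is paraphrased from the Zephyr v2.7 LVGL sample (LVGL v7 API); this is a simplified sketch rather than my exact code:

```c
#include <zephyr.h>
#include <device.h>
#include <drivers/display.h>
#include <lvgl.h>
#include <stdio.h>

void main(void)
{
	const struct device *display_dev =
		device_get_binding(CONFIG_LVGL_DISPLAY_DEV_NAME);
	char count_str[12];
	uint32_t count = 0U;

	if (display_dev == NULL) {
		return;
	}

	lv_obj_t *count_label = lv_label_create(lv_scr_act(), NULL);
	lv_obj_align(count_label, NULL, LV_ALIGN_CENTER, 0, 0);

	display_blanking_off(display_dev);

	while (1) {
		if ((count % 100) == 0U) {
			/* Update the text once per second (100 x 10 ms). */
			sprintf(count_str, "%u", count / 100U);
			lv_label_set_text(count_label, count_str);
		}
		lv_task_handler();
		++count;
		k_sleep(K_MSEC(10));
	}
}
```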
I’ve written a custom CH1120 device driver that implements the Zephyr display_driver_api so that LVGL can talk to the hardware. In my write function, the incoming data arrives with an x and y coordinate, along with pitch, width, height, and buffer length. Where is this buffer information generated?

Currently, I see a width of 32 and a height of 8. According to my display driver, that should require a buffer length of 128 bytes (256 pixels at two pixels per byte). Instead, the incoming buffer length is 32 bytes (which works out to exactly 1 bit per pixel for 256 pixels, so perhaps LVGL is rendering in a monochrome format?), and the data shown on the OLED only changes 64 pixels. How do I go about converting the buffer data into the grayscale pixel format the CH1120 needs?
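In case it helps, my write callback follows the standard Zephyr display API signature, and the conversion I have in mind looks something like the sketch below. It assumes the incoming buffer is 1 bpp with horizontal, MSB-first packing (as I’d expect from PIXEL_FORMAT_MONO01); if screen_info is SCREEN_INFO_MONO_VTILED, the bytes would be tiled vertically instead and the indexing would differ. The nibble order for the CH1120 is a guess until I recheck the datasheet:

```c
#include <stdint.h>
#include <stddef.h>
#include <drivers/display.h>

/* Hypothetical sketch: expand a 1-bpp buffer (MSB-first within each
 * byte) into packed 4-bit grayscale, two pixels per output byte with
 * the first pixel in the high nibble. Assumes an even pixel_count. */
static void mono_to_gray4(const uint8_t *src, uint8_t *dst,
			  size_t pixel_count)
{
	for (size_t i = 0; i < pixel_count; i += 2) {
		uint8_t p0 = (src[i / 8] >> (7 - (i % 8))) & 0x01;
		uint8_t p1 = (src[(i + 1) / 8] >> (7 - ((i + 1) % 8))) & 0x01;

		/* "On" becomes full intensity (0xF), "off" becomes 0x0. */
		dst[i / 2] = (uint8_t)(((p0 ? 0xFU : 0x0U) << 4) |
				       (p1 ? 0xFU : 0x0U));
	}
}

/* Zephyr display_driver_api write callback: desc carries the width,
 * height, pitch, and buf_size of the region being flushed. */
static int ch1120_write(const struct device *dev, const uint16_t x,
			const uint16_t y,
			const struct display_buffer_descriptor *desc,
			const void *buf)
{
	/* ... convert buf with mono_to_gray4() and send the result to
	 * the controller for the (x, y, desc->width, desc->height)
	 * window ... */
	return 0;
}
```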

Thank you,
Rob