How can I store a JPG using micropython and LVGL snapshot?

Greetings,

I have written some code in C for LVGL before, and there I used a library from 100askTeam (GitHub - 100askTeam/lv_lib_100ask: lv_lib_100ask is a reference for various out of the box schemes based on lvgl library or an enhanced interface for various components of lvgl library.) to take a screenshot and save it as JPG.
That library uses the tiny_jpeg library to convert the raw image data into JPG format.

Recently I started porting my code to micropython to make development easier.

Are there any libraries similar to tiny_jpeg for converting the snapshot format into JPG?

Does anyone have experience with saving the snapshot format into JPG?

My code is solely intended to be run on the LVGL simulator for PC (linux), so I am not concerned with hardware/driver constraints.

Currently my approach is to store the LVGL snapshot format into a raw file, like this:

def take_screenshot(container, output_file):
    snapshot = lv.snapshot_take(container, lv.COLOR_FORMAT.NATIVE)
    data_size = snapshot.data_size
    buffer = snapshot.data.__dereference__(data_size)
    with open(output_file, 'wb') as f:
        f.write(buffer)

And then just use a regular python library to encode the raw file as JPG.
But this is cumbersome, and having native JPG output from micropython would make my life a whole lot easier.

Currently I am converting the format with this native python code:

import numpy as np
from PIL import Image

# Constants for image dimensions
WIDTH, HEIGHT = 420, 320

# Load the raw RGB565 data from the file
with open('screenshot.raw', 'rb') as file:
    raw_data = file.read()

# Convert raw RGB565 data to a NumPy array
image_data_565 = np.frombuffer(raw_data, dtype=np.uint16).reshape((HEIGHT, WIDTH))

# Function to convert RGB565 to RGB888
def rgb565_to_rgb888(rgb565):
    # Mask out the components
    r = (rgb565 & 0xF800) >> 11
    g = (rgb565 & 0x07E0) >> 5
    b = (rgb565 & 0x001F)
    
    # Convert the components to 8-bit values
    r = (r * 255) // 31
    g = (g * 255) // 63
    b = (b * 255) // 31
    return r, g, b

# Create an empty array for RGB888 data
rgb888_data = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)

# Convert RGB565 to RGB888
for y in range(HEIGHT):
    for x in range(WIDTH):
        # Get the pixel in RGB565 format
        rgb565_pixel = image_data_565[y, x]
        
        # Convert it to RGB888
        r, g, b = rgb565_to_rgb888(rgb565_pixel)
        
        # Place it in the new array
        rgb888_data[y, x] = [r, g, b]

# Create the image using Pillow
img = Image.fromarray(rgb888_data, 'RGB')

# Save the image
img.save('screenshot.jpg')
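
For reference, the same conversion can be done without the per-pixel Python loop. This is only a minimal vectorized sketch of the loop above, assuming the raw file is little-endian RGB565 as in my setup:

import numpy as np
from PIL import Image

WIDTH, HEIGHT = 420, 320

# Load the raw RGB565 data and view it as 16-bit pixels
with open('screenshot.raw', 'rb') as f:
    raw = np.frombuffer(f.read(), dtype=np.uint16).reshape((HEIGHT, WIDTH))

# Extract and rescale each channel for the whole image at once
r = ((raw >> 11) & 0x1F) * 255 // 31
g = ((raw >> 5) & 0x3F) * 255 // 63
b = (raw & 0x1F) * 255 // 31

# Stack the channels into an (H, W, 3) RGB888 array and save it
rgb888 = np.dstack([r, g, b]).astype(np.uint8)
Image.fromarray(rgb888, 'RGB').save('screenshot.jpg')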

But I am not at all pleased about having to run the micropython code first and then post-process every screenshot I take.
It would make things easier if my micropython code could produce the JPG itself, but I haven’t found any applicable micropython libraries for encoding JPG.

For some context on my project:
I am writing a random user interface generator, intended to be run with the LVGL simulator for PC.
This generator produces an image file and a .txt containing YOLO-formatted annotations of all the LVGL objects inside of the screenshot.

The produced data is then used for a machine learning project which can analyze images and detect LVGL objects accurately.

The project can be checked out here, if you’re interested:

Easiest way: borrow someone else’s code to encode the JPEG.

@kdschlosser
Yup, I do love the easy way, thanks for that!

Gonna try it out in a sec.

I can’t get it to work, unfortunately; it fails on import:

Traceback (most recent call last):
  File "./src/main.py", line 2, in <module>
  File "/home/rini-debian/git-stash/lvgl-ui-detector/lvgl_ui_generator_v2/src/screenshot.py", line 2, in <module>
  File "/home/rini-debian/git-stash/lvgl-ui-detector/lvgl_ui_generator_v2/src/jpeg.py", line 2, in <module>
  File "struct.py", line 4, in <dictcomp>
  File "struct.py", line 1, in Struct
NameError: name '__int__' isn't defined

I have installed copy and struct by doing:

import mip
mip.install("copy")
mip.install("struct")

I figured out that I should not have done that (mip.install(...)) and I’ve removed the created .micropython directory from my home folder.

By only copying the encoding part of the jpeg library directly into my script, it works as intended.

Here is the full script if anyone needs screenshots:

import lvgl as lv
from struct import pack
# import jpeg

# copy-pasta from jpeg
_z_z = bytes([ # Zig-zag indices of AC coefficients
         1,  8, 16,  9,  2,  3, 10, 17, 24, 32, 25, 18, 11,  4,  5,
    12, 19, 26, 33, 40, 48, 41, 34, 27, 20, 13,  6,  7, 14, 21, 28,
    35, 42, 49, 56, 57, 50, 43, 36, 29, 22, 15, 23, 30, 37, 44, 51,
    58, 59, 52, 45, 38, 31, 39, 46, 53, 60, 61, 54, 47, 55, 62, 63])




_luminance_quantization = bytes([ # Luminance quantization table in zig-zag order
    16, 11, 12, 14, 12, 10, 16, 14, 13, 14, 18, 17, 16, 19, 24, 40,
    26, 24, 22, 22, 24, 49, 35, 37, 29, 40, 58, 51, 61, 60, 57, 51,
    56, 55, 64, 72, 92, 78, 64, 68, 87, 69, 55, 56, 80,109, 81, 87,
    95, 98,103,104,103, 62, 77,113,121,112,100,120, 92,101,103, 99])
_chrominance_quantization = bytes([ # Chrominance quantization table in zig-zag order
    17, 18, 18, 24, 21, 24, 47, 26, 26, 47, 99, 66, 56, 66, 99, 99,
    99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99,
    99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99,
    99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99])

_ld_lengths = bytes([ # Luminance DC code lengths
    0, 1, 5, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
_ld_values = bytes([ # Luminance DC values
    0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
_la_lengths = bytes([ # Luminance AC code lengths
    0, 2, 1, 3, 3, 2, 4, 3, 5, 5, 4, 4, 0, 0, 1, 125])
_la_values = bytes([ # Luminance AC values
      1,  2,  3,  0,  4, 17,  5, 18, 33, 49, 65,  6, 19, 81, 97,  7, 34,113,
     20, 50,129,145,161,  8, 35, 66,177,193, 21, 82,209,240, 36, 51, 98,114,
    130,  9, 10, 22, 23, 24, 25, 26, 37, 38, 39, 40, 41, 42, 52, 53, 54, 55,
     56, 57, 58, 67, 68, 69, 70, 71, 72, 73, 74, 83, 84, 85, 86, 87, 88, 89,
     90, 99,100,101,102,103,104,105,106,115,116,117,118,119,120,121,122,131,
    132,133,134,135,136,137,138,146,147,148,149,150,151,152,153,154,162,163,
    164,165,166,167,168,169,170,178,179,180,181,182,183,184,185,186,194,195,
    196,197,198,199,200,201,202,210,211,212,213,214,215,216,217,218,225,226,
    227,228,229,230,231,232,233,234,241,242,243,244,245,246,247,248,249,250])
_cd_lengths = bytes([ # Chrominance DC code lengths
    0, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
_cd_values = bytes([ # Chrominance DC values
    0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
_ca_lengths = bytes([ # Chrominance AC code lengths
    0, 2, 1, 2, 4, 4, 3, 4, 7, 5, 4, 4, 0, 1, 2, 119])
_ca_values = bytes([ # Chrominance AC values
      0,  1,  2,  3, 17,  4,  5, 33, 49,  6, 18, 65, 81,  7, 97,113, 19, 34,
     50,129,  8, 20, 66,145,161,177,193,  9, 35, 51, 82,240, 21, 98,114,209,
     10, 22, 36, 52,225, 37,241, 23, 24, 25, 26, 38, 39, 40, 41, 42, 53, 54,
     55, 56, 57, 58, 67, 68, 69, 70, 71, 72, 73, 74, 83, 84, 85, 86, 87, 88,
     89, 90, 99,100,101,102,103,104,105,106,115,116,117,118,119,120,121,122,
    130,131,132,133,134,135,136,137,138,146,147,148,149,150,151,152,153,154,
    162,163,164,165,166,167,168,169,170,178,179,180,181,182,183,184,185,186,
    194,195,196,197,198,199,200,201,202,210,211,212,213,214,215,216,217,218,
    226,227,228,229,230,231,232,233,234,242,243,244,245,246,247,248,249,250])

def _quantization_table(table, quality):
    quality = max(0, min(quality, 100))
    if quality < 50:
        q = 5000//quality
    else:
        q = 200 - quality*2
    return bytes([max(1, min((i*q + 50)//100, 255)) for i in table])

def _huffman_table(lengths, values):
    table = [None]*(max(values) + 1)
    code = 0
    i = 0
    size = 1
    for a in lengths:
        for j in range(a):
            table[values[i]] = code, size
            code += 1
            i += 1
        code *= 2
        size += 1
    return table

def _scale_factor(table):
    factor = [0]*64
    factor[0] = table[0]*8
    i = 1
    for z in _z_z:
        factor[z] = table[i]*8
        i += 1
    return factor

def _marker_segment(marker, data):
    return b'\xff' + marker + pack('>H', len(data) + 2) + data

def _forward_dct(block):
    # Ref.: Independent JPEG Group's "jfdctint.c", v8d
    # Copyright (C) 1994-1996, Thomas G. Lane
    # Modification developed 2003-2009 by Guido Vollbeding
    for i in range(0, 64, 8):
        tmp0 = block[i] + block[i+7]
        tmp1 = block[i+1] + block[i+6]
        tmp2 = block[i+2] + block[i+5]
        tmp3 = block[i+3] + block[i+4]
        tmp10 = tmp0 + tmp3
        tmp12 = tmp0 - tmp3
        tmp11 = tmp1 + tmp2
        tmp13 = tmp1 - tmp2
        tmp0 = block[i] - block[i+7]
        tmp1 = block[i+1] - block[i+6]
        tmp2 = block[i+2] - block[i+5]
        tmp3 = block[i+3] - block[i+4]
        block[i] = (tmp10 + tmp11 - 8*128) << 2 # PASS1_BITS
        block[i+4] = (tmp10 - tmp11) << 2
        z1 = (tmp12 + tmp13)*4433 # FIX_0_541196100
        z1 += 1024 # 1 << (CONST_BITS-PASS1_BITS-1)
        block[i+2] = (z1 + tmp12*6270) >> 11 # FIX_0_765366865
        block[i+6] = (z1 - tmp13*15137) >> 11 # FIX_1_847759065
        tmp10 = tmp0 + tmp3
        tmp11 = tmp1 + tmp2
        tmp12 = tmp0 + tmp2
        tmp13 = tmp1 + tmp3
        z1 = (tmp12 + tmp13)*9633 # FIX_1_175875602
        z1 += 1024 # 1 << (CONST_BITS-PASS1_BITS-1)
        tmp0 = tmp0*12299 # FIX_1_501321110
        tmp1 = tmp1*25172 # FIX_3_072711026
        tmp2 = tmp2*16819 # FIX_2_053119869
        tmp3 = tmp3*2446 # FIX_0_298631336
        tmp10 = tmp10*-7373 # FIX_0_899976223
        tmp11 = tmp11*-20995 # FIX_2_562915447
        tmp12 = tmp12*-3196 # FIX_0_390180644
        tmp13 = tmp13*-16069 # FIX_1_961570560
        tmp12 += z1
        tmp13 += z1
        block[i+1] = (tmp0 + tmp10 + tmp12) >> 11
        block[i+3] = (tmp1 + tmp11 + tmp13) >> 11
        block[i+5] = (tmp2 + tmp11 + tmp12) >> 11
        block[i+7] = (tmp3 + tmp10 + tmp13) >> 11
    for i in range(8):
        tmp0 = block[i] + block[i+56]
        tmp1 = block[i+8] + block[i+48]
        tmp2 = block[i+16] + block[i+40]
        tmp3 = block[i+24] + block[i+32]
        tmp10 = tmp0 + tmp3 + 2 # 1 << (PASS1_BITS-1)
        tmp12 = tmp0 - tmp3
        tmp11 = tmp1 + tmp2
        tmp13 = tmp1 - tmp2
        tmp0 = block[i] - block[i+56]
        tmp1 = block[i+8] - block[i+48]
        tmp2 = block[i+16] - block[i+40]
        tmp3 = block[i+24] - block[i+32]
        block[i] = (tmp10 + tmp11) >> 2 # PASS1_BITS
        block[i+32] = (tmp10 - tmp11) >> 2
        z1 = (tmp12 + tmp13)*4433 # FIX_0_541196100
        z1 += 16384 # 1 << (CONST_BITS+PASS1_BITS-1)
        block[i+16] = (z1 + tmp12*6270) >> 15 # FIX_0_765366865, CONST_BITS+PASS1_BITS
        block[i+48] = (z1 - tmp13*15137) >> 15 # FIX_1_847759065
        tmp10 = tmp0 + tmp3
        tmp11 = tmp1 + tmp2
        tmp12 = tmp0 + tmp2
        tmp13 = tmp1 + tmp3
        z1 = (tmp12 + tmp13)*9633 # FIX_1_175875602
        z1 += 16384 # 1 << (CONST_BITS+PASS1_BITS-1)
        tmp0 = tmp0*12299 # FIX_1_501321110
        tmp1 = tmp1*25172 # FIX_3_072711026
        tmp2 = tmp2*16819 # FIX_2_053119869
        tmp3 = tmp3*2446 # FIX_0_298631336
        tmp10 = tmp10*-7373 # FIX_0_899976223
        tmp11 = tmp11*-20995 # FIX_2_562915447
        tmp12 = tmp12*-3196 # FIX_0_390180644
        tmp13 = tmp13*-16069 # FIX_1_961570560
        tmp12 += z1
        tmp13 += z1
        block[i+8] = (tmp0 + tmp10 + tmp12) >> 15 # CONST_BITS+PASS1_BITS
        block[i+24] = (tmp1 + tmp11 + tmp13) >> 15
        block[i+40] = (tmp2 + tmp11 + tmp12) >> 15
        block[i+56] = (tmp3 + tmp10 + tmp13) >> 15


class _entropy_encoder(object):
    
    def __init__(self):
        c = [i for j in reversed(range(16)) for i in range(1 << j)]
        s = [j for j in range(1, 16) for i in range(1 << (j - 1))]
        s = [0] + s + list(reversed(s))
        self.codes, self.sizes = c, s
        self.value, self.length = 0, 0
        self.data = bytearray()
    
    def encode(self, previous, block, scale, dc, ac):
        _forward_dct(block)
        for i in range(64):
            block[i] = (((block[i] << 1)//scale[i]) + 1) >> 1
        d = block[0] - previous
        if d == 0:
            self.write(*dc[0])
        else:
            s = self.sizes[d]
            self.write(*dc[s])
            self.write(self.codes[d], s)
        n = 0
        for i in _z_z:
            if block[i] == 0:
                n += 1
            else:
                while n > 15:
                    self.write(*ac[0xf0])
                    n -= 16
                s = self.sizes[block[i]]
                self.write(*ac[n*16 + s])
                self.write(self.codes[block[i]], s)
                n = 0
        if n > 0:
            self.write(*ac[0])
        return block[0]
    
    def write(self, value, length):
        data = self.data
        value += (self.value << length)
        length += self.length
        while length > 7:
            length -= 8
            v = (value >> length) & 0xff
            if v == 0xff:
                data.append(0xff)
                data.append(0)
            else:
                data.append(v)
        self.value = value & 0xff
        self.length = length
    
    def dump(self):
        return self.data

class image():
    def __init__(self, width: int, height: int, kind: str, data: bytes):
        if kind not in ('g', 'rgb', 'cmyk'):
            raise ValueError('Invalid image kind.')
        self.width = width
        self.height = height
        self.kind = kind
        self.n = 1 if kind == 'g' else 3 if kind == 'rgb' else 4
        self.data = data

def serialize(image, quality):
    w, h, n, data = image.width, image.height, image.n, image.data
    ydc = udc = vdc = kdc = 0
    yblock, ublock, vblock, kblock = [0]*64, [0]*64, [0]*64, [0]*64
    lq = _quantization_table(_luminance_quantization, quality)
    ld = _huffman_table(_ld_lengths, _ld_values)
    la = _huffman_table(_la_lengths, _la_values)
    ls = _scale_factor(lq)
    if n == 3:
        cq = _quantization_table(_chrominance_quantization, quality)
        cd = _huffman_table(_cd_lengths, _cd_values)
        ca = _huffman_table(_ca_lengths, _ca_values)
        cs = _scale_factor(cq)
    e = _entropy_encoder()
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            i = 0
            for yy in range(y, y + 8):
                for xx in range(x, x + 8):
                    j = (min(xx, w - 1) + min(yy, h - 1)*w)*n
                    if n == 1:
                        yblock[i] = data[j]
                    elif n == 3:
                        r, g, b = data[j], data[j + 1], data[j + 2]
                        yblock[i] = (19595*r + 38470*g + 7471*b + 32768) >> 16
                        ublock[i] = (-11056*r - 21712*g + 32768*b + 8421376) >> 16
                        vblock[i] = (32768*r - 27440*g - 5328*b + 8421376) >> 16
                    else: # n == 4
                        yblock[i] = data[j]
                        ublock[i] = data[j + 1]
                        vblock[i] = data[j + 2]
                        kblock[i] = data[j + 3]
                    i += 1
            ydc = e.encode(ydc, yblock, ls, ld, la)
            if n == 3:
                udc = e.encode(udc, ublock, cs, cd, ca)
                vdc = e.encode(vdc, vblock, cs, cd, ca)
            elif n == 4:
                udc = e.encode(udc, ublock, ls, ld, la)
                vdc = e.encode(vdc, vblock, ls, ld, la)
                kdc = e.encode(kdc, kblock, ls, ld, la)
    e.write(0x7f, 7) # padding
    app = b'Adobe\0\144\200\0\0\0\0' # tag, version, flags0, flags1, transform
    sof = b'\10' + pack('>HHB', h, w, n) + b'\1\21\0' # depth, id, sampling, qtable
    sos = pack('B', n) + b'\1\0' # id, htable
    dqt = b'\0' + lq
    dht = b'\0' + _ld_lengths + _ld_values + b'\20' + _la_lengths + _la_values
    if n == 3:
        sof += b'\2\21\1\3\21\1'
        sos += b'\2\21\3\21'
        dqt += b'\1' + cq
        dht += b'\1' + _cd_lengths + _cd_values + b'\21' + _ca_lengths + _ca_values
    elif n == 4:
        sof += b'\2\21\0\3\21\0\4\21\0'
        sos += b'\2\0\3\0\4\0'
    sos += b'\0\77\0' # start, end, approximation
    return b''.join([
        b'\xff\xd8', # SOI
        _marker_segment(b'\xee', app) if n == 4 else b'',
        _marker_segment(b'\xdb', dqt),
        _marker_segment(b'\xc0', sof),
        _marker_segment(b'\xc4', dht),
        _marker_segment(b'\xda', sos),
        e.dump(),
        b'\xff\xd9']) # EOI

def bgr_to_rgb(data):
    # Assume data is a flat bytearray in BGR format
    for i in range(0, len(data), 3):
        data[i], data[i+2] = data[i+2], data[i]  # Swap the B and R values
    return data


def take_screenshot(container, output_file):
    snapshot = lv.snapshot_take(container, lv.COLOR_FORMAT.NATIVE)
    data_size = snapshot.data_size
    buffer = snapshot.data.__dereference__(data_size)
    img = image(container.get_width(), container.get_height(), "rgb", bgr_to_rgb(buffer))
    with open(output_file, 'wb') as f:
        f.write(serialize(img, 100))

Keep in mind that I have compiled LVGL micropython using
#define LV_USE_STDLIB_MALLOC LV_STDLIB_CLIB
My intention is to run it only on the Unix port anyway. Running this on hardware might actually not be feasible memory-wise.

HEYYYYY you marked the wrong post as the solution!! LOL… I’m joking. It did work for you though, which is a good thing. I didn’t know what port you were using, and I didn’t check what the memory use would be. The code could be moved into its own source file, with the import done inside the function that creates the JPG file. At the end of that function you would delete the imported module, delete it from sys.modules as well, and call gc.collect() to clean up the memory. That would effectively unload the module until the next time you need to create a JPG file, and it would keep the module from staying resident in memory between uses, since all of the large arrays stored at the module level eat up a lot of memory.
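
A minimal sketch of that unload-after-use pattern, assuming the encoder is kept in its own jpeg.py module exposing the image class and serialize() function from the script above:

import sys
import gc

def save_jpg(raw_rgb, width, height, path, quality=100):
    import jpeg  # heavy module: only load it while it is needed

    img = jpeg.image(width, height, 'rgb', raw_rgb)
    with open(path, 'wb') as f:
        f.write(jpeg.serialize(img, quality))

    # Drop all references and evict the module so its large
    # module-level tables can be reclaimed by the next collection.
    del img, jpeg
    del sys.modules['jpeg']
    gc.collect()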

Another option would be to move all of those arrays into a function, so they are only allocated while the function runs and get released when it exits. You would still have the memory use from the large amount of code that gets loaded, but it wouldn’t be as much as having the arrays defined at the module level.

Another option is to use array.array instead of bytes. I believe that an array.array uses less memory, but you would have to check whether that is actually the case.
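
If you want to check that, a rough sketch is to compare free heap before and after allocating each type. This assumes gc.mem_free() is available on your port (it should be on the unix port), and the numbers are only approximate:

import gc
from array import array

def heap_used(factory):
    gc.collect()
    before = gc.mem_free()
    obj = factory()          # allocate the object under test
    gc.collect()
    return before - gc.mem_free(), obj

data = list(range(256)) * 4

bytes_used, _b = heap_used(lambda: bytes(data))
array_used, _a = heap_used(lambda: array('B', data))
print('bytes:', bytes_used, 'array.array:', array_used)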

Since you are using the unix port, I could write a simple user C module that creates a MicroPython binding to one of the many JPEG encoders available for unix. It is not much more complex to do, and it would certainly be faster. LVGL is designed around playback rather than saving, so decoder functions are available but not encoder functions.

libjpeg-turbo is written in C, and I might be able to automatically generate the code needed to access it from inside MicroPython.

What you would do is something along these lines.

You would clone lv_micropython, then in that folder create another folder “jpeg”, and inside that folder create another one also called “jpeg”. Go into the second jpeg folder and create a file named micropython.mk, and in that file put this code:

################################################################################
# JPEG build rules

MOD_DIR := $(USERMOD_DIR)

LVGL_BINDING_DIR = $(MOD_DIR)/../../lib/lv_bindings
JPEG_PP = $(BUILD)/jpeglib/jpeg.pp.c
JPEG_MPY = $(BUILD)/jpeglib/jpeg_mpy.c
JPEG_MPY_METADATA = $(BUILD)/jpeglib/jpeg_mpy.json
JPEG_HEADER = jpeglib.h

$(JPEG_MPY): $(JPEG_HEADER) $(LVGL_BINDING_DIR)/gen/gen_mpy.py 
	$(ECHO) "JPEG-GEN $@"
	$(Q)mkdir -p $(dir $@)
	$(Q)$(CPP) $(CFLAGS_USERMOD) -DPYCPARSER -x c -I $(LVGL_BINDING_DIR)/pycparser/utils/fake_libc_include $(JPEG_HEADER) > $(JPEG_PP)
	$(Q)$(PYTHON) $(LVGL_BINDING_DIR)/gen/gen_mpy.py -M jpeg -MP jpeg -MD $(JPEG_MPY_METADATA) -E $(JPEG_PP) $(JPEG_HEADER) > $@
	
	
.PHONY: JPEG_MPY
JPEG_MPY: $(JPEG_MPY)

SRC_USERMOD_C += $(JPEG_MPY)
LDFLAGS_USERMOD += -llibjpeg

Install the libjpeg package. On Ubuntu:

sudo apt-get install libjpeg-turbo8-dev

Do the typical make submodules for the unix port, and when you want to compile, add the following to the end of the make command:

USER_C_MODULES="{lv_micropython_path}/jpeg"

replacing {lv_micropython_path} with the absolute path to the lv_micropython folder.

That should generate the code and compile libjpeg-turbo so it can be accessed from inside of MicroPython. The module name is going to be jpeg. Any part of the API that begins with “jpeg_” will have that prefix removed, so if there were a function named “jpeg_create”, you would access it using:

import jpeg

jpeg.create

The information above is not tested but it will give you the general idea of how to go about doing it.

I am not the one who designed the build system in micropython, and the whole thing with needing to make a folder inside another for the user C module exists so that you can compile multiple user C modules. The build script in micropython looks for “micropython.mk” files in the child folders of the path specified in the USER_C_MODULES make command.


That’s great!

I guess I learned a little thing about micropython modules today, didn’t expect that.

But yeah, the way I did it now really works fine for me, although later I might switch to what you suggested for speed, since the generator will be doing this a lot and repeatedly, and shaving off a few seconds here and there might prove worthwhile.

As of right now, I’m fine with it working the way it is, since I need to focus my attention on the LVGL side of things to create random UIs.

It would keep the module from staying resident in memory between uses, since all of the large arrays stored at the module level eat up a lot of memory.

I felt like it might not be feasible memory-wise, since it uses a lot of extra bytes during encoding, and a device might not have enough memory to begin with to even create the JPG.

That all depends on whether it is able to do an in-place conversion to the JPEG format. You might need to make the buffer a little bit larger to accommodate the header data. I know there is compression involved, so in the end the data would be smaller, but there needs to be a place to put things while the compression is taking place. It would use less memory to make the buffer, say, 10% or 15% larger than to make a complete second buffer to compress into. Since the data is being written to a file, the buffer could be zeroed after the data has been written and then filled again when the next snapshot is taken.

I am not sure why you are doing the snapshots; you might want to take a look at this…

I wrote this script, and what it does is run a unix-compiled micropython, leveraging stdin and stdout to send data between the running MicroPython and a CPython script. The CPython script is what runs MicroPython using subprocess. The frame buffer data, instead of going to a display, gets hexlified and printed out so the CPython script can read it, unhexlify it back into raw RGB frame buffer data, and use the Python Imaging Library (PIL) to write it to a PNG file. That image is also used for comparison, to ensure the data in the frame buffer aligns with what we have stored. If the two don’t match, the test fails.

It’s actually pretty damned ingenious what I did, because it also captures actual tracebacks from MicroPython as well. So no code had to be added to handle every specific type of error that can occur. If an error occurs, that error gets pushed up to CPython so the exact traceback can be seen. That makes it easier to locate problems.

It is partially set up so that more than one test can be run. I have the full code for that here on my local machine. For LVGL purposes only the single screenshot needed to be compared, but eventually, when we implement a full test suite for MicroPython, this framework that I wrote is going to be able to handle all of the tests really easily. It is able to capture the buffer data and build an animated PNG with it, so even animations can be tested to ensure the output is working properly.


I am not sure why you are doing the snapshots; you might want to take a look at this…

=> GitHub - HackXIt/lvgl_ui_generator_v2: A project to generate random user interfaces using the LVGL simulator for PC (v2 based on micropython)

In short, I want to create somewhat realistic UI designs to generate snapshots of, to feed that data into a machine learning model, so that it can learn to classify and locate LVGL widgets.

The UI designs need to somewhat resemble what is possible to create, and also allow for enough diversity to be useful for training.

OK, so it sounds like you are only running this on linux, and I believe the best way to go about it is something along the lines of what I wrote in the link I provided in my last post. The driving script runs in CPython. It runs MicroPython as a subprocess, leveraging the stdin and stdout of that subprocess. I am able to feed code in using the stdin of the subprocess and then capture the frame buffer data using its stdout. This way I am able to handle the raw frame buffer data in CPython, where there is a full-featured set of libraries available, like PIL (Python Imaging Library, aka “Pillow”). With PIL it is very easy to save the raw buffer data into a plethora of different image formats.
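
For illustration, a rough sketch of the CPython side of that pattern; the script name, the marker strings, and the frame buffer size here are made-up placeholders, not the actual test framework:

import binascii
import subprocess
from PIL import Image

WIDTH, HEIGHT = 420, 320  # must match the MicroPython display size

# Run the unix MicroPython binary as a subprocess and capture its stdout.
# The MicroPython script is assumed to hexlify the raw RGB frame buffer and
# print it on one line between FRAME_START and FRAME_END markers.
proc = subprocess.run(
    ['./micropython', 'ui_script.py'],
    stdout=subprocess.PIPE, check=True)

hex_line = proc.stdout.split(b'FRAME_START')[1].split(b'FRAME_END')[0].strip()
raw_rgb = binascii.unhexlify(hex_line)

# Hand the raw RGB data to Pillow, which can save it in many formats.
Image.frombytes('RGB', (WIDTH, HEIGHT), raw_rgb).save('screenshot.png')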

I am fine with my current approach, I only require what I already have now.

In the future I might look into your code, but for the LVGL part I really just needed a “comfortable” way of creating UIs, which is why I’ve built the generator.

Your approach sounds nice too, being able to feed in code and all, but it would still require me to write an actual UI in code, which is a bit out of scope for what I need at the moment.
I will definitely consider it after finishing my BA though.

I tried applying your suggested solution with the jpeg/jpeg folder inside lv_micropython and the created micropython.mk. I am forced to use libjpeg62-turbo-dev, and I am encountering very annoying issues with libc when compiling:

micropython.mk

################################################################################
# JPEG build rules

MOD_DIR := $(USERMOD_DIR)

LVGL_BINDING_DIR = $(MOD_DIR)/../../lib/lv_bindings
JPEG_PP = $(BUILD)/jpeglib/jpeg.pp.c
JPEG_MPY = $(BUILD)/jpeglib/jpeg_mpy.c
JPEG_MPY_METADATA = $(BUILD)/jpeglib/jpeg_mpy.json

# Path to the existing fake libc include directory
FAKE_LIBC_INCLUDE = $(LVGL_BINDING_DIR)/pycparser/utils/fake_libc_include

# Use the full path for the JPEG header
JPEG_HEADER = /usr/include/jpeglib.h

$(JPEG_MPY): $(JPEG_HEADER) $(LVGL_BINDING_DIR)/gen/gen_mpy.py 
	$(ECHO) "JPEG-GEN $@"
	$(Q)mkdir -p $(dir $@)
	$(Q)$(CPP) $(CFLAGS_USERMOD) -nostdinc -DPYCPARSER -x c \
		-I $(FAKE_LIBC_INCLUDE) \
		$(JPEG_HEADER) > $(JPEG_PP)
	$(Q)$(PYTHON) $(LVGL_BINDING_DIR)/gen/gen_mpy.py -M jpeg -MP jpeg -MD $(JPEG_MPY_METADATA) -E $(JPEG_PP) $(JPEG_HEADER) > $@
	
.PHONY: JPEG_MPY
JPEG_MPY: $(JPEG_MPY)

SRC_USERMOD_C += $(JPEG_MPY)
LDFLAGS_USERMOD += -ljpeg

# Debugging: Print the include path and files
.PHONY: debug
debug:
	@echo "FAKE_LIBC_INCLUDE: $(FAKE_LIBC_INCLUDE)"
	@echo "JPEG_HEADER: $(JPEG_HEADER)"
	@echo "LVGL_BINDING_DIR: $(LVGL_BINDING_DIR)"
	@echo "FILES: $(wildcard $(JPEG_HEADER))"

Compile errors:

$ make -C ports/unix USER_C_MODULES="$(pwd)/jpeg" V=1
make: Entering directory '/home/rini-debian/git-stash/lvgl-ui-detector/lvgl_ui_generator_v2/lv_micropython/ports/unix'
Including User C Module from /home/rini-debian/git-stash/lvgl-ui-detector/lvgl_ui_generator_v2/lv_micropython/jpeg/jpeg
JPEG-GEN build-standard/jpeglib/jpeg_mpy.c
mkdir -p build-standard/jpeglib/
gcc -E  -nostdinc -DPYCPARSER -x c \
        -I ../../lib/lv_bindings/pycparser/utils/fake_libc_include \
        /usr/include/jpeglib.h > build-standard/jpeglib/jpeg.pp.c
python3 ../../lib/lv_bindings/gen/gen_mpy.py -M jpeg -MP jpeg -MD build-standard/jpeglib/jpeg_mpy.json -E build-standard/jpeglib/jpeg.pp.c /usr/include/jpeglib.h > build-standard/jpeglib/jpeg_mpy.c
Traceback (most recent call last):
  File "/home/rini-debian/git-stash/lvgl-ui-detector/lvgl_ui_generator_v2/lv_micropython/ports/unix/../../lib/lv_bindings/gen/gen_mpy.py", line 294, in <module>
    ast = parser.parse(s, filename='<none>')
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rini-debian/git-stash/lvgl-ui-detector/lvgl_ui_generator_v2/lv_micropython/lib/lv_bindings/gen/../pycparser/pycparser/c_parser.py", line 147, in parse
    return self.cparser.parse(
           ^^^^^^^^^^^^^^^^^^^
  File "/home/rini-debian/git-stash/lvgl-ui-detector/lvgl_ui_generator_v2/lv_micropython/lib/lv_bindings/gen/../pycparser/pycparser/ply/yacc.py", line 331, in parse
    return self.parseopt_notrack(input, lexer, debug, tracking, tokenfunc)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rini-debian/git-stash/lvgl-ui-detector/lvgl_ui_generator_v2/lv_micropython/lib/lv_bindings/gen/../pycparser/pycparser/ply/yacc.py", line 1199, in parseopt_notrack
    tok = call_errorfunc(self.errorfunc, errtoken, self)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rini-debian/git-stash/lvgl-ui-detector/lvgl_ui_generator_v2/lv_micropython/lib/lv_bindings/gen/../pycparser/pycparser/ply/yacc.py", line 193, in call_errorfunc
    r = errorfunc(token)
        ^^^^^^^^^^^^^^^^
  File "/home/rini-debian/git-stash/lvgl-ui-detector/lvgl_ui_generator_v2/lv_micropython/lib/lv_bindings/gen/../pycparser/pycparser/c_parser.py", line 1931, in p_error
    self._parse_error(
  File "/home/rini-debian/git-stash/lvgl-ui-detector/lvgl_ui_generator_v2/lv_micropython/lib/lv_bindings/gen/../pycparser/pycparser/plyparser.py", line 67, in _parse_error
    raise ParseError("%s: %s" % (coord, msg))
pycparser.plyparser.ParseError: /usr/include/jpeglib.h:792:3: before: size_t
make: *** [/home/rini-debian/git-stash/lvgl-ui-detector/lvgl_ui_generator_v2/lv_micropython/jpeg/jpeg/micropython.mk:21: build-standard/jpeglib/jpeg_mpy.c] Error 1
make: *** Deleting file 'build-standard/jpeglib/jpeg_mpy.c'
make: Leaving directory '/home/rini-debian/git-stash/lvgl-ui-detector/lvgl_ui_generator_v2/lv_micropython/ports/unix'

Do you have any suggestions or possible fixes?

I also had a lengthy and even more annoying conversation with ChatGPT about this, and that AI is too dumb for it, since it repeatedly suggested creating a fake libc, which already exists in pycparser and already includes everything I could think of.

I even used the -nostdinc as suggested by On parsing C, type declarations and fake headers - Eli Bendersky's website

So yea… I’m at a dead end with this one.

That is what you need to look at. There is something in jpeglib.h that is making pycparser pitch a fit.

libjpeg62-turbo-dev is really old… what flavor of debian are you using? Ubuntu… etc… and what version is that flavor?

libjpeg62-turbo-dev is over a year old (released 1/31/23), so I would recommend cloning libjpeg-turbo and compiling it locally so you have a more up-to-date version.

Give me a day to hammer out an example of how to compile it into a user c module. and properly link to the thing from the user c module

This is what you would be looking at for a makefile.

The trick with this is that it is going to be a user C module, and to get it to work you will want to place the code below into a file named micropython.mk. That file needs to be in a folder that is a child of an empty folder, like this:

some_folder_name/libjpeg_turbo/micropython.mk

When you use the USER_C_MODULES option when calling make, be sure to use an absolute path, as it makes things easier; you point it at the empty folder. Because LVGL is nested into the build system, you shouldn’t have an issue with it compiling for unix. It will get more complicated if you want to compile for esp32 or rp2, as those ports use cmake as the build system; let me know if you plan on compiling for those ports.

The module that you will want to import is jpeg. If there are any structures or functions that begin with jpeg_, you drop that prefix: jpeg_has_multiple_scans becomes jpeg.has_multiple_scans.

################################################################################
# libjpeg-turbo build rules


MOD_DIR := $(abspath $(USERMOD_DIR))
LIB_JPEG = $(MOD_DIR)/libjpeg-turbo
LIB_JPEG_BUILD = $(abspath $(BUILD))/libjpeg-turbo/build

LVGL_BINDING_DIR = $(abspath $(TOP))/lib/lv_bindings

LIB_JPEG_PP = $(abspath $(BUILD))/libjpeg-turbo/libjpeg_turbo.pp.c
LIB_JPEG_MPY = $(abspath $(BUILD))/libjpeg-turbo/libjpeg_turbo_mpy.c
LIB_JPEG_MPY_METADATA = $(abspath $(BUILD))/libjpeg-turbo/libjpeg_turbo_mpy.json

CFLAGS_USERMOD += -Wno-unused-function
CFLAGS_USERMOD += -Wno-missing-field-initializers
CFLAGS_USERMOD += -I$(LIB_JPEG_BUILD)
CFLAGS_USERMOD += -I$(LIB_JPEG)

SRC_USERMOD_C += $(LIB_JPEG_MPY)

LDFLAGS_USERMOD += -L$(LIB_JPEG_BUILD)
LDFLAGS_USERMOD += -l:libjpeg.a

# Use the full path for the JPEG header
JPEG_HEADER = $(LIB_JPEG)/jpeglib.h

# Note: each recipe line runs in its own shell, so cd is chained with the command it affects.
$(JPEG_HEADER):
	$(ECHO) "LIB_JPEG-BUILD $@"
	$(Q)cd $(MOD_DIR) && git clone https://github.com/libjpeg-turbo/libjpeg-turbo
	$(Q)mkdir -p $(LIB_JPEG_BUILD)
	$(Q)cd $(LIB_JPEG_BUILD) && cmake -G"Unix Makefiles" -DENABLE_SHARED=FALSE -DENABLE_STATIC=TRUE -DCMAKE_BUILD_TYPE=Release $(LIB_JPEG)
	$(Q)$(MAKE) -C $(LIB_JPEG_BUILD)


.PHONY: JPEG_HEADER
JPEG_HEADER: $(JPEG_HEADER)


$(LIB_JPEG_PP): $(JPEG_HEADER)
	$(ECHO) "LIB_JPEG-PP $@"
	$(Q)mkdir -p $(dir $@)
	$(Q)$(CPP) $(LIB_JPEG_CFLAGS) -DPYCPARSER -x c -I $(LVGL_BINDING_DIR)/pycparser/utils/fake_libc_include $(JPEG_HEADER) > $(LIB_JPEG_PP)

.PHONY: LIB_JPEG_PP
LIB_JPEG_PP: $(LIB_JPEG_PP)


$(LIB_JPEG_MPY): $(LVGL_BINDING_DIR)/gen/gen_mpy.py $(LIB_JPEG_PP)
	$(ECHO) "LIB_JPEG-MPY $@"
	$(Q)$(PYTHON) $(LVGL_BINDING_DIR)/gen/gen_mpy.py -M jpeg -MP jpeg -MD $(LIB_JPEG_MPY_METADATA) -E $(LIB_JPEG_PP) $(JPEG_HEADER) > $@

.PHONY: LIB_JPEG_MPY
LIB_JPEG_MPY: $(LIB_JPEG_MPY)

libjpeg62-turbo-dev is really old… what flavor of debian are you using? Ubuntu… etc… and what version is that flavor?

I’m working on WSL2 here:
Linux HACKXIT 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 GNU/Linux

PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian

libjpeg62-turbo-dev is just what’s being offered there, so I did not think too deeply about it; I was more concerned with the error that pycparser was throwing.

The code in newer versions has changed at the line the error is thrown for, so you might have better results using a newer version of the library.