I used lv_font_conv to convert a TTF font file into a binary font, but it displays incorrectly in the simulator. The specific command and output are as follows:
lewis@ubuntu:~$ env DEBUG=* lv_font_conv --font BenMoJingYuan/BenMoJingYuan.ttf -r 0x20-0x7F --size 48 --format bin --bpp 3 --no-compress -o output.font
font last_id: 96 +0ms
font minY: -13 +1ms
font maxY: 44 +0ms
font glyphIdFormat: 0 +1ms
font kerningScale: 1 +0ms
font advanceWidthFormat: 0 +0ms
font xy_bits: 6 +0ms
font wh_bits: 6 +0ms
font advanceWidthBits: 7 +0ms
font monospaced: false +0ms
font indexToLocFormat: 0 +11ms
font subpixels_mode: 0 +0ms
font.table.head table size = 48 +0ms
font.table.cmap 1 subtable(s): 0 "format 0", 1 "sparse" +0ms
font.table.cmap table size = 28 +0ms
font.table.loca table size = 204 +0ms
font.table.glyf table size = 31692 +0ms
font font size: 31972 +4ms
I suspect this warning also applies to binary files loaded by LVGL, but since the bin format is intended to be generic, the converter does not display the warning for it.
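For context, this is roughly how I load the converted font in the simulator. A minimal sketch, assuming the LVGL v7 API; "S:" must be a registered lv_fs driver, and the path is just where I put the file:

#include "lvgl/lvgl.h"

void show_bin_font(void)
{
    /* lv_font_load() reads the whole binary font into RAM and
     * builds an lv_font_t from it; it returns NULL on failure. */
    lv_font_t * font = lv_font_load("S:/output.font");
    if(font == NULL) return;

    static lv_style_t style;
    lv_style_init(&style);
    lv_style_set_text_font(&style, LV_STATE_DEFAULT, font);

    lv_obj_t * label = lv_label_create(lv_scr_act(), NULL);
    lv_obj_add_style(label, LV_LABEL_PART_MAIN, &style);
    lv_label_set_text(label, "Hello");
}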
Another problem: if I generate the font binary with a wide character range, the simulator crashes immediately with Segmentation fault (core dumped). Here are the steps to reproduce:
env DEBUG=* lv_font_conv --font BenMoJingYuan/BenMoJingYuan.ttf -r 0x20-0x7F -r 0x4E00-0x9FA5 --size 48 --format bin --bpp 1 --no-compress -o myfont.font
font last_id: 4617 +0ms
font minY: -13 +2ms
font maxY: 45 +0ms
font glyphIdFormat: 1 +1ms
font kerningScale: 1 +1ms
font advanceWidthFormat: 0 +0ms
font xy_bits: 6 +2ms
font wh_bits: 6 +1ms
font advanceWidthBits: 7 +1ms
font monospaced: false +0ms
font indexToLocFormat: 1 +149ms
font subpixels_mode: 0 +0ms
font.table.head table size = 48 +0ms
font.table.cmap 10 subtable(s): 4 "format 0", 6 "sparse" +0ms
font.table.cmap table size = 9044 +19ms
font.table.loca table size = 18480 +0ms
font.table.glyf table size = 1074052 +0ms
font font size: 1101624 +117ms
Oh my god, it really is!
Reading such a large font file (over 1 MB here) takes too much memory. It is completely different from the approach I used before.
On bare metal I used to compute the file offset directly from the character code of the font in use, but the encoding here is different: with UTF-8 encoded text, the actual offset address in the file cannot be derived directly from the text to be displayed.
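To make the difference concrete, a hypothetical sketch (plain C, nothing LVGL-specific): with a fixed-width encoding such as GB2312, the glyph's file offset falls out of simple arithmetic on the character code, while UTF-8 text first has to be decoded to a Unicode code point, and even the code point is only a lookup key, not an offset, because the glyphs in the bin font are variable-sized.

#include <stdint.h>

/* Fixed-width case (classic GB2312 bare-metal bitmap font): the
 * offset is pure arithmetic on the two-byte character code. */
static uint32_t gb2312_glyph_offset(uint8_t hi, uint8_t lo, uint32_t glyph_size)
{
    return (((uint32_t)(hi - 0xA1)) * 94u + (lo - 0xA1)) * glyph_size;
}

/* UTF-8 case: decode one character to a code point first; the
 * code point then still needs a cmap lookup to find the glyph. */
static uint32_t utf8_decode(const uint8_t * s, uint32_t * cp)
{
    if(s[0] < 0x80)           { *cp = s[0]; return 1; }
    if((s[0] & 0xE0) == 0xC0) { *cp = ((s[0] & 0x1Fu) << 6) | (s[1] & 0x3F); return 2; }
    if((s[0] & 0xF0) == 0xE0) { *cp = ((s[0] & 0x0Fu) << 12) | ((s[1] & 0x3Fu) << 6) | (s[2] & 0x3F); return 3; }
    *cp = ((s[0] & 0x07u) << 18) | ((s[1] & 0x3Fu) << 12) | ((s[2] & 0x3Fu) << 6) | (s[3] & 0x3F);
    return 4;
}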
Is there any way to reduce the heap memory used? The only compromise I can think of is to split the large file into several smaller files and search through all of them.
My idea was to load only the top-level font information and the starting block number of the binary data (on the SD card or eMMC), and read the real glyph data only when it is needed. But that would need some rewriting of lv_font_load.
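Something like the sketch below, assuming the LVGL v7 API. lv_font_t already exposes get_glyph_bitmap and get_glyph_dsc callbacks, so in principle a rewritten lv_font_load could install file-backed versions of them. Everything named my_* here is hypothetical, and a real implementation would need the same treatment for get_glyph_dsc plus proper cmap handling:

#include "lvgl/lvgl.h"

/* Hypothetical context: keep only the small tables in RAM and
 * leave the big glyf table in the file. */
typedef struct {
    lv_fs_file_t file;        /* open handle to the .font file     */
    uint32_t     glyf_base;   /* file offset of the glyf table     */
    uint32_t   * loca;        /* per-glyph offsets, loaded once    */
    uint8_t      buf[512];    /* scratch buffer for a single glyph */
} my_lazy_font_ctx_t;

/* Stub: a real version would walk the cmap table (small enough to
 * keep in RAM) to map a Unicode code point to a glyph id. */
static uint32_t my_cmap_lookup(my_lazy_font_ctx_t * ctx, uint32_t letter)
{
    (void)ctx; (void)letter;
    return 0;
}

/* LVGL calls this callback whenever it needs one glyph's bitmap,
 * so we can seek into the file and read just that glyph. */
static const uint8_t * my_get_glyph_bitmap(const lv_font_t * font, uint32_t letter)
{
    my_lazy_font_ctx_t * ctx = font->dsc;   /* lv_font_t's user-data slot */
    uint32_t gid = my_cmap_lookup(ctx, letter);
    uint32_t len = ctx->loca[gid + 1] - ctx->loca[gid];
    uint32_t br;

    lv_fs_seek(&ctx->file, ctx->glyf_base + ctx->loca[gid]);
    lv_fs_read(&ctx->file, ctx->buf, len, &br);
    return ctx->buf;
}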
I have an STM32H743 board, and reading all the binary data from the file into the heap took a lot of time (I was trying to load a Chinese font with a lot of glyphs) and needed a lot of memory.
So reading the glyphs on the fly would be the only reasonable thing to do.
Reading a block from the SD card takes about 1 ms (on my board).
If you have fixed text only, you can include just the needed glyphs in the binary font.
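For example, lv_font_conv's --symbols option can subset exactly the characters the UI shows (the character list and output name below are just placeholders):

lv_font_conv --font BenMoJingYuan/BenMoJingYuan.ttf --symbols "0123456789:.%" --size 48 --format bin --bpp 1 --no-compress -o subset.font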
I may not fully understand the LVGL font structure; what you said is correct. Although what I suggested is feasible, it is not very practical, because it cannot cover all of the characters that might be given.
I will study it carefully. Thank you for your reply!
It should, but the current bin support is a temporary kludge. Instead of being loaded directly, the bin file is converted to the “lvgl” format in memory. I guess the current code does not take the lvgl format's limitations into account.
Dynamic font loading (from the file system) was suggested and contributed by a volunteer, and it was merged as “better than nothing” to motivate a rewrite.