Oh my god, it really is!
Reading a large font file takes too much heap memory. It is also completely different from the font files I used before.
On bare metal I used to compute the glyph's file offset directly from the character code of the font in use, but the encoding here is different. With UTF-8-encoded text, I can't derive the actual offset in the font file from the text I want to render.
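To show what I mean by computing the offset arithmetically: a minimal sketch, assuming an HZK16-style GB2312 bitmap font (16x16 glyphs, 32 bytes each, stored contiguously); the names here are my own. With UTF-8 text this breaks down, because Unicode codepoints are sparse and not laid out contiguously in the file.

```c
#include <stdint.h>

#define GLYPH_BYTES 32 /* 16x16 monochrome bitmap = 32 bytes per glyph */

/* GB2312 zones start at 0xA1, with 94 characters per zone, so the
 * byte pair of a character maps straight to a file offset. */
uint32_t gb2312_glyph_offset(uint8_t hi, uint8_t lo)
{
    return ((uint32_t)(hi - 0xA1) * 94 + (uint32_t)(lo - 0xA1)) * GLYPH_BYTES;
}
```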
Is there any way to reduce the heap memory used? The only compromise I can think of is to split the large file into several smaller files and search through all of them.
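Or would something like a seeked lookup work instead of splitting the file? A rough sketch of what I have in mind: keep a sorted codepoint-to-offset index inside the font file and binary-search it with seeks, so only one index record plus one glyph buffer is ever in RAM. The file layout below is entirely made up, and it assumes the records are packed and host-endian.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed layout (not a real format):
 *   header: uint32_t glyph_count
 *   index:  glyph_count records of { codepoint, offset }, sorted by codepoint
 *   data:   glyph bitmaps at the recorded offsets */
typedef struct { uint32_t codepoint; uint32_t offset; } index_rec;

long find_glyph_offset(FILE *f, uint32_t cp)
{
    uint32_t count;
    fseek(f, 0, SEEK_SET);
    if (fread(&count, sizeof count, 1, f) != 1) return -1;

    long lo = 0, hi = (long)count - 1;
    while (lo <= hi) {
        long mid = lo + (hi - lo) / 2;
        index_rec rec;
        /* Seek to one 8-byte index record; nothing else is buffered. */
        fseek(f, 4 + mid * (long)sizeof rec, SEEK_SET);
        if (fread(&rec, sizeof rec, 1, f) != 1) return -1;
        if (rec.codepoint == cp) return (long)rec.offset;
        if (rec.codepoint < cp) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1; /* glyph not present in this font */
}
```

If that works, each lookup costs about log2(N) small reads (around 17 for 100k glyphs), and there would be no need to split the file at all.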