1-byte-per-symbol font + font-awesome symbols

I am using an ESP32 and an LCD display.
I want to make an application which reads text from an SD card (Windows-1251-encoded text, this is important!), displays it, and after editing writes it back to the SD card in Windows-1251.
So, I have created my own font which contains most letters at positions 192 to 255 (and some at other positions; for example, the letter Ё is at position 168), and it does not work, because in lv_conf.h it is written

    #define LV_TXT_ENC LV_TXT_ENC_UTF8

So, I changed this line to

    #define LV_TXT_ENC LV_TXT_ENC_ASCII

And now I can read and write Cyrillic letters. But when I want to add icons from Font Awesome, they do not work. What's the matter, and how can I solve the problem?

Maybe the problem is that you are using an 8-bit character encoding, but Font Awesome needs a 16-bit value to select the right icon/glyph/symbol?

Maybe. But when I use a 16-bit value (LV_TXT_ENC_UTF8), something strange happens:
If I try to display chars number 128…255 (а, б, в, г, д, etc.), it does not work, but the same font works great when I set LV_TXT_ENC_ASCII.
I can display LV_ICON_… only when I use LV_TXT_ENC_ASCII.

In my font I set 63 Cyrillic letters like this:

    {
        .range_start = 192, .range_length = 63, .glyph_id_start = 97,
        .unicode_list = NULL, .glyph_id_ofs_list = NULL, .list_length = 0,
        .type = LV_FONT_FMT_TXT_CMAP_FORMAT0_TINY
    }
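If I read the FORMAT0_TINY cmap right, the lookup is a simple linear offset from the start of a continuous range. A sketch of my understanding (`format0_glyph_id` is my own illustrative helper, not an LVGL function):

```c
#include <stdint.h>

/* Hypothetical helper mirroring what a FORMAT0 cmap lookup presumably does:
   a continuous character range maps one-to-one onto consecutive glyph ids. */
static uint32_t format0_glyph_id(uint32_t letter, uint32_t range_start,
                                 uint32_t range_length, uint32_t glyph_id_start)
{
    if (letter < range_start || letter - range_start >= range_length)
        return 0;                         /* 0 = "not found in this cmap" */
    return glyph_id_start + (letter - range_start);
}
```

Note that with `.range_start = 192` and `.range_length = 63`, the covered bytes are 192…254, so byte 255 (я in Windows-1251) falls just outside the range; that might be worth double-checking.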

How is your source file encoded?

Which one? The sketch .ino is UTF-8; the file that I read/modify is in Win-1251.

The main question is:
Why can I not define the U+0401 char as char U+00A8 (the 168th char)?
I can do it, but it only works when I define LV_TXT_ENC_ASCII.

I meant the C source file with the strings to be displayed.
If I understand correctly, you read all strings from the SD card.
The strings/characters are in an 8-bit encoding (Windows-1251).
You want to show these characters (8-bit values) on the display.
But the font/glyph table may need a 16-bit value to address the correct glyph/icon/symbol.

Normally you have text/strings (from a file, or direct C strings encoded as UTF-8).
UTF-8 is an encoding which allows working with 8-bit (byte) values.
But a Unicode code point is a 16-bit value (in the basic plane, which is enough to encode most languages used worldwide).
In UTF-8 a 16-bit (Unicode) value is turned into a one-, two-, or three-byte sequence. For example, Ё (U+0401) is the two bytes 0xD0 0x81 in UTF-8, but the single byte 0xA8 in Windows-1251.

For selecting a glyph from a font/glyph table with a 16-bit value, the UTF-8 sequence is converted back to a 16-bit value.

The most common way to work with strings from a file in whatever encoding:
Convert the strings into UTF-8-encoded strings.
Display and process the UTF-8 strings as you want, and for saving a changed string back to the file,
convert the string back to the original encoding.

For encoding and decoding characters/strings you can use iconv.
See: iconv - Wikipedia
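For what it's worth, with a libc that ships iconv the conversion is only a few lines. A minimal sketch (note this is for a PC-class libc; as far as I know the ESP32's newlib does not provide iconv, so there a small hand-written table is the usual workaround):

```c
#include <iconv.h>
#include <string.h>

/* Convert a Windows-1251 string to UTF-8 using iconv.
   Returns 0 on success, -1 on failure. */
static int cp1251_to_utf8(const char *in, char *out, size_t out_size)
{
    iconv_t cd = iconv_open("UTF-8", "CP1251");   /* to-encoding, from-encoding */
    if (cd == (iconv_t)-1) return -1;

    char  *inp      = (char *)in;                 /* iconv wants non-const */
    size_t in_left  = strlen(in);
    size_t out_left = out_size - 1;               /* leave room for the NUL */

    size_t rc = iconv(cd, &inp, &in_left, &out, &out_left);
    *out = '\0';                                  /* out was advanced by iconv */
    iconv_close(cd);
    return rc == (size_t)-1 ? -1 : 0;
}
```

The reverse direction for saving is the same call with `iconv_open("CP1251", "UTF-8")`.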

I think I am close to a solution.

I found out that I am able to set the ranges like this:

    static const lv_font_fmt_txt_cmap_t cmaps[] =
    {
        /* from 32 till 127 */
        {
            .range_start = 32, .range_length = 95, .glyph_id_start = 1,
            .unicode_list = NULL, .glyph_id_ofs_list = NULL, .list_length = 0,
            .type = LV_FONT_FMT_TXT_CMAP_FORMAT0_TINY
        },

        /* --------------------- 168th symbol ---------------------- */
        {
            .range_start = 168, .range_length = 1, .glyph_id_start = 96,
            .unicode_list = NULL, .glyph_id_ofs_list = NULL, .list_length = 0,
            .type = LV_FONT_FMT_TXT_CMAP_FORMAT0_TINY
        }
    };


So, I can set both Cyrillic letters and symbols from Font Awesome. But I cannot set some ranges, and I don't know which ranges these are.

Finally, I did it. The code works great: I placed Font Awesome symbols, Latin letters, and Cyrillic letters all within 255 positions (1 byte). The problem was with the letters ё and Ё: when I placed both of them, other letters did not work. I don't know the reason, but now everything works without these 2 symbols.
The next step I need is to re-define symbols such as LV_SYMBOL_BACKSPACE.
Where should I write


? I think it will not work if I place it in the .ino file. Maybe in the .c file of the font?
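In case it helps, overriding a symbol macro is just replacing its string value. A sketch, where `0x7F` is a made-up placeholder byte; the real value is whatever position your font actually maps the backspace glyph to (the stock definition is a multi-byte UTF-8 sequence, which is why it fails under LV_TXT_ENC_ASCII):

```c
/* Sketch: override the stock symbol with a single-byte code.
   0x7F is a placeholder -- substitute the byte your font uses. */
#undef  LV_SYMBOL_BACKSPACE
#define LV_SYMBOL_BACKSPACE  "\x7F"
```

Since a `#define` only affects the translation unit it appears in, such an override probably belongs in a shared header included (after the LVGL headers) by every file that uses the symbol, rather than in the font's .c file.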

I did it, but now I think it will be better not to modify the font's .c file, but to write encoding and decoding functions instead.
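For reference, a minimal sketch of such an encode/decode pair, assuming only ASCII, А–я (Windows-1251 bytes 0xC0–0xFF) and Ё/ё need to be mapped; anything else falls back to '?' (a real table would also cover the rest of the Windows-1251 upper half):

```c
#include <stdint.h>

/* CP1251 byte -> Unicode code point (partial table: ASCII, А..я, Ё, ё). */
static uint16_t cp1251_to_ucs(uint8_t c)
{
    if (c < 0x80)  return c;                      /* ASCII passes through  */
    if (c == 0xA8) return 0x0401;                 /* Ё                     */
    if (c == 0xB8) return 0x0451;                 /* ё                     */
    if (c >= 0xC0) return 0x0410 + (c - 0xC0);    /* А..я -> U+0410..U+044F */
    return '?';                                   /* unmapped punctuation  */
}

/* Unicode code point -> CP1251 byte (inverse of the table above). */
static uint8_t ucs_to_cp1251(uint16_t u)
{
    if (u < 0x80)                   return (uint8_t)u;
    if (u == 0x0401)                return 0xA8;
    if (u == 0x0451)                return 0xB8;
    if (u >= 0x0410 && u <= 0x044F) return (uint8_t)(0xC0 + (u - 0x0410));
    return '?';
}

/* Decode: CP1251 string -> UTF-8. out must hold up to 2*strlen(in)+1 bytes. */
static void cp1251_str_to_utf8(const char *in, char *out)
{
    while (*in) {
        uint16_t u = cp1251_to_ucs((uint8_t)*in++);
        if (u < 0x80) {
            *out++ = (char)u;
        } else {                      /* everything in our table fits in 2 bytes */
            *out++ = (char)(0xC0 | (u >> 6));
            *out++ = (char)(0x80 | (u & 0x3F));
        }
    }
    *out = '\0';
}

/* Encode: UTF-8 string -> CP1251. Handles 1- and 2-byte sequences,
   which is enough for ASCII plus the Cyrillic block. */
static void utf8_str_to_cp1251(const char *in, char *out)
{
    while (*in) {
        uint8_t  b = (uint8_t)*in++;
        uint16_t u = b;
        if ((b & 0xE0) == 0xC0 && *in)            /* 2-byte UTF-8 sequence */
            u = (uint16_t)(((b & 0x1F) << 6) | ((uint8_t)*in++ & 0x3F));
        *out++ = (char)ucs_to_cp1251(u);
    }
    *out = '\0';
}
```

Read the file, run it through `cp1251_str_to_utf8` for display/editing, then through `utf8_str_to_cp1251` before writing back, and the stock UTF-8 fonts plus Font Awesome symbols should work without touching the font's .c file.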