Maybe an idea for a size optimization of the generated array (4 indexed colors):
For example, take an image size of 17x30, 18x30, or 19x30. The final array is always 150 bytes (5 bytes * 30, not counting the generated indexed colors). If I see it correctly, the algorithm does something like: 17 px / 4 px per byte = 4.25 bytes, rounded up to a 5-byte row width; the same happens with 18 and 19 px width.
Array excerpt from an 18x30 picture:
0xff, 0xe5, 0x56, 0xbf, 0xf0,
0xfe, 0x00, 0x00, 0x1b, 0xf0,
The lower 4 bits of the last byte in each row are never used here (0xf0). So an 18x30 image would normally take (18*30)/4 = 135 bytes, and 17x30 would take 127.5 bytes (rounded up to 128 bytes). Or am I missing something here? Is it possible to simply store the 2-bit pixel data in the bytes one after another, without the gap?