Hardware: nRF52840 DK, Sharp LS027 monochrome 400x240 display
Software/Tools: LVGL 8.2, Zephyr, nRF Connect SDK
I want to simulate an on-screen cursor controlled by an encoder (basically replacing the focused-object outline, but jumping between objects the way a cursor would). Since the objects aren't evenly spaced across the screen, and each animation callback only receives a single interpolated value, I can't infer the other coordinate inside the callback. I worked around this by creating two identical cursor objects directly overlapping, registering two animation callbacks (one for x, one for y), and updating that coordinate on both objects in each callback:
/* One callback per axis; each moves both overlapping cursor objects.
 * The var argument is ignored since the targets are globals. */
static void anim_cb_x(void* var, int32_t v) {
    (void)var;
    lv_obj_set_x(cursor_obj1, v);
    lv_obj_set_x(cursor_obj2, v);
}

static void anim_cb_y(void* var, int32_t v) {
    (void)var;
    lv_obj_set_y(cursor_obj1, v);
    lv_obj_set_y(cursor_obj2, v);
}
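For reference, I start the two animations along these lines whenever the encoder moves focus to a new object (the helper name, target coordinates, and 150 ms duration here are illustrative, not exact):

```c
/* Sketch: start one lv_anim per axis toward the newly focused object.
 * cursor_obj1 is passed as var only to satisfy the API; the callbacks
 * above ignore it and move both cursor objects via globals. */
static void animate_cursor_to(lv_coord_t target_x, lv_coord_t target_y)
{
    lv_anim_t ax;
    lv_anim_init(&ax);
    lv_anim_set_var(&ax, cursor_obj1);
    lv_anim_set_exec_cb(&ax, anim_cb_x);
    lv_anim_set_values(&ax, lv_obj_get_x(cursor_obj1), target_x);
    lv_anim_set_time(&ax, 150);   /* placeholder duration, ms */
    lv_anim_start(&ax);

    lv_anim_t ay;
    lv_anim_init(&ay);
    lv_anim_set_var(&ay, cursor_obj1);
    lv_anim_set_exec_cb(&ay, anim_cb_y);
    lv_anim_set_values(&ay, lv_obj_get_y(cursor_obj1), target_y);
    lv_anim_set_time(&ay, 150);
    lv_anim_start(&ay);
}
```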
It’s a hacky solution, though. Are there any alternative options? Is this a valid approach for my use-case or would it be too strenuous on the CPU?