How can we use QSPI-interface PSRAM to expand memory for image decoding and other memory-hungry applications?

Dear Sir:
Currently I am using a RISC-V chip with QSPI-interface PSRAM, and I can access it directly through memory mapping. What I want is to increase the available memory for the drawing canvas, the GIF decoder, the PNG decoder, etc.
How can I use it? When I assign it to something like the canvas buffer, it actually doesn't work. The PSRAM should behave like static memory, yet I see a lot of dynamic memory allocation/freeing in LVGL.
Please help me solve this. Can I use it as the LVGL drawing buffer?

Thanks

Hi,

For the canvas I don't know what the issue could be; it's probably something HW-specific. What exactly do you mean by "doesn't work"? The buffer of the canvas is really just a static memory block where LVGL writes some colors. There is no DMA or other magic here.
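For reference, a minimal sketch of the usual pattern (assuming the v7 API, with 30x30 chosen only as an example): a plain static array, sized with the LV_CANVAS_BUF_SIZE_TRUE_COLOR helper, is enough to back a canvas.

static lv_color_t cbuf[LV_CANVAS_BUF_SIZE_TRUE_COLOR(30, 30)];

lv_obj_t * canvas = lv_canvas_create(lv_scr_act(), NULL);
lv_canvas_set_buffer(canvas, cbuf, 30, 30, LV_IMG_CF_TRUE_COLOR);
lv_canvas_fill_bg(canvas, LV_COLOR_BLUE, LV_OPA_COVER);  /* LVGL simply writes colors into cbuf */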

For the draw buffer, using DMA in flush_cb might be an issue, as it's not certain that the PSRAM is DMA-able.
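If DMA turns out to be the problem, a flush_cb can always fall back to a plain CPU copy. A minimal sketch, assuming hypothetical my_lcd_set_window()/my_lcd_write_pixel() driver functions (not part of LVGL):

static void flush_cb_no_dma(lv_disp_drv_t * disp_drv, const lv_area_t * area, lv_color_t * color_p)
{
    my_lcd_set_window(area->x1, area->y1, area->x2, area->y2);   /* hypothetical driver call */

    for(int32_t y = area->y1; y <= area->y2; y++) {
        for(int32_t x = area->x1; x <= area->x2; x++) {
            my_lcd_write_pixel(*color_p);                        /* CPU copy, no DMA involved */
            color_p++;
        }
    }

    lv_disp_flush_ready(disp_drv);                               /* tell LVGL the area is flushed */
}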

LVGL's malloc/free also works on a static memory block in the background. You can set a custom malloc/free in lv_conf.h. In v8 there are some more options, e.g. you can also place LVGL's internal memory pool at a fixed address.
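A minimal sketch of the relevant lv_conf.h settings (option names as in the v7/v8 lv_conf.h templates); psram_alloc.h, psram_malloc/psram_free and the 0x60000000 base address are placeholders for whatever your SDK provides:

/* Option A (v7 and v8): route LVGL's allocator to your own functions */
#define LV_MEM_CUSTOM          1
#define LV_MEM_CUSTOM_INCLUDE  "psram_alloc.h"   /* hypothetical header */
#define LV_MEM_CUSTOM_ALLOC    psram_malloc      /* hypothetical PSRAM allocator */
#define LV_MEM_CUSTOM_FREE     psram_free

/* Option B (v8 only): keep the built-in allocator but place its pool at a
 * fixed address, e.g. the memory-mapped PSRAM (pick one option, not both) */
#define LV_MEM_CUSTOM  0
#define LV_MEM_SIZE    (512U * 1024U)
#define LV_MEM_ADR     0x60000000                /* placeholder PSRAM base address */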

Thanks for the reply. The PSRAM has DMA support, but since I need to use the memory mapping directly, the PSRAM is accessed through a virtual memory address, so it can't use DMA.

By the way, I found that on this chip, even when I use internal memory for the canvas, it still doesn't work, while drawing other stuff is no issue. As you said, the issue may come from the flush_cb.

"Doesn't work" means that even when I set the canvas buffer to something like 30*30, I still can't see the area on screen. I guess something is wrong with lv_img_set_src or lv_obj_invalidate(canvas).

thanks

Can you share a code snippet showing how you use the canvas widget?

It is very simple; below is the code:
lv_obj_t * canvas = lv_canvas_create(lv_scr_act(), NULL);
if(canvas == NULL) return NULL;
lv_obj_align(canvas, NULL, LV_ALIGN_CENTER, 0, 0);
lv_canvas_set_buffer(canvas, buf, 30, 30, LV_IMG_CF_TRUE_COLOR);

lv_canvas_fill_bg(canvas, LV_COLOR_BLUE, LV_OPA_COVER);

It is very weird. I traced the LVGL GIF code in v7.0, and it is very similar to the canvas code except for the decoder,
but it displays fine. Below is the code:
lv_obj_t * gif_file = lv_gif_create_from_file(lv_scr_act(), "F:test.gif");
lv_obj_set_size(gif_file, 360, 360);
lv_obj_set_pos(gif_file, 0, 0);

Please help with this.

How do you create buf?

uint8_t * buf = lv_mem_alloc(1800);
#if LV_VERSION_CHECK(6,0,0)
lv_mem_assert(buf);
#else
LV_ASSERT_MEM(buf);
#endif
if(buf == NULL) return NULL;
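(A sketch of deriving that size from the color format instead of hard-coding it; 1800 bytes only matches a 30x30 true-color buffer when LV_COLOR_DEPTH is 16, assuming the v7 LV_CANVAS_BUF_SIZE_TRUE_COLOR helper:)

uint8_t * buf = lv_mem_alloc(LV_CANVAS_BUF_SIZE_TRUE_COLOR(30, 30));  /* byte count for a 30x30 true-color canvas */
if(buf == NULL) return NULL;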

BTW, I added some logging in disp_flush:

static void disp_flush(lv_disp_drv_t * disp_drv, const lv_area_t * area, lv_color_t * color_p)
{
    printf("disp_flush: (X1=%-3d, Y1=%-3d), (X2=%-3d, Y2=%-3d) [W=%-3d, H=%-3d] Color=%04x @=> %p\n",
           area->x1, area->y1,
           area->x2, area->y2,
           area->x2 - area->x1 + 1, area->y2 - area->y1 + 1,
           color_p->full, (void *)color_p);

    lcd_drv_t * lcd_drv = (lcd_drv_t *)disp_drv->user_data;

It looks like the canvas code does not trigger it?

If it worked with LVGL's malloc and SRAM, I don't think it's a canvas invalidation or LVGL logic issue. :slightly_frowning_face:

However, with printfs you can trace whether lv_obj_invalidate is called when you do something with the canvas.
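A minimal sketch of how that trace could look, assuming the canvas code above and a hypothetical my_delay_ms() helper; note that the dirty areas are only redrawn and flushed from lv_task_handler(), so disp_flush will never print if that loop is not running:

lv_canvas_fill_bg(canvas, LV_COLOR_BLUE, LV_OPA_COVER);
printf("before manual invalidate\n");
lv_obj_invalidate(canvas);     /* mark the whole canvas area dirty by hand */
printf("after manual invalidate\n");

while(1) {
    lv_task_handler();         /* dirty areas are redrawn and flushed from here */
    my_delay_ms(5);            /* hypothetical delay helper */
}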