Touchscreen calibration and porting to version 8

Sorry, I have been sidetracked by other tasks recently. I’ll return to the screen calibration soon.

The OpenUI user interface management system (UIMS) was the first product from Open Software Associates, the very first dot-com-dot-au; I was its inventor and co-founder in 1990. Its basic toolkit used a virtual widget set (called the Equaliser) that mapped each widget to its native equivalent - on OSF/Motif, Windows (16-bit and NT), OS X, MacOS, and even character terminals. The Equaliser widgets lived in an abstract hierarchy completely defined by property values, so not just the layout definitions but the entire runtime state of a UI could be captured by enumerating the hierarchy and saving all non-default property values. The exact same UI could be reconstructed by building the same hierarchy and setting the same values, right down to the focus widget and cursor position - even on a different GUI environment from the one it was captured in. So I think you’ll see why I prefer that property values which have been set can be retrieved again!

The OpenUI Equaliser was deeply integrated into an O-O compiler-interpreter language (very like Java in design, but some years prior) that used message-passing to create point-to-point and publish-subscribe rich application messages to activate everything. Messages naturally “bubbled” up through the hierarchy until most of them would cross the network to a back-end server application, so everything from a mouse-motion to a “list all matching records” message was integrated into a single simple framework.

The largest application written in OPL (the OpenUI Programming Language) was the IDE, which directly manipulated live widgets in design mode. The language had full tracing and single-step or breakpoint debugging. The output of the OPL compiler was machine-independent “dictionary” files that were shipped and updated transparently to the client side for display, regardless of the target display hardware. The only proviso was that vector-based graphics were not attempted on character-mode terminals! Client companies, however, were known to have ported large applications (e.g. 600 screens in an IT help desk for a telco) from Windows NT to MacOS in a single morning - with a result that was ready for production… much to the amazement of everyone, including ourselves (even though that’s exactly what the product was designed to do). OpenUI was used to deliver world-first Internet share-trading and online banking at some of Europe’s largest banks, and was also used to build the new trading system deployed at NASDAQ after it combined with the London Stock Exchange to form the first intercontinental linkage of capital markets.

All in all, by 1992, OpenUI had delivered everything that Java was still promising in 2000, but did not actually deliver until almost 2010. Sadly the business was forced to pivot and the product was lost.

Now, back to LVGL, and what this means. I would like to build a markup language for LVGL, akin to OPL, or in your terms, to a merger of the capabilities of HTML, CSS and JS - but in a single unified language. To achieve this, I will need to define all the attributes of all objects, and ensure that the values are stable - when set, they stay set! Any dynamic computation is internal, applying to non-settable (perhaps even invisible?) internal attributes.

I have a syntax to propose for this markup language, but before I show you that, I’d like to hear your feedback so far.

I didn’t know about OpenUI before, but it sounds really great! A markup language for LVGL would be amazing. It’s not really important to make it HTML-like; it might even be better to have a custom language (especially at first) so we don’t tie ourselves to an essentially unrelated concept.

So if you already have some ideas about it, please open an issue on GitHub and let us know 🙂

Firstly, apologies for pushing this discussion thread into a set of generalised points about LVGL and get/set attribute semantics.

Secondly, sorry for not pushing my touchscreen calibration work further. It does actually work with LVGL v8 - which was the biggest part of the work - but it was my first LVGL project, and I have issues (see below) with the way the original code worked anyway. I have been working in many other areas and have not needed to complete it. You are welcome to contribute, of course - it would be good if someone published a standard way to save the calibration to NVS, for example.
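On the NVS point: the storage call itself is platform-specific, but the record format could be shared. Here is a minimal sketch of what I have in mind - every name in it (`calib_t`, `CALIB_MAGIC`, the helpers) is illustrative, not from any existing repo; only `nvs_set_blob()` is a real ESP-IDF function, referenced in a comment:

```c
#include <stddef.h>
#include <stdint.h>

/* Calibration coefficients, as produced by a calibration screen. */
typedef struct { float a, b, c, d, e, f; } calib_t;

#define CALIB_MAGIC 0x43414C31u   /* "CAL1" - bump if the layout changes */

/* Blob layout written to persistent storage, e.g. with nvs_set_blob()
 * on ESP-IDF, or any key-value store / EEPROM page elsewhere. */
typedef struct {
    uint32_t magic;
    calib_t  coeff;
    uint32_t checksum;
} calib_blob_t;

/* Cheap rolling checksum over the coefficient bytes - enough to catch
 * an erased or half-written record, not meant to be cryptographic. */
static uint32_t calib_checksum(const calib_t *c)
{
    const uint8_t *p = (const uint8_t *)c;
    uint32_t sum = 0;
    for (size_t i = 0; i < sizeof *c; i++) sum = sum * 31u + p[i];
    return sum;
}

static void calib_pack(const calib_t *c, calib_blob_t *b)
{
    b->magic = CALIB_MAGIC;
    b->coeff = *c;
    b->checksum = calib_checksum(c);
}

/* Validate a blob read back at boot; on any failure, fall back to
 * running the calibration screen again. */
static int calib_unpack(const calib_blob_t *b, calib_t *c)
{
    if (b->magic != CALIB_MAGIC) return -1;
    if (b->checksum != calib_checksum(&b->coeff)) return -1;
    *c = b->coeff;
    return 0;
}
```

At boot you would read the blob, call `calib_unpack()`, and only show the calibration screen when that fails - so a corrupted or missing record degrades gracefully.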

Next, the touchscreen drivers still have no standard way to access un-transformed coordinates, or to set a new transformation. Someone announced that they planned to investigate how to do that (so I didn’t commence it), but I haven’t heard anything more. Perhaps this is an area where you could do some investigation?

Finally, at least on the touchscreens I am using, the existing code is not nearly flexible enough. I fairly often get incorrect coordinates, so it should be possible to retry any calibration coordinate that doesn’t seem close enough - not just a single pass through with OK/Cancel at the end, but the ability to retry any coordinate. Given that calibration happens precisely when the current calibration is inaccurate or approximate, I’m uncertain whether, or to what degree, we can rely on OK/Cancel buttons, etc, given that the transformed coordinates will be off the mark. Any thoughts on that?
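To make the retry idea concrete, one possible acceptance rule is to require two consecutive raw samples of the same target to agree before trusting either of them. This is purely a sketch of mine, not code from any driver; the samples are injected from a buffer so the logic is self-contained, where a real driver would read them from the touch controller:

```c
#include <stdbool.h>
#include <stdlib.h>

typedef struct { int x, y; } raw_point_t;

/* Accept a calibration target only when two consecutive raw samples
 * agree within `tol` ADC counts; a sporadic outlier discards the pair
 * and the UI asks the user to tap the same target again, up to
 * `max_tries` pairs. */
static bool sample_target(const raw_point_t *samples, int *cursor,
                          int tol, int max_tries, raw_point_t *out)
{
    for (int i = 0; i < max_tries; i++) {
        raw_point_t a = samples[(*cursor)++];
        raw_point_t b = samples[(*cursor)++];
        if (abs(a.x - b.x) <= tol && abs(a.y - b.y) <= tol) {
            out->x = (a.x + b.x) / 2;   /* average the agreeing pair */
            out->y = (a.y + b.y) / 2;
            return true;
        }
        /* pair disagreed: re-prompt for this target and try again */
    }
    return false;   /* too noisy: restart calibration for this target */
}
```

The nice property is that the acceptance test runs entirely on raw coordinates, so it works even when the current transformation is badly wrong - which sidesteps the OK/Cancel problem for the per-point retry case.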

Quick bump on this thread…
@frankbuss is there a repo with the calibration code available? And the video seems like it’s gone - would love to see how it works…
@embeddedt was Frank’s code example ever added to the LVGL repos/docs?

I don’t believe the example ever ended up being added, unfortunately.

My version of the code is still available at the URL given earlier. It started as an update to the v8 API, and I then refactored out a large amount of duplication. But it still lacks the ability to get the raw numbers from the driver, which is necessary but out of scope for me; that needs to be addressed before the code can be properly finished. Based on my testing, it also needs the ability to retry, as my screens give sporadic bad locations and you currently only get one try at each corner.

I’d like to link my work here, hoping it might be useful. I wrote up this calibration screen for LVGL v8.3 which can be integrated into an existing application:
jakpaul/lvgl_touch_calibration

It contains a custom input device driver to get the raw panel coordinates from the touch controller and interfaces with non-volatile storage to save the calibration coefficients.

Very nice work @jakpaul! Thanks for your contribution, especially the recalibration after timeout - that issue had been worrying me. I’ll be happy to use it when I next get time to work on LVGL stuff.

@jakpaul this looks great!

I took a slightly different approach which yielded very good results - instead of creating a new callback for the calibration process, I just used a screen-sized object, captured the raw touch coordinates on each point click, and then at the end fed them through the algorithm.

Saved the output to EEPROM and read it on startup - works a charm
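For anyone following along, the "algorithm" step here is typically a three-point affine fit: solve for coefficients mapping raw controller readings to screen positions, which handles scaling, offset, axis swap and mirroring in one go. A self-contained sketch under that assumption - the names are mine, not from any posted repo:

```c
#include <math.h>

/* Affine map: screen_x = a*rx + b*ry + c, screen_y = d*rx + e*ry + f. */
typedef struct { float a, b, c, d, e, f; } calib_t;

/* Solve [x y 1]*(p q r)^T = s for three point pairs via Cramer's rule.
 * Returns -1 if the three points are collinear (pick better targets). */
static int solve3(const float x[3], const float y[3], const float s[3],
                  float *p, float *q, float *r)
{
    float det = x[0]*(y[1]-y[2]) - y[0]*(x[1]-x[2]) + (x[1]*y[2]-x[2]*y[1]);
    if (fabsf(det) < 1e-3f) return -1;
    *p = (s[0]*(y[1]-y[2]) - y[0]*(s[1]-s[2]) + (s[1]*y[2]-s[2]*y[1])) / det;
    *q = (x[0]*(s[1]-s[2]) - s[0]*(x[1]-x[2]) + (x[1]*s[2]-x[2]*s[1])) / det;
    *r = (x[0]*(y[1]*s[2]-y[2]*s[1]) - y[0]*(x[1]*s[2]-x[2]*s[1])
        + s[0]*(x[1]*y[2]-x[2]*y[1])) / det;
    return 0;
}

/* rx/ry: raw controller readings; sx/sy: on-screen target positions. */
static int calib_compute(const float rx[3], const float ry[3],
                         const float sx[3], const float sy[3], calib_t *c)
{
    if (solve3(rx, ry, sx, &c->a, &c->b, &c->c) != 0) return -1;
    return solve3(rx, ry, sy, &c->d, &c->e, &c->f);
}

/* Apply the fitted map to one raw reading. */
static void calib_apply(const calib_t *c, float rx, float ry,
                        float *x, float *y)
{
    *x = c->a * rx + c->b * ry + c->c;
    *y = c->d * rx + c->e * ry + c->f;
}
```

The six coefficients are exactly what ends up in EEPROM/NVS; three non-collinear targets (e.g. two corners and a midpoint of the opposite edge) fully determine them.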

Thank you @reso.

This was my first approach too, but I found the custom driver approach to be a bit more robust, since the output of my touch controller was outside the range of the screen object, so no touch would register on it.
I would have had to prescale the output first for this to work.