Intelligent touchscreen input processing / assistive technology

I am very excited to learn about your project, because it seems to address several of the things that frustrate me about current smartphones: lack of privacy, uncertainty about what the OS may be running in the background at a low level, limits on customizability, and so on. It also sounds like your platform might address another major issue with the current crop of phones that really drives me crazy: how poor a job they do of handling touchscreen/pointer input events.

I would really appreciate any suggestions on how I might examine whichever module in your system is responsible for this sort of processing (some part of the Wayland component, perhaps?) and try out some ideas for modifying it to improve its accuracy, i.e. to reduce the annoyingly high rate of false positives and false negatives for taps and drags that I currently have to contend with on my Android device. One idea would be heuristics that analyze the visible screen pixels and attempt to distinguish meaningfully clickable regions from filler, coupled with an augmented pointer settings dialog that exposes a more fine-grained set of parameters, so each user can tune the behaviour to their own patterns of touches (a rough sketch of the kind of thing I mean follows below).

I did not think it worth investing much energy into this within the existing smartphone ecosystems, given their many flaws, but your project looks like a promising foundation that could profitably be built upon, both in this respect and, ideally, to transcend the limitations of conventional smartphones with regard to assistive technology, such as full-fledged, purely offline voice control of every element of the UI.
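To make the tap/drag part of this more concrete, here is a very rough sketch in C of the sort of tunable classifier I have in mind. Nothing here refers to any real API in your stack (or in libinput/Wayland); the struct and function names are entirely made up for illustration. The point is simply that the thresholds could be exposed in a pointer settings dialog instead of being hard-coded, so a user could tune them to their own hands:

```c
/* Hypothetical sketch: a tap-vs-drag classifier whose thresholds are
 * user-tunable. Not based on any existing project code or API. */
#include <stdio.h>
#include <math.h>

/* Parameters an augmented pointer settings dialog could expose. */
struct touch_tuning {
    double max_tap_travel_px;   /* movement beyond this is no longer a tap */
    double max_tap_duration_ms; /* contact longer than this is not a tap   */
    double min_drag_travel_px;  /* movement below this is treated as jitter */
};

struct touch_sample {
    double x, y;   /* position in pixels       */
    double t_ms;   /* timestamp in milliseconds */
};

enum gesture { GESTURE_NONE, GESTURE_TAP, GESTURE_DRAG };

/* Classify one finger-down .. finger-up sequence of samples. */
static enum gesture classify(const struct touch_sample *s, int n,
                             const struct touch_tuning *tun)
{
    if (n < 1)
        return GESTURE_NONE;

    double max_travel = 0.0;
    for (int i = 1; i < n; i++) {
        double dx = s[i].x - s[0].x;
        double dy = s[i].y - s[0].y;
        double d = sqrt(dx * dx + dy * dy);
        if (d > max_travel)
            max_travel = d;
    }
    double duration = s[n - 1].t_ms - s[0].t_ms;

    if (max_travel <= tun->max_tap_travel_px &&
        duration <= tun->max_tap_duration_ms)
        return GESTURE_TAP;
    if (max_travel >= tun->min_drag_travel_px)
        return GESTURE_DRAG;
    return GESTURE_NONE; /* too much wobble for a tap, too little for a drag */
}

int main(void)
{
    struct touch_tuning tun = { 8.0, 250.0, 12.0 };
    struct touch_sample wobbly_tap[] = {
        { 100, 200,   0 }, { 103, 202,  60 }, { 101, 199, 120 },
    };
    struct touch_sample slow_drag[] = {
        { 100, 200,   0 }, { 130, 200, 200 }, { 180, 200, 400 },
    };

    printf("wobbly tap -> %d\n", classify(wobbly_tap, 3, &tun)); /* GESTURE_TAP  */
    printf("slow drag  -> %d\n", classify(slow_drag, 3, &tun));  /* GESTURE_DRAG */
    return 0;
}
```

The pixel-analysis idea (distinguishing clickable regions from filler) would obviously be a much bigger undertaking and would have to live somewhere with access to the composited frame, which is part of why I am asking where in your architecture this kind of processing belongs.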

thanks in advance for any insights you all may wish to share

— Joseph