Is there a service that I can start if I want my Librem 5 to automatically upload core dumps to some portal where developers can see the frequencies of the most common problems? This functionality exists in iOS and Android, and I think it would be really useful information for developers. I previously knew of a Linux project that did this (many years ago), but now when searching for it I couldn't find anything. Does anyone know how to accomplish this?
It would be great if this could be enabled via an option in the settings app under Privacy. The default should of course be not to share data.
uploading the bare minimum (name of running program, type of failure e.g. SEGV, address of failed instruction, where relevant the address that it failed to access and the type of access)
uploading some kind of more comprehensive trace (the previous + name of all loaded libraries, full call stack)
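The bare-minimum option could be sketched as a small report payload. A hypothetical Python sketch (the field names and function are illustrative, not any real crash-reporting API):

```python
import json

def minimal_crash_report(program, signal_name, fault_addr,
                         access_addr=None, access_type=None):
    """Build the bare-minimum report described above: program name,
    failure type, and faulting addresses. All field names are made up
    for illustration; a real reporter would follow whatever schema the
    upload portal defines."""
    report = {
        "program": program,              # name of the running program
        "signal": signal_name,           # type of failure, e.g. "SIGSEGV"
        "instruction_address": fault_addr,
    }
    if access_addr is not None:
        # where relevant: the address that failed to be accessed
        report["access_address"] = access_addr
        report["access_type"] = access_type  # "read" or "write"
    return json.dumps(report)
```

Note how little is in there: no memory contents, no variable values, so the privacy exposure is far smaller than with a full core dump.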
The challenge is in getting informed consent. Emphasis on informed. It is difficult to say what the privacy exposure is without tying down the details. For example, if the mail client craps out, is it acceptable if pieces of or all of an email leak out in whatever is uploaded? That could be the consequence of supplying a core dump.
Yes, a core dump is probably too much; I was actually thinking more of a stack trace. A stack trace usually helps a lot in finding the problem, and it does not contain any private information, as long as variables on the stack are not included.
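To make the distinction concrete, here is a small Python sketch: the standard `traceback` module can format a trace that names files, functions and lines, while local variable values never appear in the output (the `capture_locals=False` default):

```python
import traceback

def frames_only(exc):
    """Format a stack trace with file/function/line information but
    no variable values -- roughly the privacy-friendly trace discussed
    above."""
    tb = traceback.TracebackException.from_exception(exc, capture_locals=False)
    return "".join(tb.format())

secret = "my private email text"  # stands in for sensitive user data
try:
    1 / 0
except ZeroDivisionError as e:
    trace = frames_only(e)

# The trace names the failure and the call site, but the local
# variable's value never appears in it.
assert "ZeroDivisionError" in trace
assert "my private email" not in trace
```

The same split exists at the native level: a symbolized backtrace from a core dump reveals code locations, whereas the dump itself also carries the process's memory contents.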
The thing about core dumps is that no one actually reads most of the data. Techs would go right to the memory address shown by the stack trace and read the nearby hex. If any data is there, there isn't much in the right-hand ASCII column of a screen. Maybe just 12 or 16 chars wide on a few lines, mostly nulls? I forget.
Most of the time a system would report the program trying to access an "illegal address" due to a file conflict. The techs would not actually fix things, just report and tell the customer "don't do that again". E.g. performing an online backup during the busiest hours could do that. (File locking and flag setting isn't always perfect or timely.) Then there was the dreaded file "deadlock": two programs each waiting for an "I'm no longer busy with that file" flag from the other that never comes.
Then again if an error becomes too frequent from too many customers, then it gets finally tossed over the wall to the programmers for a real fix.
Do you mean that no one would use this data? I've worked on other mobile OSes for 10 years, and have probably looked at thousands of stack traces of crashes. I at least find them very important and useful. A core dump always shows that there is a problem, but it may not always reveal what the problem is.
The original question was whether this exists for Linux. I think it should, and that it would help developers (at least those who write programs in languages that can crash).
… it also becomes easier to fix. The absolute programmer’s nightmare bugs are the ones that only occur, say, once a month, with one customer somewhere in the world, and a different customer every time.
The intent of the core dump, approximately, is that it captures the exact state of the process at the time of the failure. That may or may not be sufficient to establish the state history that led up to that state - and hence to find the bug.
There are tools available to extract useful information from core dumps. I doubt anyone actually directly reads the core dump data.
I assume that it still does. I would guess that Unix had core dumps from the very early days - given that not too many computers have “core” these days.
The actual original question was about a mechanism to make information about failures automatically available to the developer (let’s say Purism), taking into account the practicalities of running that mechanism, whether anyone would do anything with the collected information (sufficient developer resources available), and the obvious privacy challenges for the customer.
Even if limited information is uploaded, a sudden spike of failures at the same place / in the same program could be a statistical red flag, and useful information.
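As an illustration of that statistical angle, here is a hypothetical Python sketch that aggregates the minimal reports by (program, crash address) signature and flags any signature that suddenly dominates (the threshold and the pair format are assumptions, not any existing tool):

```python
from collections import Counter

def spike_signatures(reports, threshold=10):
    """Given minimal reports as (program, crash_address) pairs, flag
    signatures whose count reaches a threshold -- the 'statistical red
    flag' that works even when only limited information is uploaded."""
    counts = Counter(reports)
    return [sig for sig, n in counts.items() if n >= threshold]

# 12 identical crashes in one program vs. 3 scattered ones elsewhere:
reports = [("mailer", "0x4242")] * 12 + [("browser", "0x1111")] * 3
spike_signatures(reports)  # returns [('mailer', '0x4242')]
```

Even with nothing but a program name and an address per report, a spike like this tells developers exactly where to start looking.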
Ubuntu already operates something along the lines of the third option in my first post above. If I have time to scrutinise the information that will be uploaded, and consider the implications, then I will usually let Ubuntu send the report, particularly if it’s the first occurrence.
For example: if Firefox craps out on a web page that is strictly internal (perhaps some internal web application), then there are more significant privacy and security implications and it is less likely that the information will be useful to Firefox developers, so maybe don't send. Versus: Firefox craps out on a completely public, mainstream and stable web page (less severe privacy implications, more likely to be useful).
While I’m not a big Ubuntu fan, I also wanted to point in that direction. And to their credit, it’s never automatic. You’re asked whether you want to submit it. And I think it even checks a database on whether that particular crash is already known. Here’s some info on how to extract info from such a report and what tool it uses (apt-ly named apport).
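The "is this crash already known?" check boils down to computing a stable signature for each crash and looking it up in a database. Here's a hypothetical Python sketch of the idea (this is not apport's actual algorithm, just an illustration of how a dedup key could be derived from the program name and the top stack frames):

```python
import hashlib

def crash_signature(program, stack_functions, depth=5):
    """Hypothetical dedup key: hash the program name plus the top few
    stack frames. Two crashes with the same signature are treated as
    the same known bug, so the report need not be uploaded again."""
    key = program + ":" + ">".join(stack_functions[:depth])
    return hashlib.sha256(key.encode()).hexdigest()[:16]

# Same program, same top frames -> same signature, i.e. a known crash.
crash_signature("firefox", ["js_interp", "run_script", "main"])
```

Keying on only the top few frames makes the signature robust to unrelated differences deeper in the stack, at the cost of occasionally lumping distinct bugs together.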