… it also becomes easier to fix. The absolute nightmare bugs for a programmer are the ones that occur only, say, once a month, with one customer somewhere in the world, and a different customer every time.
The intent of a core dump is, approximately, to capture the exact state of the process at the time of the failure. That may or may not be sufficient to establish the state history that led up to that state - and hence to find the bug.
There are tools available to extract useful information from core dumps - gdb being the obvious one. I doubt anyone actually reads the raw core dump data directly.
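To make that concrete, here's a minimal sketch (assuming a Linux box with core dumps enabled via `ulimit -c unlimited`; the commands in the comments are illustrative, not any particular distro's setup) of deliberately crashing a process and then reading the core back symbolically:

```python
# Deliberately segfault so the kernel writes a core dump.
import ctypes

# Reading address 0 dereferences a null pointer inside the
# interpreter; the kernel kills the process with SIGSEGV and,
# if core dumps are enabled, writes out the process state.
ctypes.string_at(0)

# Inspecting it afterwards (outside this script):
#   coredumpctl gdb            # on systemd-coredump systems, or
#   gdb $(which python3) core  # classic core file in the cwd
#   (gdb) bt                   # symbolic backtrace of the crash
```

The point being that gdb turns the opaque memory image into a symbolic backtrace; no human ever stares at the raw bytes.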
I assume that it still does. I would guess that Unix has had core dumps from the very early days - the name itself refers to (magnetic) core memory, which not too many computers have had for decades.
The actual original question was about a mechanism to make information about failures automatically available to the developer (let's say Purism), taking into account the practicalities of running that mechanism, whether anyone would do anything with the collected information (i.e. whether sufficient developer resources are available), and the obvious privacy challenges for the customer.
Even if only limited information is uploaded, a sudden spike of failures at the same place / in the same program could be a statistical red flag, and useful information in its own right.
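As a sketch of the kind of server-side check that would surface such a spike (everything here is hypothetical - a made-up report format, not any actual Purism or Ubuntu infrastructure), assuming each uploaded report carries only a crash signature such as program name plus top stack frame:

```python
from collections import Counter

def spike_signatures(reports, baseline, factor=5.0, min_count=10):
    """Flag crash signatures whose count in the current window is
    well above their historical baseline rate."""
    current = Counter(r["signature"] for r in reports)
    flagged = []
    for sig, n in current.items():
        expected = baseline.get(sig, 1)  # smooth never-seen signatures
        if n >= min_count and n > factor * expected:
            flagged.append((sig, n, expected))
    return flagged

# Hypothetical usage: last hour's reports vs. the hourly average
# over the previous week.
reports = [{"signature": "firefox!js::gc::Cell::zone"}] * 40
baseline = {"firefox!js::gc::Cell::zone": 3}
print(spike_signatures(reports, baseline))  # -> flags the spike
```

Note that nothing privacy-sensitive is needed for this: the signature alone, aggregated over many users, is enough to see that something just broke.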
Ubuntu already operates something along the lines of the third option in my first post above (the apport crash reporter). If I have time to scrutinise the information that will be uploaded, and consider the implications, then I will usually let Ubuntu send the report, particularly if it's the first occurrence.
For example: if Firefox craps out on a web page that is strictly internal (perhaps some internal web application), the privacy and security implications are more significant and the information is less likely to be useful to Firefox developers - so maybe don't send. Whereas if Firefox craps out on a completely public, mainstream and stable web page, the privacy implications are less severe and the report is more likely to be useful - so send.