So is Purism actually going to address the refund issues, or is this company a flat-out Ponzi scheme?
Are they keeping the money?
I have talked with at least 4 people waiting for refunds after making a refund request…
My personal problem was detailed in one of my last posts.
Currently I have an appointment with a lawyer to see what I can do, because it has been 6 months since my refund request and it’s radio silence…
I just don’t get how they expect us to be considerate…
Basically they are taking clients’ money as 0%-interest loans to cover their asses and refusing to pay it back, so by the time we get it, it will have 15-20% less value due to inflation.
This is some scummy behaviour.
I don’t see any effort on their part to solve this; they just go radio silent with no explanation while they do what they want and keep taking orders, and that’s what makes me mad.
And yeah, I’m sure their “financial team” is already enjoying their holidays as they please.
They haven’t once announced that they are having financial difficulties. Look at their puri.sm site: there is nothing of the kind. So they have money but refuse to refund their customers. Big fraudsters!
Privacy and security have nothing to do with refunds. I probably would not trust Purism to give me a refund, but I do trust them with my privacy and security. Also, you don’t have to trust here, you can (and should) verify.
Literally no single human can read/understand/verify at least 29 million lines of code (i.e. the Linux kernel + systemd). Which means if you’re using a Linux distribution (or GNU/Linux for those fussy about the term), you’re guaranteed to be trusting the organization providing the OS. If you can’t trust a company in terms of finances, why would you trust them with your privacy and security?
Nobody has to read all GNU/Linux source code. One person only reads what they can and then relies on the community to read the rest. Any single suspicious line in the code would make a lot of noise on the Internet, and the person who found it would become famous. In addition, reproducible builds help ensure that the executable you have comes from the known source code.
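The reproducible-builds idea boils down to a checksum comparison: if a build is reproducible, your own rebuild from the published source must be bit-for-bit identical to the distributed binary. A minimal sketch, with illustrative stand-in files (in practice you would compare a downloaded package against one you rebuilt yourself):

```shell
# Stand-ins for the vendor's artifact and your own rebuild from source.
# With a truly reproducible build, the two files would be byte-identical.
printf 'same-bytes' > vendor.bin
printf 'same-bytes' > rebuilt.bin

vendor_hash=$(sha256sum vendor.bin | awk '{print $1}')
local_hash=$(sha256sum rebuilt.bin | awk '{print $1}')

# A matching hash ties the binary back to the source everyone can audit.
if [ "$vendor_hash" = "$local_hash" ]; then
    echo "MATCH: binary corresponds to the audited source"
else
    echo "MISMATCH: do not trust this binary without investigation"
fi
```

The point is that you never have to trust the build server: anyone with the source can repeat the comparison independently.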
If you want to go further, consider using Qubes OS, whose security relies perhaps on 100 thousand LoC, not tens of millions.
Qubes OS is cool, but it’s still running VMs on top of it, and those VMs run OSes with tens of millions of LoC. So you’re going to be trusting things you cannot personally verify, even if you’re using Qubes. Getting back to the original topic of trusting Purism, however, the idea that you can trust a company in terms of privacy/security when they have consistently broken trust in other areas is…at the very best, strange.
It doesn’t run VMs “on top”. It relies on hardware virtualization to isolate different VMs from each other. Escapes from that are practically impossible. You keep important stuff in offline minimal VMs and never even run anything in them.
They seemingly have problems with funds. However, they never broke any trust in their hardware and software, which is, in addition, verifiable.
I’m a systems engineer who creates and manages virtual servers in corporate environments. I’m quite aware of how virtualization technology works. Regardless of what terminology you use regarding whether the VM runs “on top” or with “hardware virtualization,” the OS running in the VM has millions of LoC. It may indeed be completely and perfectly isolated from everything else on the computer, but if you operate within that VM that’s running an OS that has tens of millions of LoC, you are trusting the people who coded the OS, because you have not personally read/understood/verified everything happening within that OS.
First, as I noted above, you are not trusting just the developers, but the whole community of users who read the code, if you rely on the reproducible builds.
Second, Qubes relies on security through compartmentalization. You can use different operating systems in different VMs, so, if you isolate your workflows into domains, in order to break your security, an adversary must break them all. You ultimately rely on the security of dom0, i.e., Xen with much less code, to make sure everything cannot get broken at the same time. Security is never perfect, but it can be reasonably good.
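To make the compartmentalization point concrete, here is a sketch of how a single isolated domain is set up from dom0 in Qubes 4.x. The qube name, template, and rules are illustrative examples, not recommendations (this is not runnable outside an actual Qubes installation):

```shell
# Create a dedicated AppVM for one workflow, e.g. banking only.
# "fedora-39" is just an example template; any installed template works.
qvm-create --class AppVM --template fedora-39 --label red banking

# Route its traffic through the firewall qube.
qvm-prefs banking netvm sys-firewall

# Tighten the qube's firewall: allow outbound HTTPS, drop everything else.
qvm-firewall banking add accept proto=tcp dstports=443
qvm-firewall banking add drop
```

A compromise of the browser inside “banking” stays inside that VM; to reach your other domains, an adversary would additionally have to break the Xen-based isolation enforced by dom0.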
As I mentioned before, I think Qubes OS is cool, although I haven’t (yet?) switched to using it. That said, even with compartmentalization, when I open a VM that I only use for banking, and I enter my banking credentials into the bank website to log in, I’m trusting that the OS from which I’m working is not doing something I’m not aware of (e.g. communicating something to somewhere else on the Internet). You can take steps toward better security practices (and we should), but the free-software talking point that “you can verify everything yourself!” just isn’t true. That said, we don’t have a whole lot of better options given how reliant our world has become on complicated technology, so I’m still pro-FOSS.
I would agree that you’re trusting both the people who wrote the code and the unknown people who reviewed it. You’re trusting that they’ll report their findings instead of keeping them to themselves (every clandestine agency ever). You’re also trusting that a large number of people are actually reading the code, rather than everyone assuming someone else is doing the reading. I’m not sure which of the two scenarios applies to log4j, but you can see from that how open source is not a magic bullet and you’re still putting trust in others.
The point that everyone has access is important because it allows for review and contribution; however “allowing for” does not mean it is feasible for the individual nor that it is actually happening with the collective.
This is in principle true, except you can also use the firewall, which is isolated and, potentially, relies on an entirely different OS, to improve your security further.
This is absolutely not what I am talking about. I say that when a lot of people read the code, any suspicious line in it would attract a lot of attention, leading to a very big incentive to reveal a backdoor. This is not a perfect scheme but it most definitely improves the security and should not be forgotten.
Specific examples do not prove (or disprove) this theory. Of course some lines can stay hidden inside the millions of LoC, even if they are accessible to everyone. However, in addition to that, if you care about security, you can make a crowd-funded, independent security audit.
So you don’t have to trust, you can and should verify; but not everyone is capable of that, and things slip through, so you can crowd-fund an independent security audit. Except you don’t actually know it is independent, since those “independent” people may have already reviewed the code, found something, and chosen not to disclose it. And on top of that, you would be trusting the people performing the audit, so we’re back to the original point of having to trust.
On top of that, in the case of PureOS you have to trust that Purism is making the correct choices when merging/not merging requests.
With all of that said, and to circle back a bit closer to the original topic,
You don’t have to run their software, and you can review the hardware more easily than with the majority of their competitors, so you don’t have to trust Purism for privacy/security; you only have to trust that they will provide goods for funds, which they do seem to eventually accomplish. If your situation doesn’t permit an irregular and unknowable timeline, then I wouldn’t recommend any Purism product currently. Maybe in the future they’ll get ahead of their fulfillment issues, but for now that is an issue they are struggling with. And communication is not their strong suit.
Yes, we do indeed need some trust in the end, but it’s an entirely different situation when you are trusting a large number of independent entities and random people, instead of trusting a single company (as is typical). And you can verify to some degree.