This has been a busy week for security news, but perhaps the most significant security and privacy story to break this week (if not this year) is about how NSO Group’s Pegasus spyware has been used by a number of governments to infect and spy on journalists, activists, and even heads of state by sending an invisible, silent attack to their iPhones that requires no user interaction. This attack works even on new, fully-patched phones, and once a phone is compromised, the attacker has full remote control over it, including access to the file system, location, microphone, and cameras.
What’s particularly scary about spyware in general, and is true for Pegasus as well, is that victims have no indication they’ve been compromised. Because the iPhone is so locked down from the end user’s perspective, detecting Pegasus in particular requires expert forensics techniques. This has left many at-risk iPhone users wondering whether they too are compromised and, if so, what they can do about it.
The infosec industry is prone to ambulance chasing. After every major security incident, you can count on your inbox filling with emails from vendors claiming they could have stopped it. As a result, I typically wait weeks, if not months, after a security incident before publishing my thoughts, so I can avoid even the appearance of ambulance chasing.
However, we have had customers ask us about this incident and whether our hardware would be vulnerable, so instead of writing a lot of individual replies, I figured it was better to go ahead and publish something on how we approach defending against spyware in general. Even though Pegasus doesn’t work on our products, our defense would apply to it and any other spyware that was ported to our platform.
Read the rest of the article here:
There’s a broken link in your article (“According to one research paper”). I understand that that’s not your fault and that there is not much you can do about their (possibly) misconfigured web server, unless you can find an alternative URL for that research paper.
You can work around it using the Internet Archive’s Wayback Machine:
Odd, it was working when I published the article (and I linked to it in the Snitch article I reference too). A 403 seems to indicate they decided to take it private.
Or they just stuffed up.
Adding: If this was intentional then it does raise questions as to whether any kind of “influence” was exerted.
The coverage on the netsec side of the house is really strong, and I think that the L5’s traits serve it well there; but how about Mallory when it comes to file-system-level changes? What recommendations would you have there? For example, is running SELinux or AppArmor advised in this scenario? Can you include your thoughts there?
Love the OpenSnitch inclusion, btw.
The classic Linux file integrity tools (Tripwire, OSSEC, and friends) can help with file-system-level changes, if you wanted to go that route. Alternatively, you end up wrapping things in sandboxes (like bubblewrap, either with or without Flatpak) and hoping the malware can’t break out.
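To make the file-integrity idea concrete, here’s a minimal sketch of the core thing Tripwire/OSSEC do (this is my own illustration, not code from those tools, and the paths are throwaway examples): hash the files you care about, store a baseline, and later verify nothing changed.

```shell
#!/bin/sh
# Minimal file-integrity sketch: baseline hashes, then detect changes.
# Real tools add tamper-resistant baseline storage, scheduling, and alerting.

watch_dir=$(mktemp -d)                 # stand-in for a directory you monitor
echo "config-v1" > "$watch_dir/app.conf"

baseline="$watch_dir/baseline.sha256"
sha256sum "$watch_dir/app.conf" > "$baseline"      # record the baseline

# While the file is unchanged, verification passes:
sha256sum --check --quiet "$baseline" && echo "intact"

echo "tampered" >> "$watch_dir/app.conf"           # simulate a malicious change

# Now verification fails, flagging the file-system-level change:
sha256sum --check --quiet "$baseline" >/dev/null 2>&1 \
  || echo "modified files detected"
```

One caveat: malware with root can simply regenerate the hashes, which is why the real tools go to lengths to protect the baseline itself (signing it, keeping it offline, etc.).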
AppArmor, like bubblewrap, can help restrict what malware can do once it gets local privileges (assuming the malware can’t escape), but a lot of that comes down to the quality of the rules you put in place to restrict a particular app. Of course, some apps are riskier than others in this regard (web browsers, email clients, and other things that process files from the outside), and you could focus more on those and tighten up their rules.
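For what it’s worth, this is the general shape of an AppArmor profile for a hypothetical document viewer (the binary name and paths below are made up for illustration; real profiles for browsers or mail clients are much longer and fiddlier):

```
# /etc/apparmor.d/usr.bin.example-viewer -- illustrative only; the binary
# name and paths are hypothetical
#include <tunables/global>

/usr/bin/example-viewer {
  #include <abstractions/base>

  # Read-only access to the app's own data files
  /usr/share/example-viewer/** r,

  # Only allow reading documents from one directory the user controls
  owner @{HOME}/Documents/** r,

  # No network access at all
  deny network,
}
```

The quality point above is exactly this: the profile is only as good as how narrowly you can scope those path and network rules without breaking the app.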
I’ve been working on a side project to create a set of bubblewrap wrappers for some of those common apps, with the idea of providing both a sandboxed and a “disposable” version of each app. https://source.puri.sm/pureos/packages/sandbox-apps is still somewhat beta, but I hope to flesh it out more soon.
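To sketch what I mean by a “disposable” version (this is an illustration of the idea, not the actual sandbox-apps code), a wrapper can give the app a throwaway home directory and no network, so nothing it writes survives the session:

```shell
#!/bin/sh
# Hypothetical "disposable" wrapper; "evince" is just an example app to wrap.
tmp_home=$(mktemp -d)                  # throwaway home, removed on exit
trap 'rm -rf "$tmp_home"' EXIT

if command -v bwrap >/dev/null 2>&1; then
  # Read-only view of the real system, private /tmp, fresh $HOME, no network
  bwrap --ro-bind / / \
        --dev /dev --proc /proc \
        --tmpfs /tmp \
        --bind "$tmp_home" "$HOME" \
        --unshare-net \
        evince "$@" \
    || echo "app exited with an error (or could not start in the sandbox)"
else
  echo "bwrap not installed; skipping sandbox"
fi
```

Anything the app saves lands in the throwaway home and is deleted when the wrapper exits, which is handy for opening files you don’t trust.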
Exactly my thoughts. Have they been tested on the L5? I imagine they wouldn’t be too resource intensive…