Last night I was at the NY Philharmonic and I noticed some really strange behavior with my Calyx de-googled device. I assume that it was because of the lighting in the hall, but I don’t know and would be interested in comments from people. Once I had logged on, VLC suddenly opened and then the device logged itself out and I had to log in again. This repeated several times. The only thing that I could think of is that because the device can respond to ambient light to change the brightness of the screen, the lighting of the hall was ‘confusing’ it. If true, it might be a way of hacking a device.
Sure, a camera or any other kind of light sensor creates a form of input, and as such could in principle be used to hack the device. It all depends on what the operating system is doing with that input.
For example, one can imagine a device that continuously looks for QR codes, because someone thought it would be more convenient for the user not to need to open a specific app for that. The user would just point the camera at a QR code at any time, and the device would process it automatically and try to do something useful with it. A hacker could then exploit a bug in that processing simply by holding a specially crafted QR code in front of the camera.
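A hypothetical sketch of what such an always-on scanner's dispatch logic might look like — every name here is invented for illustration, not taken from any real camera app:

```python
# Invented example of an always-on QR scanner that dispatches decoded
# payloads by URI scheme without asking the user first.
from urllib.parse import urlparse

def handle_qr_payload(payload: str) -> str:
    """Naively dispatch a decoded QR payload by its URI scheme."""
    scheme = urlparse(payload).scheme
    if scheme in ("http", "https"):
        return f"open-browser:{payload}"   # opens without confirmation
    if scheme == "sms":
        return f"compose-sms:{payload}"    # dangerous if automatic
    return "ignored"

# A hostile QR code only needs the device to act on it automatically:
print(handle_qr_payload("sms:+15550100?body=spam"))
```

The danger is not the camera itself but the automatic dispatch: every scheme handled without confirmation is attack surface.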
Oh, shame on you for trying to use your phone at a classical concert - one of the few places where this kind of protection from technology seems prudent (and their web page clearly says your phone should be turned off). Yes, this is logically the place to do field testing of new high-tech phone jamming/hacking. They even have history with this (you could have ended up in the national news - search or see YT). I suppose the tuxedos of classical lovers will need to be searched soon, just to be sure nothing disturbs the atmosphere - old music requires an old-timey experience (only telegram [no capital T] messages are allowed).
Seriously though, the idea is interesting. Since there’s no notification of such measures against phones, I doubt this was it, but it’s not out of the realm of possibility. The QR automation is one avenue. I’ve also read that a similar threat was researched for DNA analyzers - certain crafted samples could be used to attack the machine (this was some years ago).
But looking at just this concert scenario, the aim would probably need to be twofold: prevent calls (was the signal blocked?) and prevent photos/recording (and flash). Maybe even threefold: the annoying flicker of screens too. That would be a seriously heavy tech solution just to prevent those things (even for music lovers) - at least here it would be illegal, since such measures would prevent communication, including calls to emergency numbers.
And going back toward the even less likely (for now at least - who knows about the future): if someone actually had found a way to flicker light at specific intervals and/or colors to create codes that affect mobile devices (there are already data-transmission specs using such a method), it would be… a superpower? As in, the power to prevent the use of all devices. Or even worse, prevent the use of certain devices: “pay 9.99/month to not get interrupted”. And worse still: “pay 29.99/month and we won’t force you to see ads”. Luckily, with Linux we could at least control how our devices react to such input.
(Sidenote: don’t we have the general security tag [area] anymore?)
We still have: Privacy & Security - Purism community (General / Privacy & Security)
It was before the concert and during the bows when devices are permitted. I imagine that I would have seen the issue during the concert had I tried to use the device, but I was enjoying the performance.
This reminds me of an episode of “Person of Interest”.
Don’t forget static devices also!
With the caveat that I have exactly zero knowledge of that environment …
What applications would ordinarily start at boot-up? At login? Were you booting the device or just logging in? Or just taking the device out of “flight mode”?
What file types (media types), if any, are associated with VLC?
Are you sure you weren’t doing some kind of “butt dial”? (or whatever that accident might be called in your locale)
It might be some kind of hack anyway i.e. independent of any speculative explanation about getting confused by the lighting.
My device is set to lock the screen after 30 seconds. I was unlocking the device in order to look at Twitter/X while waiting for the performance to begin. The funky behavior began as soon as I unlocked the device: VLC opened and started scanning for music and videos, and then the device would lock itself. I repeated the process several times, closing out of VLC each time, and pretty much the same behavior resulted.
When I went outside the theater the device behaved as normal.
Reflecting on it, I recalled that I have adaptive brightness and this suggested to me some sort of ‘buffer overflow’ of that code or something substantially similar, hence the question about hacking.
In this case, I think Hanlon’s Razor says that it’s a bug, not a hack - but see the caveat in my previous reply.
If it’s reproducible, you might be able to get to the bottom of it.
If it were a hack, then that is quite a good place for en masse hacking, i.e. a large public gathering with lots of phones in range.
I agree that this was not a hacking attempt, but the point of my question was: having seen how the light was able to disrupt my device, and given that almost all mobile devices have cameras, could this be a way to hack devices en masse?
The microphone/camera hardware kill switch on my Librem 5 USA is enough for me.
If it’s reproducible, you might be able to get to the bottom of it. Only then would you be able to assess what security exposure, if any, there is.
I took the implication as that it was using the ambient light sensor, not the camera. So you would need to use all three kill switches, in order to disable all the sensors as well.
Sure, Lockdown Mode is the default state most of the time.
I suppose I have to go to another performance!
Hacking a phone via the camera (or ambient light sensor) is not easy and, in fact, makes little sense.
What kind of data can enter through the camera? RGBA data - that’s it (including some wavelengths invisible to humans). The volume of data is actually much larger than what a keyboard, mouse, or any similar device feeds into a system, but software still has to interpret that input.
The L5 interprets the ambient light sensor only as “increase/decrease backlight”. So all you can “hack” is making the display darker or lighter. There is no other way to exploit it. Even if you encoded a message in the light (Morse code, say), nothing would interpret it as a command to do harm. The same goes for cameras: as long as there is no code on the phone that treats them like an input device such as a keyboard, no color signal in the world could hack your device.
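To illustrate that point, here is a toy sketch (made-up numbers and mapping, not the actual L5 driver code): if the only thing the sensor reading ever becomes is a clamped brightness level, then no light pattern can express anything beyond "brighter" or "darker".

```python
# Toy sketch (invented numbers, not real driver code): the ambient light
# reading is mapped to a clamped backlight level and nothing else, so the
# sensor input cannot carry commands.
def brightness_from_lux(lux: float, max_brightness: int = 255) -> int:
    level = int(lux / 1000 * max_brightness)    # crude linear mapping
    return max(10, min(level, max_brightness))  # clamp to a safe range

print(brightness_from_lux(50))       # dim concert hall
print(brightness_from_lux(100000))   # direct sunlight, still clamped
```

Whatever pattern a hostile light source flickers at the sensor, the output stays inside the clamped range.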
So a hacker would already need access to your phone in order to hack it via light signals. But if they already have access, why would they want a second way in through such a difficult method as using the camera as an input device? It’s like already having a plane ticket between America and Europe but paddling a boat across the ocean. Nobody would even try.
There is just one thing left: apps that interpret QR codes. If they do things that people cannot control, a hacker could attack your phone via a QR code (and similar mechanisms). In that case I would say the QR app itself is already a danger.
But on the L5 the camera app is dumb. It can interpret QR code data, but the set of possible actions is very limited. So even here, the biggest harm would be that you follow a link to a malicious web page. But that’s no different from receiving a malicious e-mail and actively following the link it contains.
The most you could realistically “hack” is the quality of a photo, if you can confuse the phone’s auto settings for focus, white balance, and so on.
I think the theoretical bug here would be: the light triggers an existing bug in the code and the bug would have to be serious enough that it can be exploited.
Whether such a bug could allow “hacker input” is even more speculative.
I have no idea exactly how such an exploit might work and whether it is even possible. I do know however that, in the past, rather far-fetched and unusual, even extreme, ideas have been turned into viable exploits.
Just as a wildly speculative idea … suppose a camera app scans for QR codes without user interaction, and suppose the user can pre-configure which schemes are acceptable in a QR code (e.g. http(s): might be configured as OK but sms: as unacceptable). Suppose that, using some weird light-modulation technique, a QR code is able to trigger a time-of-check vs. time-of-use bug.
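A toy illustration of that bug class - all names are invented, and the race window is collapsed into a deterministic second read for clarity. The scheme is validated at the time of check, but the payload is fetched again at the time of use:

```python
# Invented TOCTOU sketch: the payload is read twice - once for the scheme
# check, once for the actual dispatch - so it can change in between.
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def dispatch(get_payload) -> str:
    if urlparse(get_payload()).scheme not in ALLOWED_SCHEMES:  # time of check
        return "rejected"
    return f"execute:{get_payload()}"                          # time of use

# The payload changes between the check and the use:
payloads = iter(["https://example.org", "sms:+15550100"])
print(dispatch(lambda: next(payloads)))   # execute:sms:+15550100
```

The fix, as with any TOCTOU bug, is to read the value once and validate and use that same copy.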
That would be something unacceptable for me. You explained the reason very well.
Keep in mind that different software has different security levels. Software that cannot do much cannot do much harm, even through serious bugs. Software that can do harm must already have that capability implemented somewhere. And merely having the capability doesn’t mean it can be abused.
For example, QR codes can be read by thousands of apps. How would you try to hack the largest possible number of people via QR? By hunting for QR-scanner bugs? I don’t think so. That would be an enormous amount of work for a tiny success rate. You would just link to a web page and try to exploit browser bugs. Most people use Chrome on their phones, some Safari. But that’s just an ordinary ransomware-style attack and has almost nothing to do with the camera or ambient light sensor; the hack is the same as spam mail with malicious links.
You see, the danger of hacks that use the camera or ambient light sensor is really, really low - and even lower for apps like those on the L5, which are just dumb tools whose software is not very common. Even if someone wanted to attack a very specific target, it would be much easier to do it another way (Wi-Fi, Bluetooth, the browser, or even USB or a hardware modification).
My original question was motivated by an analogy to buffer overflow errors used in certain hacks. But, if there is good containment of the app, as has been pointed out, it should not be a problem.
I think that misses the hardware side and some interesting speculation: some (unknown) hardware bug or feature gets activated under certain conditions (such as particular light wavelengths or a light-transmitted code). Very theoretically, the sensor output could fall outside the normal range, leading to a malfunction in subsequent data handling, leading to a system fault. This could be (in order of likelihood, from almost within the realm of possibility to the very, very unlikely) a fault in an individual camera sensor module, in a batch, in a version/model, in all of one manufacturer’s products, or, if it were part of a standard/requirement, in all camera modules (eventually).
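A toy sketch of that failure mode, under the assumption (purely for illustration) that some handler indexes a lookup table by the raw sensor value without clamping it first:

```python
# Speculative sketch: a naive handler trusts the raw sensor value and
# "malfunctions" on an out-of-range reading; a defensive version clamps.
# The table and divisor are made up.
LEVELS = [0, 32, 64, 128, 255]

def naive_level(raw: int) -> int:
    return LEVELS[raw // 256]          # assumes raw is always < 1280

def safe_level(raw: int) -> int:
    idx = min(max(raw, 0) // 256, len(LEVELS) - 1)
    return LEVELS[idx]

print(safe_level(5000))                # clamped, no fault
try:
    naive_level(5000)                  # IndexError - the "system fault"
except IndexError:
    print("fault in subsequent data handling")
```

An out-of-spec hardware reading plus a handler that assumes the spec is exactly the kind of combination being speculated about here.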
Maybe this didn’t even have anything to do with the camera and was just a happenstance…?