You do it because it makes an attacker’s life harder: now I have to find two bugs instead of one.
The entire boot chain of the phone, up to the apps you run, is verified successively: each piece of code is checked by the component that loads it, and a digital signature helps ensure that only trustworthy code ever runs. To load malware, an attacker first has to find a bug that bypasses these checks. For example, a bug in a web browser’s image-handling code might cause it to load code that isn’t checked; that’s how the malware gets smuggled onto the phone.
This means that if you get hacked via one bug and malware is loaded, the attacker still has a harder problem to solve: how do I convince the phone to load it again at boot, when the code it’s made of isn’t approved code? When you reboot, you are effectively forcing a validation that all the code you have running is authentic, which would exclude the malware. Trick me once, sure, but can you survive a full pat-down? Probably not. It’ll get caught.
Unless I have a second bug to fool the normal code-loading systems too, the malware can’t run. I have to go back and trigger the first bug again somehow, which places even more strain on me as the attacker.
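If it helps to picture the chain-of-trust idea, here is a toy sketch of it, not real Android Verified Boot code; the stage names, payloads, and keys are all invented. The loader only runs a stage whose signature verifies against a vendor key it already trusts:

```kotlin
// Toy model of a verified boot chain. NOT real Android Verified Boot code;
// it just shows the idea: a stage only runs if its signature verifies
// against a key the loader already trusts.
import java.security.KeyPairGenerator
import java.security.PublicKey
import java.security.Signature

class BootStage(val name: String, val code: ByteArray, val signature: ByteArray)

fun verifyStage(stage: BootStage, trustedKey: PublicKey): Boolean = try {
    val verifier = Signature.getInstance("SHA256withRSA")
    verifier.initVerify(trustedKey)
    verifier.update(stage.code)
    verifier.verify(stage.signature)
} catch (e: Exception) {
    false // a malformed signature counts as a verification failure
}

fun main() {
    // The "vendor" key pair; in reality only the public half is baked into the device.
    val vendorKeys = KeyPairGenerator.getInstance("RSA").apply { initialize(2048) }.generateKeyPair()
    val signer = Signature.getInstance("SHA256withRSA").apply { initSign(vendorKeys.private) }
    fun sign(code: ByteArray): ByteArray { signer.update(code); return signer.sign() }

    val bootloader = BootStage("bootloader", "bootloader v1".toByteArray(), sign("bootloader v1".toByteArray()))
    val kernel = BootStage("kernel", "kernel v1".toByteArray(), sign("kernel v1".toByteArray()))
    // Malware injected at runtime has no signature from the vendor key,
    // so the re-check forced by a reboot rejects it.
    val implant = BootStage("implant", "evil payload".toByteArray(), ByteArray(256))

    for (stage in listOf(bootloader, kernel, implant)) {
        val ok = verifyStage(stage, vendorKeys.public)
        println("${stage.name}: ${if (ok) "verified, loading" else "REJECTED, refusing to load"}")
        if (!ok) break
    }
}
```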
Thanks for taking the time to write that out. I found it really helpful.👍
I love to talk about computer security. I don’t get the chance often enough.
I hope you get more chances to do so; you explained the situation in a much better way than the article and convinced me to reboot my phone.
You restart your phone because of security.
I ‘restart’ my phone because it’s overheated and its battery has dropped to 0%.
We’re not the same.
Thank you, friend. You’ve convinced me to restart my phone.
Personally, I restart mine maybe once a week. No need to go crazy with it, but it helps make life harder for bad actors and might make your phone run better.
I hope to hear you* on Darknet Diaries hahaha
If you have a blog where you talk about that, I would like to read it.
Exactly, as you already explained in detail, this is primarily for security.
GrapheneOS has a feature that automatically reboots the phone after a set amount of time without an unlock. So if a bad actor gets hold of your phone, they only have that window of time with a running system that has been unlocked since boot. If you use the phone normally and unlock it at regular intervals, it never auto-reboots. This is especially neat if your threat level is not “investigative journalist” or “political activist on the run”, because then you can set a longer interval and the phone won’t reboot every night while you’re asleep, which would also lock the SIM card and leave nobody able to call you…
I remember this feature, and I wish it was a standard Android feature. It sounds like it would be trivial to implement and could be completely optional.
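For what it’s worth, the core logic really would be small. Here is a rough sketch of the idea as a privileged Android component; this is not GrapheneOS’s actual implementation, and android.permission.REBOOT is only granted at the system level, which is exactly why it has to ship as an OS feature rather than as a normal app:

```kotlin
// Sketch of an "auto-reboot if not unlocked for N hours" policy as a
// privileged system component. NOT GrapheneOS's actual implementation.
// Manifest/receiver registration is omitted; android.permission.REBOOT
// is required and only granted to system-level apps.
import android.app.AlarmManager
import android.app.PendingIntent
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.os.PowerManager
import android.os.SystemClock

const val TIMEOUT_MS = 18 * 60 * 60 * 1000L // e.g. 18 hours; user-configurable in the real feature

// Re-arms the deadline every time the user unlocks the device.
class UnlockReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action == Intent.ACTION_USER_PRESENT) scheduleRebootDeadline(context)
    }
}

fun scheduleRebootDeadline(context: Context) {
    val alarmManager = context.getSystemService(AlarmManager::class.java)
    val deadline = PendingIntent.getBroadcast(
        context, 0, Intent(context, RebootReceiver::class.java),
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_IMMUTABLE
    )
    // Re-setting the alarm with the same PendingIntent replaces the old deadline.
    alarmManager.setExactAndAllowWhileIdle(
        AlarmManager.ELAPSED_REALTIME_WAKEUP,
        SystemClock.elapsedRealtime() + TIMEOUT_MS,
        deadline
    )
}

// Fires only if no unlock pushed the deadline forward in time.
class RebootReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        context.getSystemService(PowerManager::class.java)
            .reboot("auto-reboot: no unlock within timeout")
    }
}
```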
I wonder if Tasker could do it… 🤔
I don’t think applications can reboot the phone.
Technically they can… but it requires root, which, in the context of this conversation, yeah, you’re right, lol
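Right, on a rooted device the whole trick is just shelling out to su. A sketch of what such a hypothetical helper could look like; it only works where a su binary is installed and the app has been granted root:

```kotlin
// On a rooted device only: ask the su daemon to run the reboot command.
// On a stock device there is no su binary, so this simply fails.
fun rebootWithRoot(): Boolean = try {
    val process = Runtime.getRuntime().exec(arrayOf("su", "-c", "reboot"))
    process.waitFor() == 0
} catch (e: Exception) {
    false
}
```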
But that only works for untrusted code escaping a sandbox, right? It does not help with malicious code embedded into legitimate-seeming apps. The latter vector seems easier, especially on Android, no?
I don’t really consider a malicious app to be an exploit. In this case, the software is doing exactly what it was designed to do: malicious activity. It’s not being manipulated into performing unintended operations through the exploitation of a software bug. Code signing and secure boot are not effective against intentionally shipping malicious code to end users; they’re designed to frustrate actual hackers.
For malicious-by-design apps, we rely on a central app store that hopefully reduces the number of bad apps in circulation. If you publish malware, eventually you get caught and we know who you are. Sandboxing with a permissions system helps prevent apps from performing actions contrary to the user’s interests. E.g. why is my flashlight app asking for my contacts when I pressed ‘change color?’
If you directly exploit your way in, it’s harder to know who did this and why because you didn’t go through any central vetting or accountability system, and you’re not so easily bound by the permissions system. It depends on what your bad guy’s goals are, what they want, whom they’re targeting. Force your way in the back entrance, crawl through an open window (like a weak security setting), or lie your way in the front door (trojan)? It depends.
None of it is perfect, but I’m sure OS design experts would love to hear about better solutions if any exist.
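To make the flashlight example concrete, here is a minimal sketch of how the runtime permission model gates that kind of access. This is hypothetical app code, not any real flashlight app:

```kotlin
// Even if a flashlight app's code tries to read contacts, the query fails
// unless the user has explicitly granted READ_CONTACTS at runtime.
import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.provider.ContactsContract

fun tryToReadContacts(context: Context): Int? {
    val granted = context.checkSelfPermission(Manifest.permission.READ_CONTACTS) ==
        PackageManager.PERMISSION_GRANTED
    // Without the grant, the OS would throw a SecurityException on the query.
    if (!granted) return null

    // With the permission granted the app can enumerate contacts, which is
    // exactly why a flashlight app asking for it should raise eyebrows.
    context.contentResolver.query(
        ContactsContract.Contacts.CONTENT_URI, null, null, null, null
    )?.use { cursor -> return cursor.count }
    return null
}
```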
Your explanations really are poetry.
Wouldn’t a malicious app still be an exploit, though? If I download an app for playing a game, but it was also designed to upload my private photos to the attacker’s server, I’d say that’s still exploiting. It’s just exploiting my expectations of what the app should do, rather than leveraging a system weakness (which it probably does anyway).
I miss my BlackBerry and its scheduled reboot option
Pixels with GrapheneOS can reboot automatically after a number of hours with the screen off (unattended because you are sleeping). But this would also interfere with WhatsApp backups, which happen overnight.
Samsung phones also have a reboot schedule option
In addition, they also reboot automatically if the phone detects performance degradation.