
I'd like to tweet a series of findings that are typical for an embedded device, in ascending order of severity. This could be any device, but it's fairly typical for a Linux-based IoT product.
52 replies and sub-replies as of Nov 19 2017

1. Out-of-date CA bundle - lots of devices ship with a CA bundle from 2012 or before. This means that many certs have expired and some CAs are no longer trusted. Developers will switch off cert validation as a result.
2. Device lacks secure storage - the SoC used by the device has no provision to securely store keys or confidential material. An attacker with physical access can recover keys, certificates, hashes, and passwords.
3. Factory reset not correctly implemented - either configuration (such as the user's SSID/PSK), authentication material (certificates etc.), or data (stored videos) is not deleted or renewed on reset.
4. Encryption implementation issues - a custom protocol used by the device does not implement crypto correctly. Examples - encrypt without MAC, hardcoded IV, weak key generation (see the sketch after this list).
5. System not minimised - the system is running services and processes that aren't used. It's common to find a web UI running on a device, undocumented and useless to consumers.
6. Serial consoles enabled - either raw serial or USB serial is enabled, allowing the bootloader, a login prompt, or an unprotected shell to be accessed.
7. WiFi connection process exposes SSID/PSK - it's very common for devices to use a WiFi AP or BLE to let the app communicate the user's SSID/PSK for connection, often in the clear. The attacker does need physical proximity.
8. Firmware not signed - this allows someone to create malicious firmware and deploy it to devices. Firmware not encrypted - this makes it much easier to examine firmware and find issues.
9. Busybox not minimised - busybox has been built with every single tool possible, providing a rich set of tools for an attacker to use.
10. Root user allowed to login - the root user either has no password or a hardcoded one. Combined with another vulnerability, this lets an attacker log in and use the system.
11. Compile-time hardening not used - PIE/NX/ASLR/RELRO/FORTIFY_SOURCE haven't been enabled. These make exploiting buffer overflows harder (see the build sketch after this list).
12. Unsafe functions used - strcpy/sprintf/gets are used heavily in binaries found on the system. These are closely associated with buffer-overflow-tastic systems.
13. All processes run as root - the principle of least privilege isn't followed. Lots of devices could drop privileges, but don't. An attacker who compromises a process has no need to privesc.
14. Device does not validate SSL certificates - the HTTPS communications used by the device can be man-in-the-middled by an attacker. This can lead to serious compromise, especially if firmware updates are delivered over this channel.
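To make finding 4 concrete, here is a minimal sketch of getting it right, assuming OpenSSL's EVP API is available on the device. It uses AES-256-GCM with a fresh random IV per message and an authentication tag, avoiding the hardcoded-IV and encrypt-without-MAC anti-patterns; the function and buffer names are illustrative.

```c
/* Sketch of authenticated encryption with a fresh IV, using OpenSSL's
 * EVP API (AES-256-GCM). Names are illustrative, not from the thread. */
#include <openssl/evp.h>
#include <openssl/rand.h>

#define IV_LEN  12  /* 96-bit IV, the recommended size for GCM */
#define TAG_LEN 16

/* Encrypts pt[pt_len] with key[32]; writes a fresh IV, the ciphertext,
 * and the auth tag. Returns ciphertext length, or -1 on error. */
int seal_message(const unsigned char *key,
                 const unsigned char *pt, int pt_len,
                 unsigned char *iv, unsigned char *ct, unsigned char *tag)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len, ct_len = -1;

    if (!ctx)
        return -1;

    /* Never hardcode the IV: generate a fresh one per message. */
    if (RAND_bytes(iv, IV_LEN) != 1)
        goto out;

    if (EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv) != 1)
        goto out;
    if (EVP_EncryptUpdate(ctx, ct, &len, pt, pt_len) != 1)
        goto out;
    ct_len = len;
    if (EVP_EncryptFinal_ex(ctx, ct + len, &len) != 1) { ct_len = -1; goto out; }
    ct_len += len;

    /* The GCM tag authenticates the ciphertext - the "MAC" that
     * finding 4 says is so often missing. */
    if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, TAG_LEN, tag) != 1)
        ct_len = -1;
out:
    EVP_CIPHER_CTX_free(ctx);
    return ct_len;
}
```

AES-GCM gives confidentiality and integrity in one pass; separate encrypt-then-MAC also works, but leaves more room to get the composition wrong.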
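And the build sketch promised in findings 11 and 12: hardening flags at compile time plus bounded string functions in place of strcpy/sprintf. The gcc invocation and the struct/function names are illustrative, not taken from any real device.

```c
/* Sketch for findings 11 and 12. Build with hardening enabled, e.g.:
 *
 *   gcc -O2 -fPIE -pie -fstack-protector-strong -D_FORTIFY_SOURCE=2 \
 *       -Wl,-z,relro -Wl,-z,now -o app app.c
 *
 * (NX and ASLR are enforced by the kernel/loader, but a binary only
 * benefits fully from ASLR if it is built as PIE.) */
#include <stdio.h>

#define NAME_MAX_LEN 32

struct device_cfg {
    char name[NAME_MAX_LEN];
    char banner[64];
};

void set_name(struct device_cfg *cfg, const char *user_input)
{
    /* Anti-pattern from finding 12 - no bounds checking:
     *   strcpy(cfg->name, user_input);
     *   sprintf(cfg->banner, "Device: %s", user_input);
     */

    /* Bounded alternatives: snprintf always NUL-terminates and
     * never writes past the end of the buffer. */
    snprintf(cfg->name, sizeof(cfg->name), "%s", user_input);
    snprintf(cfg->banner, sizeof(cfg->banner), "Device: %s", cfg->name);
}
```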
CONCLUSION: The device and system can't be immediately compromised. But in the event of another vulnerability being found, there is little stopping an attacker from totally owning the device.
Once again, the problem is that adversarial research has conditioned vendors to ignore this level of finding. They aren't remote root access; not every device has been compromised. So often, these issues will not be fixed.
This is a real problem. The solution isn't pen-testing and remediation. The solution is secure development practices. All of these are known.
I wonder if an IoT security-focused microkernel could gain enough traction to minimize some of these issues. Stuff like HTTPS could be kernel-enforced, minimizing devs' ability to take part in bad practices.
I do think it's safe to say that as long as bad dev practices are possible, they will be taken. Most devs are never taught the risks of writing unsafe code. Spend 20 minutes on Stack Overflow and you'll see several examples of bad/dangerous advice.
Combine that with the fact that, unfortunately, a lot of embedded devs have no idea what they're doing and copy-paste code, and you have a situation where if devs are left to their own devices, they'll violate best practices :/
There needs to be some way to enforce good coding/development practices, or the responsibility for safe coding needs to be entirely lifted from the dev's shoulders, imo.
I am often conflicted about the solution to this. Give them C and you get buffer overflows. Give them a scripted language and you get logical errors. Which is worse?
An example along those lines is @electricimp - it is very hard for the dev to mess up the device side with it.
Logical errors that do not affect key, upgrade or link security are much less bad, though, which is why our architecture looks like it does. The problem with sharp tools is that they are sharp. Using them only when necessary is good practice.
Looks like it's a tough thing to sell, e.g. brian.mastenbrook.net/blog/2016/all-…
You forgot my fave - universal device secrets, where compromise of one device yields secrets (keys, passwords, certificates, even VPN details) that can lead to compromise of all devices or hosted services. (Though 3 kinda covers it.)
This is pretty fundamental and needs to be taken into account early in the design phase so that your implementation doesn't require secrets on the device. Patching this later can be costly and lead to using crypto, which is easy to mess up.
Yep - the system should be designed so that it only holds keys that impact it. But this is still rare.
Good list. Although, "fixing" some of these increases the cost of 3rd party security or forensic analysis. Any thoughts on the trade-off between "raising the bar" vs forcing whitehats to write jailbreaks? :)
I agree. I have a lot of thoughts around this. Maybe a tweet thread tonight or tomorrow?
All of this also conspires to make fixing devices in the field properly quite hard, and expensive. Attempting to fix will often also surface new issues, leading to a cascading set of problems; one card, and the whole house comes down 😕
Even a low impact vulnerability could be a doorway for a huge attack. Companies fail to understand that.
Wait, I can understand no ASLR I guess, but doesn't NX require intentional disabling nowadays? It's 2017.
I mainly put that there for completeness, but it isn't always enabled.
Code injection on the stack hasn't been a major threat since 2005. What on Earth...
It is one of the things I can't explain.
No measures to check if firmware was tampered with on boot
Funny, I know it's a bad thing, but the console makes a lot of devices very useful to me.
I agree. It's a dilemma really - do you cater for hobbyists/makers/whitehats by having a serial console? About to do a tweet thread.
hehehe.... I once found a satellite router which went out of the vendor with a copy of Doom pre-installed :-)
Who are the developers here, and why would they switch off validation instead of updating the bundle? Trying to understand, as it sounds like an outdated bundle would just break the device's connectivity instead.
The bundle is there. They run curl, it doesn't work. -k, simple solution (see the sketch below).
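That -k habit carries straight over into code. Here is a sketch of the anti-pattern and the fix using libcurl, with the URL handling and the CA bundle path as placeholder assumptions:

```c
/* Sketch of the "-k" anti-pattern and its fix in libcurl.
 * The bundle path is illustrative. */
#include <curl/curl.h>

int fetch(const char *url)
{
    CURL *curl = curl_easy_init();
    CURLcode res;

    if (!curl)
        return -1;

    curl_easy_setopt(curl, CURLOPT_URL, url);

    /* Anti-pattern - the programmatic equivalent of `curl -k`,
     * disabling all certificate checks:
     *   curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
     *   curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);
     */

    /* Fix: keep verification on (the default) and ship a current
     * CA bundle with the firmware. */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
    curl_easy_setopt(curl, CURLOPT_CAINFO, "/etc/ssl/certs/ca-bundle.crt");

    res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return (res == CURLE_OK) ? 0 : -1;
}
```

The fix is the one the question above implies: update the bundle, don't disable validation.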
I'd add secure boot in relation to signed firmware - the IC can validate external flash before execution.
Totally agree. I normally have that as a separate finding, and probably should have put it here.
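As a sketch of what that separate finding might look like in practice, here is application-level signature verification of a firmware image, assuming OpenSSL 1.1.1+ and a vendor public key shipped on the device. The key path and function name are illustrative; a real secure-boot chain would anchor this check in the IC/boot ROM, as described above.

```c
/* Sketch of verifying a firmware image's signature before flashing,
 * per finding 8 and the secure-boot replies above. Uses OpenSSL's
 * EVP_DigestVerify API; key loading and names are illustrative. */
#include <openssl/evp.h>
#include <openssl/pem.h>
#include <stdio.h>

/* Returns 1 if sig[sig_len] is a valid signature over img[img_len]
 * under the vendor public key shipped on the device; 0 otherwise. */
int firmware_signature_ok(const unsigned char *img, size_t img_len,
                          const unsigned char *sig, size_t sig_len)
{
    FILE *fp = fopen("/etc/keys/vendor_pub.pem", "r"); /* illustrative path */
    EVP_PKEY *pkey = NULL;
    EVP_MD_CTX *ctx = NULL;
    int ok = 0;

    if (!fp)
        return 0;
    pkey = PEM_read_PUBKEY(fp, NULL, NULL, NULL);
    fclose(fp);
    if (!pkey)
        return 0;

    ctx = EVP_MD_CTX_new();
    if (ctx &&
        EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pkey) == 1 &&
        EVP_DigestVerify(ctx, sig, sig_len, img, img_len) == 1)
        ok = 1; /* only flash the image if this check passes */

    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(pkey);
    return ok;
}
```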
I just learned a metric ton from this one thread, thanks a lot!
Cool! That is what I want to happen.
Recommended reading.
Made a moment out of this to keep it more organized 🙃 thanks a bunch!