This Week In Security: Insecure Chargers, Request Forgeries, And Kernel Security

The folks at Pen Test Partners decided to take a look at electric vehicle chargers. Many of these chargers are WiFi-connected, and let you check your vehicle’s charge state via the cloud. How well are they secured? Predictably, not as well as they could be.

The worst of the devices tested, Project EV, didn’t actually have any user authentication on the server-side API. Knowing the serial number was enough to access the account and control the device. The serial numbers are predictable, so taking over every Project EV charger connected to the internet would have been trivial. On top of that, arbitrary firmware could be loaded onto the hardware remotely, representing a real potential problem.
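
To illustrate just how low that bar is, here’s a minimal Python sketch of the pattern. The endpoint, URL, and serial format are entirely invented for illustration; this is not Project EV’s real API.

```python
# Hypothetical sketch of "serial number as authentication". The base URL and
# serial format are made up; only the pattern matters.
import requests

BASE = "https://charger-api.example.invalid/v1/chargers"

def charger_status(serial: str):
    """No credentials or token needed -- the serial number *is* the auth."""
    try:
        r = requests.get(f"{BASE}/{serial}/status", timeout=5)
        return r.json() if r.ok else None
    except requests.RequestException:
        return None

# Because the serials are predictable, "knowing one serial" is effectively
# "knowing every serial": an attacker can simply walk the space.
for n in range(1000, 1010):
    print(charger_status(f"EV{n:08d}"))
```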

The EVBox platform had a different problem, where an authenticated user could simply specify a security role. The tenantadmin role was of particular interest here, working as a superadmin that could see and manage multiple accounts. This flaw was patched within an impressive 24 hours. The EVBox charger, as well as several other devices they checked, had fundamental security weaknesses due to their use of Raspberry Pi hardware in the product. Edit: The EVBox was *not* one of the devices using the Pi in the end product.

Wait, What About the Raspberry Pi?

Apparently the opinion that a Raspberry Pi didn’t belong in IoT hardware caught Pen Test Partners some flak, because a few days later they published a follow-up post explaining their rationale. To put it simply, the Pi can’t do secure boot, and it can’t do encrypted storage. Several of the flaws they found in the chargers mentioned above were discovered because the device filesystems were wide open for inspection. A processor that can handle device encryption, ideally better than the TPM and Windows BitLocker combination we covered last week, gives some real security against such an attack.

Now Linux on the Pi can certainly do an encrypted filesystem, but the real problem is the storage of the encryption key. Without a secure enclave in the SoC, it’s very tricky to have an encryption key that isn’t trivially read by an attacker with physical access. On a laptop it’s not a problem, since the user can provide a password that’s used as part of the encryption key, but who wants to type a password into every IoT device every time they power them on?
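
Here’s a rough sketch of that difference, purely for illustration. Real systems use LUKS/dm-crypt rather than anything like this, and the key path is made up.

```python
# Minimal sketch of the key-storage problem on a headless device. This is not a
# real disk-encryption setup; actual systems use LUKS/dm-crypt.
import hashlib

def key_from_password(password: str, salt: bytes) -> bytes:
    # Laptop case: the key is re-derived at boot from a secret only the user
    # knows, so it never has to live on the disk at all.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

def key_from_file(path: str = "/boot/disk.key") -> bytes:
    # Headless IoT case: with no secure enclave to seal the key, it ends up in
    # plain storage next to the data it protects -- pull the SD card and read it.
    with open(path, "rb") as f:
        return f.read()
```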

Snapcraft Sideloading

[Amy Burnett] found something on an Ubuntu system that didn’t make sense — a Docker command throwing a segfault. What was even weirder, it only happened when running the command in a particular folder, one that also contained a libc.so.6 file. Her security-sense tingled: that library file was probably getting loaded when the docker command was run. A quick strace confirmed the theory, but why was that happening? The answer is a security vulnerability in Snapcraft, the tooling behind Ubuntu’s snap packages. Ubuntu has started providing certain programs as snaps rather than as traditional packages.

The culprit is the Snapcraft logic used to build the LD_LIBRARY_PATH environment variable. If one of the path components used to build that variable is blank, you end up with a double colon in the string. The dynamic linker interprets an empty entry as the current directory, so running a package installed via Snapcraft can unintentionally load dynamic libraries from whatever folder you happen to be in. A suggested attack is to distribute an archive containing a video file alongside a malicious library: any user who extracts the files and plays the video in a Snapcraft-installed player will automatically load the malicious library. The problem was tracked as CVE-2020-27348 and fixed in late 2020.
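
A toy reproduction of the path-joining mistake looks something like this. The variable names are invented; the behavior of the empty field is the point.

```python
# Toy reproduction of the path-joining bug behind CVE-2020-27348 (names invented).
snap_lib = "/snap/myapp/current/lib"
extra_lib = ""  # unset or blank in the vulnerable configuration

ld_library_path = ":".join([snap_lib, extra_lib, "/snap/myapp/current/usr/lib"])
print(ld_library_path)
# -> /snap/myapp/current/lib::/snap/myapp/current/usr/lib
# The empty field between the two colons is treated by the dynamic linker as
# "the current working directory", so a libc.so.6 sitting in whatever folder
# you happen to run the snap from gets picked up first.
```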

Request Forgeries

A trio of stories about request forgeries surfaced this week, the first being a cross-site request forgery (CSRF) on OkCupid. To start, a CSRF attack is when visiting one website triggers an action on a different website. The browser’s same-origin policy and Cross-Origin Resource Sharing (CORS) rules are supposed to prevent this, but there are caveats you should know about. The important one here is that an HTML form can send a POST to another domain even without any CORS headers permitting it. The common way to protect against this attack is a CSRF token that confirms the request really is coming from an approved site. OkCupid didn’t use these tokens, and as such it was possible to build a web page that triggers an action on behalf of the user.
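
For the curious, here’s a minimal sketch of token-based CSRF protection, using Flask (2.x) purely as an illustration; nothing here reflects OkCupid’s actual stack or endpoints.

```python
# Minimal sketch of per-session CSRF tokens. Flask is used only for illustration.
import secrets
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # needed for the signed session cookie

@app.get("/settings")
def settings_form():
    # Embed a per-session random token in the form we render.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return (
        '<form method="post" action="/settings">'
        f'<input type="hidden" name="csrf_token" value="{token}">'
        '<button type="submit">Save</button></form>'
    )

@app.post("/settings")
def settings_submit():
    # A form POSTed from an attacker's page can't read our HTML, so it can't
    # know the token; any request that doesn't echo it back gets rejected.
    if request.form.get("csrf_token") != session.get("csrf_token"):
        abort(403)
    return "saved"
```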

One of the other common request forgery patterns is the Server-Side Request Forgery (SSRF). This one is a bit different: here we fool a server into generating an unintended request, usually in the context of a front-end server sending traffic to non-public back-end services. In this story it’s the ability to include an internal URL as a parameter when calling the Facebook API. It seems the API endpoint naively accepted any URL as a valid image, even if that location was one that shouldn’t be publicly accessible. The response leaked the contents of the internal endpoint, allowing the researcher to snag a canary token and score a pair of $30,000 bounties.
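
As a generic illustration of the pattern (none of this reflects Facebook’s actual API), a server-side image fetcher with and without a basic internal-address check might look like this. Note that the check shown is a common mitigation, not a complete one; redirects and DNS tricks can still bite.

```python
# Generic SSRF sketch: a fetcher that trusts any URL versus one that refuses
# obviously internal targets. Purely illustrative.
import ipaddress
import socket
from urllib.parse import urlparse

import requests

def fetch_image(url: str) -> bytes:
    # Vulnerable shape: fetches whatever URL the caller provides, including
    # addresses that are only reachable from inside the network.
    return requests.get(url, timeout=5).content

def fetch_image_checked(url: str) -> bytes:
    # One common (and still imperfect) mitigation: resolve the host and refuse
    # loopback/private/link-local targets before fetching.
    host = urlparse(url).hostname or ""
    addr = ipaddress.ip_address(socket.gethostbyname(host))
    if addr.is_loopback or addr.is_private or addr.is_link_local:
        raise ValueError(f"refusing to fetch internal address {addr}")
    return requests.get(url, timeout=5).content
```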

The final story broke late in the week, and it’s all about HTTP/2. This relatively new protocol is a potential replacement for HTTP/1.1, and is all about making the web quicker and more flexible. Guess what comes with a new protocol. Yeah, new creative ways to break it. [James Kettle] of PortSwigger covers quite a few potential vulnerabilities related to request smuggling, mostly involving HTTP/2 translation to HTTP/1.1 by a front-end server. These attacks are things like including colons or newlines in HTTP/2 fields, where those characters are interpreted very differently once translated to HTTP/1.1.
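
Here’s a toy model of the downgrade problem. The naive “front end” below is invented, and the real attacks in the write-up are far more nuanced, but it shows why this class of bug exists: HTTP/2 carries header values as length-prefixed binary fields, so a value can legally contain bytes like CRLF that are structural in HTTP/1.1 text.

```python
# Toy HTTP/2 -> HTTP/1.1 downgrade, with no re-validation of header values.
h2_headers = {
    ":method": "GET",
    ":path": "/anything",
    ":authority": "example.com",
    "x-harmless": "junk\r\n\r\nGET /wp-admin HTTP/1.1\r\nHost: example.com",
}

def naive_downgrade(headers: dict) -> str:
    """Rebuild an HTTP/1.1 request string from HTTP/2 header fields."""
    req = f'{headers[":method"]} {headers[":path"]} HTTP/1.1\r\n'
    req += f'Host: {headers[":authority"]}\r\n'
    for name, value in headers.items():
        if not name.startswith(":"):
            req += f"{name}: {value}\r\n"  # embedded CRLFs split the request here
    return req + "\r\n"

print(naive_downgrade(h2_headers))
# The back end now sees *two* requests: the innocuous GET /anything, followed
# by a smuggled GET /wp-admin that never passed the front end's access rules.
```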

The most important vulnerability announced is probably CVE-2021-33193, a flaw in Apache’s mod_proxy. It’s a problem where whitespace in an incoming header is understood differently by the front-end HTTP/2 server than it is on the back-end. This allows an attacker to ask for a privileged endpoint, say /wp-admin, while disguising that request as something uninteresting, bypassing the access rules protecting those locations. The flaw is fixed in Apache master and will be part of the 2.4.49 release, but here we are talking about the vulnerability, and 2.4.49 isn’t out yet. If you run a vulnerable server, it might be time to disable HTTP/2.

Hallucinating TLS Decryption

SySS just released Hallucinate as an open source project. This one is all about decrypting TLS traffic, not on the wire, but by hooking into the OS or application that is doing the encryption. The potential use cases are quite wide. Trying to figure out what data a closed-source binary is sending up to the cloud? Troubleshooting a hard-to-pin-down bug in encrypted data? Hallucinate might just help. It can spit out a decrypted PCAP file, or even run Python scripts to manipulate encrypted data in real time. Definitely a useful trick to add to your library.

Google’s Take on Kernel Security

[Kees Cook] of Google’s Open Source Security Team published a post this week, talking about the state of security in and around the Linux kernel. He makes the point that while the kernel runs very well when things are working properly, when it breaks, it can break in insecure ways. Put another way, he would like to see more work done to make the kernel resilient to compromise even in the case of flaws. While the changes needed to do this aren’t spelled out in the post, I can only think of efforts like adding Rust to the kernel and doing additional address randomization.

The majority of the post isn’t aimed at the upstream kernel, but at downstream integrators. The advice here is simple: track the latest release or stable kernel. Don’t use a 10-year-old kernel. Is that a challenge because you have so much out-of-tree kernel code? Upstream your changes. It makes everyone more secure. Rather than spending so much engineering effort backporting fixes to your ancient kernel, spend that effort making the upstream kernel more secure. It’s interesting that he ends the article with the opinion that the Linux kernel and toolchain need about 100 more skilled engineers to be effectively maintained.

13 thoughts on “This Week In Security: Insecure Chargers, Request Forgeries, And Kernel Security”

  1. Complaining about the Pi’s lack of secure boot and encryption seems a bit inane to me. It was never meant for most of us adults. Wasn’t it a device for young learners? We invaded their ecosystem, we are all welcome in it, but the compute modules and all are just a side business. Why the high expectations towards a learning device?

    1. yes. I’m a bit late here, but yes, it was not only meant and created as an educational device (more for electronic engineering students than “young learners”), but that was also why neither Intel nor any of the big SBC players immediately crushed them financially with competition. It is a weird, through-the-looking-glass version of IBM deciding to make the first IBM PC with off-the-shelf parts, a “crippled” 16-bit, 8-bit-bus 8088, and a cobbled-together operating system from the provider of microcomputer BASIC, Microsoft. Of course they always had something like OS/2 in mind, but a lot of these decisions were made so that the PC wouldn’t cannibalize their existing lines of terminals, minicomputers and mainframes. And this plan backfired badly.
      For Raspberry Pi, there were decisions made to go cheap rather than functional, and they were worried about the threat of competition from without rather than within. That’s specifically why it was allowed to be so cheap, not Open Source Hardware, but still pretty “open” and definitely with Open Source software Operating Systems in mind. It’s amazing how Gabriel’s “Worse is Better” essay applies in these cases. Not only is complaining about the lack of security in the RPi ecosystem silly, but the fact that this microprocessor board is being used where microcontrollers are obviously the better solution, and its use in production products at all, is silly. It’s a kludge. But it’s also testament to “hacker” (in the old sense) ingenuity, and being able to throw 100 code-monkeys at problems is often more cost-efficient than MIT or Berkeley PhDs with LISP-machine elegance. “Worse is Better” is about Unix and C replacing LISP, yet it perfectly captures the entire “Wintel epoch”, Android/ARM winning over both Win10 phones and Linux phones, and in this case RPi ARM/Linux/Python over microcontroller and even FPGA solutions.
      Sorry for being long-winded, but I had to correct “Rasp Pi is for children”; last I checked, college students like to be referred to as adults, and perhaps there’s confusion here with the BBC Micro:bit. Yes, there’s been outreach from RPi to schoolchildren, but that was after it was established in higher education (just like AD&D, lol), the logical move when starting anywhere in the education market. Your post is otherwise on target, besides that little condescending, patronizing, ignorant point.

    2. I am quite surprised at this inane trend to use the Raspberry Pi in commercial devices… It is a wonderful learning tool, but it is still essentially a “toy”, a completely “open tool”… This is for learning and small local projects… it is an accessible and financially easy device to get one’s toes wet in the wider world of microprocessors and coding.

      It is an awesome device for controlling and operating smaller things that a person might want automated or that need more complex control, but it does NOT need to be connected to the freaking internet all the time!! That is my biggest complaint about the IoT… just because you *Can* does not mean you *should*… yes, there are specific reasons to do so in some cases… but substituting “convenience” for common sense and security is a fool’s game.

      I’ve always hated the world of multi-thousand-dollar systems where you needed a full-blown company to afford them, and those were the only places to learn and practice… You needed experience to get into the places where you could get the experience that they required. That is why it’s fine to use a Pi to figure out the Python or other coding and work out the issues with an inexpensive and easily replaceable piece of equipment, but then it should be ported to the final secure working hardware… The Pi is like an erector set… fantastic for learning the concepts and working out any problems, but you DON’T sell the erector set model as the final product!

  2. Unless I’m missing something, the mention of WP-admin in the HTTP/2 issue is disingenuous – WP-admin can be freely requested on most WP installations – access is controlled by WP itself, not by any server rules, and it’s not intended to be protected by the server rules. Indeed, most sites using Ajax would use it as part of the front-end behaviour, so blocking access would be unusual.

    1. Yup, re-reading the whole linked article, the vulnerability is nothing to do with WordPress; it’s about how one particular server (Bitbucket) had a specific rule set for wp-admin on their vulnerable HTTP/2 downgrade, ironically probably for security reasons, but hey…

  3. Complaining that the Pi doesn’t have the ability to hide encryption details from physical access is kinda daft – with physical access it’s always a question of how long/how much effort they can be bothered with to get in. Secure hardware does not really exist… It’s like any physical lock: it’s not that it can’t be opened without the right key, just that opening it with picks, or even needing really special tools, makes it hard to get access covertly.

    I would say the Pi CM is actually a better option than many others – because it’s so close to properly open (well documented, supported, and configurable etc) it allows folks like these security researchers to find and fix all the remote exploits more easily, which are more important; the local exploits might be somewhat fixable too. The Pi can handle a great deal of extra hardware, and it makes a failure less likely to turn the whole thing to e-waste. The compute modules have long-term availability, and if something on the carrier board fails and that board isn’t easy to replace, it’s still a useful computer…

    It’s not that I can’t see their point somewhat, and there are times I’d agree, but when physical access should require heaps of bolts removed, probably some of those warranty-void style stickers, worrying about that side of it seems overkill – just secure the internet-facing side properly. That the hardware isn’t as tough as it could be against physical access really doesn’t matter, as no hardware is truly secure against that…

    1. Except that the Pi has no secure boot support whatsoever, so physical access has nothing to do with it.

      Gain root remotely and there’s nothing to prevent someone from overwriting the bootloader.

      At least with UEFI Secure Boot there is an attempt to ensure that someone is physically present if an attempt is made to change or add public keys. The Pi’s boot chain doesn’t even make a minimal attempt at this.

        1. If you gain root remotely, the bootloader is hardly the biggest or most interesting target – in some cases maybe it is the best target for you, but the functional system and all its toys and secrets are basically wide open when you get remote root, secure boot or not, and for a car charger, owning the OS is more than enough – point it to update from your poisoned well, or just never to find updates, and you own it indefinitely. It’s not some super-secure facility under great scrutiny where you have to get sneakier with bootloader and firmware poisoning to get more persistent access. It isn’t going to reboot in any way that can break your remote ownership if you did it right, nor is it full of features that would need power cycling with firmware/bootloader-level changes to abuse…

        My point was not that lacking security features is good, more that being more readily researchable and better documented means flaws are much more likely to be found by the ‘good’ guys to prevent that remote root in the first place – which is actually the important bit! While also pointing out that physical access is a lousy complaint – as nothing is actually secure against that; just maybe, possibly, if you are a low-value or hard target, it is enough faff that perhaps nobody has or ever will bother…

  4. I agree with the comment above – there is a really big difference between something hackable over the net and something that requires physical access.

    For example, the car charger could well be in a locked garage, all behind a secure front fence. Yes, if someone breaks in they can compromise an IoT device that has its encryption key stored in flash or an SD card – but that is many, many orders of magnitude less of a problem than being hacked over the web!

    Yes, physical unhackability is important in some things. I wouldn’t have thought a private car charger was one of the lead cases though…

    1. Many chargers will be either outside the front of a house – in some countries it’s not normal to keep cars in a garage, even assuming you’ve got a garage – or public chargers.
      Security of a public device is much more important. A compromised device could – at the friendly end – be set up to steal keys and use them to get free charging, or – at the unfriendly end – be set to overvoltage certain cars to damage them, or potentially compromise the car itself via the charging port.

  5. > The EVBox charger, as well as several other devices they checked had fundamental security weaknesses due to their use of Raspberry Pi hardware in the product.

    This is, in fact, not true 🙂 EVBox does not use Raspberry Pis in any product in the field. The original https://www.pentestpartners.com/security-blog/smart-car-chargers-plug-n-play-for-hackers/ article also does not mention this :p

    (Internally, EVBox does use plenty of Raspberry Pis for other reasons, of course.)

    Maybe @editors can fix this 🙂
