This Week In Security: Selfblow, Encryption Backdoors, Killer Apps, And The VLC Apocalypse That Wasn’t

Selfblow (don’t Google that at work, by the way) is a clever exploit by [Balázs Triszka] that affects every Nvidia Tegra device using the nvtboot bootloader — just about all of them except the Nintendo Switch. It’s CVE-2019-5680, rated at an 8.2 according to Nvidia, but that high severity rating isn’t entirely reflective of the reality of the situation. Taking advantage of the vulnerability means writing to the boot device, which requires root access, as well as a kernel flag set to expose the boot partitions to userspace. This vulnerability was discovered as part of an effort by [Balázs] and other LineageOS developers to build an open source bootloader for Nvidia Tegra devices.

The Tegra boot process is a bit different, having several stages and a dedicated Boot and Power Management CPU (BPMP). A zero-stage ROM loads nvtboot to memory and starts it executing on the BPMP. One of the tasks of nvtboot is to verify the signature of the next bootloader stage, nvtboot-cpu. The file size and memory location are embedded in the nvtboot-cpu header. There are two problems here that together make this vulnerability possible. The first is that the bootloader binary is loaded to its final memory location before the signature verification is performed. The code is written to validate the bootloader signature before it starts executing on the primary CPU, so all is well, right?

The second problem with this bootloader code is that the memory load location is embedded in the firmware header, and that location is not verified prior to loading the next bootloader stage into memory. At this point, we should all know what happens once unrestricted memory writes are allowed. How exactly the exploit takes advantage of those unrestricted writes is particularly fun. The header instructs nvtboot to write the next bootloader binary on top of its own signature verification routine, blowing a hole in itself, hence the name. When nvtboot tries to call the function to verify that this file is properly signed by Nvidia, it instead jumps execution into the unsigned code. It’s elegant, effective, and blows the doors open for developing an open source bootloader for Tegra devices.
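To make the two problems concrete, here is a minimal sketch of that load-before-verify pattern. This is not Nvidia’s actual nvtboot code; the struct layout, function names, and sizes are hypothetical stand-ins, purely to illustrate how a header-controlled load address lets unsigned code overwrite the verifier before it ever runs.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical header layout: the next-stage image tells the bootloader
 * where to put it and how big it is. Both fields are attacker-controlled
 * once the boot partition can be written from userspace. */
struct boot_header {
    uint32_t load_addr;       /* destination address, taken on faith */
    uint32_t image_size;      /* copy length, also taken on faith    */
    uint8_t  signature[256];  /* signature over the next-stage image */
};

/* Stand-in for the signature check, which lives in the same RAM that
 * load_addr is allowed to point at. */
bool verify_signature(const void *image, uint32_t size, const uint8_t *sig)
{
    (void)image; (void)size; (void)sig;
    return true; /* placeholder for the real cryptographic check */
}

void load_next_stage(const struct boot_header *hdr, const uint8_t *image)
{
    /* Problem 1: the image is copied to its final location *before* the
     * signature check runs.
     * Problem 2: load_addr is never range-checked, so the header can aim
     * it squarely at verify_signature() itself. */
    memcpy((void *)(uintptr_t)hdr->load_addr, image, hdr->image_size);

    /* By the time execution reaches this call, the verifier may already
     * have been replaced with unsigned code, so the "check" simply jumps
     * into the attacker's payload. */
    if (!verify_signature((const void *)(uintptr_t)hdr->load_addr,
                          hdr->image_size, hdr->signature))
        return;

    /* ...hand the (supposedly verified) image off to the primary CPU... */
}

The obvious mitigations follow from the sketch: range-check the destination against the addresses a next stage is actually allowed to occupy, and verify the image in a staging buffer before anything lands at its final address.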

Encryption Backdoors

On Tuesday, Attorney General William Barr gave a speech at Fordham University. One of the topics he talked about was backdoors in encryption, specifically in consumer platforms. [Bruce Schneier] takes a look at the relevant sections of the speech and breaks them down. His take is optimistic, as he sees the conversation shifting away from a stubborn insistence that encryption backdoors are harmless. Now we can at least have the discussion about whether the societal damage from weakened encryption is worth the transparency it would provide to law enforcement.

Schneier’s position on this hasn’t changed, however. He maintains that the technology is neutral, and if you allow spying on the phones of consumers, you also allow spying on the phones of nuclear plant operators, CEOs, and elected officials. Security is security.

Code That Kills

What do you do when a medical company refuses to address vulnerabilities in medical equipment? You write a proof-of-concept exploit that can kill. In their defense, the researchers at QED Secure Solutions disclosed their killer app to the FDA and coordinated the public release after a voluntary recall.

The device in question is an insulin pump that has wireless control. The built-in authentication is limited to the device’s serial number, so the attack simply spams commands at all the possible serial numbers. Their work takes advantage of Software Defined Radio and, as tested, only works from a few feet away. But it was good enough to finally get insecure devices (voluntarily) recalled.
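To put that “authentication” in perspective, here is a conceptual sketch of the brute-force idea. The serial length, command byte, and send_command() below are illustrative assumptions, not the actual pump protocol or the researchers’ code.

#include <stdint.h>
#include <stdio.h>

/* Placeholder for transmitting one command frame over the SDR to the pump
 * addressed by the given serial number. */
void send_command(uint32_t serial, uint8_t command)
{
    printf("tx -> serial %06u, command 0x%02x\n", serial, command);
}

int main(void)
{
    const uint8_t CMD = 0x01; /* placeholder command byte */

    /* If the only "secret" is, say, a six-digit serial number, the whole
     * keyspace is just a million values -- small enough to sweep over
     * the air, no cryptography required. */
    for (uint32_t serial = 0; serial < 1000000; serial++)
        send_command(serial, CMD);

    return 0;
}

The design lesson is the same one the researchers were making: a serial number is an identifier, not a secret, and anything that treats it as a credential is effectively unauthenticated.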

VLC is Vulnerable?

The VLC news this week has been all over the place. First, VLC had an undisclosed vulnerability, and then more details came out about CVE-2019-13615, first classified as a Remote Code Execution vulnerability, with a score of 9.8 out of 10. VLC has been downloaded literally billions of times, so many were claiming that billions of computers were vulnerable.

The only problem with this sensational story is that the VLC devs were publicly claiming they couldn’t replicate the crash. As more and more information has leaked out, a clearer picture has emerged. Apparently the vulnerability that was found was actually in libebml, and had been found and patched over a year prior. The researcher who rediscovered the problem was working on a Linux machine that hadn’t been updated recently.

It’s not often that we get to see such a clear breakdown between the hype and reality of a vulnerability. As the VLC developers explained on Twitter, quite a few in the security community really jumped the gun in making such a big deal out of this bug. A big share of the blame needs to go to MITRE, the organization that manages the CVE process. They seem to have entirely failed to validate the vulnerability claim before assigning a CVE number with a ridiculously high rating.

Contrary to what you might have read: no, you don’t need to uninstall VLC right away; no, there aren’t billions of suddenly vulnerable computers; and no, the current release of VLC isn’t vulnerable to this particular bug. If you have the old libs, however, you’re long overdue for an update.

18 thoughts on “This Week In Security: Selfblow, Encryption Backdoors, Killer Apps, And The VLC Apocalypse That Wasn’t”

  1. I really like Bruce Schneier’s take on this.
    I always like to see how little politicians understand the real world when they keep saying ‘government’ is different than ‘consumer’ and ‘enterprise’.

    The software and technologies are now all the same.

    Speaking as someone who got to work on military equipment, that is not necessarily a bad thing either.
    Military systems all but rely on security through obscurity, which is never a good or practical solution.
    And while some of the encryption technologies are robust and, frankly, really cool, the underlying infrastructure is many years out of date, with surprisingly lax manufacturer interest in updating or patching.

  2. Scott Hanselman (who is the Principal Community Architect at Microsoft)
    has a lot to say about why it’s a GOOD THING you can HACK YOUR INSULIN PUMP, and such a little write-up is very lazy writing from the amazing Hackaday team on such an important subject.

    https://www.hanselman.com/blog/TheExtremelyPromisingStateOfDiabetesTechnologyIn2018.aspx

    HACKING the pump is a MUCH more interesting story than finding out that some amateur tried brute forcing a device and getting through. I also did something like this when I was 15. Congrats to me too

    1. I would agree with you if hacking the pump required a physical connection. It would be awesome for a user to be able to modify their own device… not so much for someone else to modify it without their consent.

      In my opinion, the refusal of the manufacturer to address the security of a medical device that people need to survive (and that can be used to kill them) is the story here. A proven exploit, brute force or not, shouldn’t be the only reason a medical device manufacturer issues a recall. A VOLUNTARY recall at that!

      It’s insane that the rigorous reliability testing that these devices must go through to be approved by the FDA does not include penetration testing on the wireless control system.

    2. An important note on hacking an insulin pump. (P.S. I have no experience with insulin pumps, but I do have some experience with industrial control systems and industrial oven controllers, which is what I’m basing this on.)

      I accept your statement that there is value in hacking these pumps to turn them into closed-loop control systems when connected to an implanted continuous glucose monitor. I assumed they were open-loop control systems into which you had to load a profile, like the heat/cool profile on an industrial oven.

      If the user wants to hack the device to make their own profiles, then is there a validation tool that they could use to verify if what they are doing could be life threatening before they install it on their pump?

      I’m bringing this up mostly because of a hard-to-find error I once introduced into a BACnet gateway device, which required me to write my own interface in the device’s scripting language to bridge between Modbus devices and the BACnet setup in a hospital. This had the nearly career-ending result of shutting down the hot water systems in the hospital for a few hours.

  3. “Now we can at least have the discussion about whether the societal damage from weakened encryption is worth the transparency it would provide to law enforcement.”

    What did law enforcement do before encryption was widely available? Never mind that bad people, as a rule, don’t have to use weakened encryption.

    1. Encryption has been around for about as long as there has been writing. I don’t think journalism was one of the first uses of writing, so we’d only know what law enforcement was like before encryption if there were oral stories passed down about crime prevention.
      “And lo did Tiabeth say unto Markadon ‘What was your location on the evening of the fourth day of the moon?’ and did Markadon reply ‘I was abiding alone within mine family domicile’.”

  4. As always, my argument is that the people who say we should backdoor encryption should be the early adopters and have to deal with the consequences of their actions. The problem is that computing has become so ubiquitous in our lives, and so few people actually understand the concepts and ideas it’s all built on, that it sounds like a good idea when law enforcement says backdoors will help them catch criminals. They fail to understand:

    – Law enforcement individuals will abuse the system for personal reasons, as they have several times before.
    – Such systems will hardly be securable so that only law enforcement can use them.
    – Backdoors need to cover every type of communication, or else criminals will simply use the type that isn’t compromised.
    – Backdooring all encryption will cause a loss of trust in anything done or read online.
    – Backdooring will have a profound impact on the economy, and in a bad way, as there will be no way to trust any e-commerce.
    – Backdooring will invite even more foreign influence into domestic affairs.
    – Criminals won’t stop doing something just because you made it more illegal.
    – Law enforcement really only has the resources to catch the dumb criminals or the big whales; the big whales won’t be using backdoored encryption, so this won’t make their jobs any easier, and there are too many dumb criminals to catch them all.

  5. Confusingly, “to effect” is also a verb, meaning something like “to cause something to happen”.

    But yeah, the article should use “affects”, not “effects”.

    Disclaimer: English is not my native language either, so I generally stick to the noun “effect” and the verb “to affect”.

    1. Well, if you want to start correcting things, the use of “it’s” is usually wrong, as in the 3rd paragraph. “It’s” always means “it is”, while “its” is the possessive form. BTW, sources tell me that the confusion between effect and affect has been around for about 500 years.

  6. Going to throw this under anonymous, to avoid the backlash (and in the event I’m wrong)… First sentence: should be “affects” rather than “effects”, right? In such situations, I often bow to the “experts” and use the word “impacts”. Overall a good article though, and this is admittedly a petty reply.

    Let the flames begin…

      1. And to confuse the situation further, even native English speakers blow this and forget about the all too common special cases.

        Affect as a noun… “Mr. Smith’s affect today appeared sad, as demonstrated by his facial expressions.”

        Effect as a verb… “A longer wrench will effect greater torque on the bolt.”

  7. Regarding crypto backdoors: it’s always been an intelligent move to assume government agencies have faster-than-brute-force methods for “secure” ciphers. You know they’ve got the infrastructure and compute power for such methods…

  8. Government officials seem to fail to understand that putting backdoors in mainstream products just means the real criminals they want to catch will use something else… Not to mention that it is easier for law enforcement to attack a suspect’s computer than it is to try to decrypt the traffic after it has been sent…

    These backdoors aren’t about watching criminals; they are about mass surveillance of the whole population. And if anybody thinks the government knowing everything leads to less crime, they don’t understand people or history…

  9. I tell you all freely: true security is a misconception, and surveillance is great, provided it is implemented correctly. The issue is the sense of security, rather than the security methods actually implemented. Regarding the security of radio devices in consumer environments, it is rather poor, due to lack of penetration.
