36C3: Open Source Is Insufficient To Solve Trust Problems In Hardware

With open source software, we’ve grown accustomed to a certain level of trust that whatever we are running on our computers is actually what we expect it to be. Thanks to hashing and public key signatures at various points in the development and deployment cycle, it’s hard for a third party to modify source code or executables without us being able to easily spot it, even if they travel through untrustworthy channels.
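
To make the software side concrete, here is a minimal sketch of the kind of hash check that package managers and release pipelines run automatically. The file name and expected digest below are made up for illustration; public key signatures extend the same idea by letting you verify the published digest against a key you already trust instead of trusting the download channel.

```python
import hashlib

# Hypothetical values: a real project would publish this digest alongside the
# release, ideally signed with the project's private key.
EXPECTED_SHA256 = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large binaries don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of("firmware-v1.2.3.bin") == EXPECTED_SHA256:
    print("Digest matches the published value.")
else:
    print("Mismatch: the file was corrupted or modified somewhere along the way.")
```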

Unfortunately, when it comes to open source hardware, the number of steps and parties involved that are out of our control until we have a final product — production, logistics, distribution, even the customer — makes it substantially more difficult to achieve the same peace of mind. To make things worse, to actually validate the hardware at the chip level, you’d ultimately have to destroy it.

In his talk at this year’s 36C3, [bunnie] gave a detailed look at several attack vectors we could face during manufacturing. Skipping the obvious ones like adding or substituting components, he focuses on highly ambitious and hard-to-detect modifications inside an IC’s package with wirebonded or through-silicon via (TSV) implants, down to modifying the netlist or mask of the integrated circuit itself. And these aren’t just theoretical or “what if” scenarios, but actual possible options — of course, some of them come with a certain price tag, but in the end, with the right motivation, money is only a detail.

Sure, none of this is particularly feasible or even of much interest for a blinking LED project, but considering how more and more open source hardware projects are emerging to replace fully proprietary components, especially with a major focus on privacy, a lack of trust in the hardware involved along the way is surely worrying, to say the least. At this point, there is no perfect solution in sight, but FPGAs might just be the next best thing, and the next part of the talk is presenting the Betrusted prototype that [bunnie] is working on together with [xobs] and [Tom Marble]. That alone makes the talk worth watching, in our view.

29 thoughts on “36C3: Open Source Is Insufficient To Solve Trust Problems In Hardware”

  1. “At this point, there is no perfect solution in sight, but FPGAs might just be the next best thing, and the next part of the talk is presenting the Betrusted prototype that [bunnie] is working on together with [xobs] and [Tom Marble].”

    Neither was there one for software. Everything is “best effort”.

      1. I think it means that no, you can’t solve this problem 100%, but if you make a good effort, you can get closer to 100% and that is still preferable to the current state of things.

  2. The problem with hardware is that validating that something really is what it states on the can is frankly impossible in the vast majority of cases.

    Open source is frankly not a solution.

    Since you have little to no means of validating that something actually is what it is supposed to be, you can’t simply toss all the documentation out into the wild and proclaim that it will be safer.

    From an error standpoint, “yes”, it will be safer. More people can look for flaws in the design, or potential backdoors.

    But when you want to put a chip in production and have it secure, you preferably want no one to know the design, since then any possible assailant will need more time to reverse engineer it, figure out a way to exploit it, and hide their exploit. Not to mention also put their exploited version into production and then contaminate your supply lines.

    If you make your design open source, then you kinda just saved them a lot of time….

    Now, some chips remain in production for long enough for this type of security to wear off.
    Would be rather annoying if a CPU, micro, DSP, FPGA, etc., were only in production for 2 months before going totally obsolete…

    Then we have the other type of security. And that is to know your source.
    If you buy a PIC32 micro from eBay, or Amazon, or AliExpress, or some random dude on the street, then you might have a genuine PIC32, or you might have a cheap clone, or something else entirely, who knows…
    If you buy the same PIC32 from Digi-Key, Mouser, RS Components, or any other reputable store, you can be more certain that you indeed have an actual PIC32 from Microchip.
    But you can do one better than this and go directly to Microchip, though then you might not get the best price, and you’ll likely need to buy a lot of them.

    Now, this isn’t only true for Microchip, but it is true for practically any semiconductor vendor.
    Though, you risk paying a premium for factory pickup if you are that paranoid…

    This effectively means that you have no supply line to speak of here, but it is a lot of hassle. (Technically, you still have a supply line: can you trust your employees? Or that you kept a watchful eye on the box during transport? Fabs are often located in the middle of nowhere, so expect a long journey…)

    Though, the chip itself could have an exploit in it already… Yes, the hypothetical assailant could have contaminated the genuine source itself… Sounds far-fetched, but I have heard of it happening a few times.

    But checking that a chip indeed is what it claims to be is hard.
    Whether it is open source or not doesn’t really matter.

    Now, an open source chip does have the advantage that if you find out that vendor A has a flaw, then you can always go to vendor B or C. But this also opens the door for more supply line attacks….

    In the end, open source doesn’t really make a chip any more trustworthy than any other chip.
    What matters is the supply line, and the honesty of the manufacturer.

  3. You don’t even need to modify the hardware in any way. Since the specs and the design are public information, all you do is poke holes in it and then tell nobody else what you found. Wait till everyone’s using it, and exploit the vulnerabilities.

    The trick works because the people who design the chip don’t have infinite resources, time, or interest to keep changing it – they have to put it out for use some day – so when the design is committed to silicon and made in the millions, it becomes difficult or impossible to fix any leftover errors.

    At which point, the fact that everyone can see exactly what’s inside the chip and how it’s made becomes a liability.

    1. This goes for ANY hardware. Except for hardware that’s potted in unobtanium, if you don’t mind destroying a physical object, you can unlock pretty much all of its secrets. The argument that “giving them the source code makes it easier for them to find things to exploit” has been one of the arguments against open source software from the beginning. It was nonsense for software, and it’s nonsense for hardware. The inventor of the Yale lock probably didn’t like the idea of telling the world how his locks worked, but a) without telling how it worked, no patent protection, and b) anybody with a grinder could figure it out for the cost of one lock. And yet, 150 years later, we still lock up most of what we want to keep others out of with Yale locks. Why? Because most of the time, it keeps people from walking in and taking what they want – they have to put in some effort, and expose themselves more readily to being caught.

      With ANY hardware, open or not, the buyer is at risk of being sold a cheap knock-off that doesn’t work as well as what he thought he bought, or doesn’t work at all. Just ask the guy who noticed (after it stopped running) that the watch he just bought is a “Rolax”. How does open source affect this? I do understand that it makes it easier for fakes that kind-of work to be made, but we already have people duplicating the masks of chips, so they don’t even need to know how something works to counterfeit it.

      How do you know, when you buy food labeled “organic” at a premium, that the producer actually followed the labeling laws?

      How do you know that the airliner you just got on had its design changes thoroughly scrutinized before it got its type certificate? Oh wait – that wasn’t a counterfeit.

      It always comes down to trust. Open source has nothing to do with it.

      1. And as these pages have pointed out repeatedly, closed-source chips get ripped off and copied, and it’s possible that overseas production facilities have made functional changes to hardware. Open source isn’t the problem. It does make verification easier for people who aren’t the designers, though.

      2. > It was nonsense for software

        You say so, but it’s not. Not just anybody has the resources or skill to reverse-engineer closed source hardware or software, so the number of (bad) people poking holes in it is dramatically reduced.

  4. What Bunnie is saying is that with open source software, we were collectively actually able to DO something about the many ways that software can be adulterated. But (as he says) this affected both open and closed source software; it was a problem with software in general.

    Now, because we have the same problems with hardware (actually, we’ve had problems with counterfeit hardware for longer than there has BEEN software), he sees this as a problem. It’s a problem that no longer exists in software, BECAUSE IT’S BEEN SOLVED.

    And, yes, it is a problem. But it’s not a problem with open source hardware; it’s a problem that comes up whenever you’re dealing with people.

    If anything, the tricks we’ve learned to ensure that the software we trust is actually trustworthy, were a major technological advance that helped the whole industry. So where are the hardware people with this? Where are the technological advances that make fake or adulterated hardware easier to spot? Where is the hologram on my engine block? I’m still having to rely on articles where somebody shows that the fake chip is marked with ink, while the genuine one is etched or lasered.

    1. With software back in the day, people were muttering about reliability, durability, and warranties; then a bogeyman appeared in the shape of viruses, so software houses started guaranteeing their software virus-free. So we’re at that place with hardware, I guess.

  5. Meh, I have heard the same things said against open source software:
    “you can’t validate it!” “you can’t trust it!” “It won’t solve security issues!”

    My answer is yes you can, yes you can, and yes it can.

    Locking things behind binary blobs has become all too commonplace in hardware, and it has led to hidden vulnerabilities that have now become known, such as Spectre and Meltdown, and such vulnerabilities have so far been hard to patch.
    Closed source doesn’t make one “more secure” by having things closed off and locked in a safe; someone will break in, as it’s a tempting target.
    Yes, this is hardware, not software, and it will be harder to tackle than software, but it can be done.
    The only argument against it is that it’s not been fully tried before, but there are ways to set some standards without going draconian.

  6. Open source is clearly not sufficient, given that any actual piece of hardware is effectively (and unavoidably) analogous to a compiled binary, with a ‘compiler’ that costs a zillion bucks and lives at TSMC, so you can’t do a trivial ‘compile the source yourself and see if it matches the provided binary’ check (a minimal sketch of what that check looks like on the software side follows at the end of this comment).

    The question, to my mind, is whether it is necessary but not in itself sufficient; or whether there are sufficiently clever mechanisms to prove that a part is what it says it is without providing a description of ‘what it is’ that is effectively ‘open source’ (albeit possibly without a permissive license).

    Naively, it’s a lot harder for me to imagine verifying a piece of hardware’s behavior if I’m not even told exactly what its behavior is supposed to be. Merely having a description of what it should be and do isn’t enough; there are all sorts of sneaky ways to hide things with exotic trigger conditions or simply stash them in places where analysis would be destructive or expensive. But things certainly seem even less hopeful if you aren’t even allowed to know whether something is or isn’t actually suspicious or anomalous.
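
    For contrast, the “compile it yourself and compare” check mentioned above is cheap to run on the software side once a build is reproducible. Below is a minimal sketch of just the comparison step, with hypothetical file paths; the hard part in practice is making the build deterministic, not the diff.

    ```python
    import hashlib

    def sha256(path):
        """Return the hex SHA-256 digest of a file."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Hypothetical paths: one binary rebuilt locally from the published source,
    # one shipped by the vendor. With a deterministic build the digests must
    # match bit for bit; any difference means the source, the toolchain, or the
    # distributed binary is not what you think it is.
    local = sha256("build/firmware.bin")
    vendor = sha256("downloads/firmware.bin")

    print("reproducible build" if local == vendor else "MISMATCH")
    ```

    There is no equivalent step for silicon: the “build” happens once, at a fab you don’t control, and re-running it yourself is not an option.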

    1. You have two related but opposite problems: one is to verify that your chip fulfills the specification, the other is to verify that there is no functionality outside the specification. The latter is obviously much harder.

      1. Not harder. Flat out impossible. Any kind of modern computing hardware is sufficiently complex that it’s effectively impossible to test against every possible operating circumstance that could occur, any one of which might be engineered to trigger activation of some hidden functionality lying dormant and 100% undetectable before that. Only perfectly reverse-engineering the silicon structure (and the data embedded within) of a chip could get you that, but that’s as unfeasible as it is an imperfect “solution” – congrats, you just destroyed the part you perfectly validated; are you sure the next one is perfectly identical…?

        Structure-agnostic hardware (such as FPGAs) is one step closer to that ideal, exactly because it is supposed to work based on structure you introduce into it instead of having it factory built-in (and potentially tampered with). It’s still a pipe dream though – you may have just made any potential attacker’s job far more difficult, but they still may have tampered with your FPGA before it got to you in some way that will effectively result in you getting pwned as soon as you start using it; it can still contain any number of “extra” features undermining your security – not giving a crap about internals and just watching any I/O, then exfiltrating anything that looks like keyboard input, is just a trivial example.

        Ultimately, I’m not ruling out that something might come around that makes hardware tampering effectively impossible or trivial to detect (perhaps something like single-gate- or even transistor-level smart sand that you can direct to assemble itself into a CPU – effectively letting you emulate current silicon foundries at home – where each component is both individually testable and too basic to hide anything in), but I suspect it will look nothing like what we currently have… the current stuff is irredeemable if you have to contend with chip-level tampering, possibly directly at fabrication.

      2. Uh, the latter is impossible. “Make sure this doesn’t do anything it isn’t supposed to.” Just how the heck are you supposed to do that? Try every possible stimulus? There are a lot of possible stimuli.

  7. The broad strokes of this ‘Betrusted’ enclave remind me a bit of the design of the old Sectera Edge. That device had a (relatively speaking) full-featured WinCE-based side that was hardened up but not strictly trusted, and then a secondary, much more rudimentary display for the part of the system that was verified to a higher standard.

    Obviously pricey General Dynamics Mission Systems DoD gear is…not exactly…a hotbed of F/OSS design sensibilities; but the design principle of implementing a minimum-necessary featureset with an avoidance of complex components for the highest trust operations looks conceptually similar.

  8. It would be fairly easy to validate simple software on simple hardware, say C64, Spectrum, or Apple II levels of complexity.

    The answer, then, is simplicity for full security. The more stuff you add, the harder it becomes to plug all the security holes.

    Just my 2 cents.

    1. You’d make even those more complex to construct, though, and increase their size, because you’d want to replace the ULA in the Spectrum with TTL, and the same for the custom chips in the C64… (Later Apples could be similar, but early versions of the II might be more discrete.)

      1. Just wanna make the amused remark that that is exactly what the Spectrum clone I grew up with did – it had no ULA but a handful of TTL chips instead; naturally, more out of necessity due to parts availability in an East-European communist country at the time* rather than any security considerations… :)

        * Funnily enough, manufacturing our own Z80 (+PIO, +SIO, etc.) clone was apparently no problem at all somehow, but that’s just how things worked…

      2. Yes, I think simple software and simple hardware, preferably of the TTL variety with a 6502 or Z80 or something similar, would be ideal.

        Parts should be freely available so everyone can make their own computer.

        In this way, no manufacturer, either software or hardware, can lock up the garden.

          1. But the point is, we’re not living in an age with the limitations of the 1980s. You can make a chip look exactly like a 6502, but have several extra CPU cores (mainly to bring the power consumption up to a realistic level) and a transmitter built in that calls home to mama at 4 AM every day.

          Saying “things are too complicated” doesn’t fix anything. Things are complicated because they CAN be. If there is room and power budget available for a feature, and people value that feature, the feature will be, regardless of complexity. How much complexity is there in having my phone respond to “hey, Google”? Quite a bit.

            1. Complex systems are like hydras with thousands of heads. As soon as one is cut off, another grows somewhere else.

            Features and software inverted pyramids, built with ever-increasing levels of complexity, layer upon layer, brought us to the current situation, I believe.

            No one understands the entire PC nowadays, hardware and software, whereas 30 years ago that was still possible.

            I read somewhere that the CIA or NSA used System 6 Unix for many years, because they could fully understand the system.
