Apple Kernel Code Vulnerability Affected All Devices

Another day, another vulnerability. Discovered by [Kevin Backhouse], CVE-2018-4407 is a particularly serious problem because it is present throughout Apple’s product line, from the MacBook to the Apple Watch. The flaw is in the XNU kernel shared by all of these products.

This is a buffer overflow issue in the error handling for network packets. The kernel expects those packets to contain a header of a fixed length but doesn’t check before copying it, so an oversized header writes past the end of the buffer. The fact that Apple’s XNU kernel powers all of their products is remarkable, but issues like this are a reminder of the potential downside to that approach. Thanks to responsible disclosure, a patch was pushed out in September.

Anatomy of a Buffer Overflow

Buffer overflows aren’t new, but a reminder of what exactly is going on might be in order. In low-level languages like C, the software designer is responsible for managing computer memory manually. They allocate memory, tagging a certain number of bytes for a given use. A buffer overflow is when the program writes more bytes into the memory location than are allocated, writing past the intended limit into parts of memory that are likely being used for a different purpose. In short, the overflowing data gets written into memory that may hold other data or even executable code.
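To make that concrete, here is a minimal (and deliberately broken) C example of the idea described above; it has nothing to do with the XNU code itself:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(10);           /* 10 bytes are allocated... */
    if (buf == NULL)
        return 1;

    /* ...but 26 bytes (25 letters plus the terminating NUL) are
     * written, so the last 16 bytes land in memory that belongs to
     * something else entirely. */
    strcpy(buf, "ABCDEFGHIJKLMNOPQRSTUVWXY");

    free(buf);
    return 0;
}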

With a buffer overflow vulnerability, an attacker can write whatever code they wish to that out-of-bounds memory space, then manipulate the program to jump into that newly written code. This is referred to as arbitrary code execution. [Computerphile] has a great walk-through on buffer overflows and how they lead to code execution.

This Overflow Vulnerability Strikes Apple’s XNU Kernel

[Kevin] took the time to explain the issue he found in further depth. The vulnerability stems from the kernel code making an assumption about incoming packets. ICMP error messages are sent automatically in response to various network events. We’re probably most familiar with the “connection refused” message, indicating a port closed by the firewall. These ICMP packets include the IP header of the packet that triggered the error. The XNU implementation of this process assumes that the incoming packet will always have a header of the correct length, and copies that header into a buffer without first checking the length. A specially crafted packet can have a longer header, and this is the data that overflows the buffer.
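The pattern [Kevin] describes boils down to something like the following sketch. To be clear, this is an illustration of the bug class, not the actual XNU source; the names and sizes here are made up:

#include <string.h>

#define MAX_QUOTED_HDR 64   /* hypothetical size of the fixed buffer */

struct icmp_error_msg {
    unsigned char quoted_hdr[MAX_QUOTED_HDR];
    /* ...other ICMP fields... */
};

void build_icmp_error(struct icmp_error_msg *msg,
                      const unsigned char *pkt, size_t hdr_len)
{
    /* Vulnerable pattern: hdr_len is taken from the attacker's packet
     * and trusted, so a crafted packet overflows quoted_hdr:
     *
     *     memcpy(msg->quoted_hdr, pkt, hdr_len);
     *
     * Patched pattern: check (or clamp) the length before copying. */
    if (hdr_len > sizeof(msg->quoted_hdr))
        hdr_len = sizeof(msg->quoted_hdr);
    memcpy(msg->quoted_hdr, pkt, hdr_len);
}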

Because of the role ICMP plays in communicating network status, a closed firewall isn’t enough to mitigate the attack. Even when sent to a closed port, the vulnerability can still trigger. Aside from updating to a patched OS release, the only mitigation is to run the macOS firewall in what it calls “stealth mode”. This mode doesn’t respond to pings, and more importantly, silently drops packets rather than sending ICMP error responses. This mitigation isn’t possible for watchOS and iOS devices.

The good news about the vulnerability is that a packet, malformed in this way, has little chance of being passed through a router at all. An attacker must be on the same physical network in order to send the malicious packet. The most likely attack vector, then, is the public WiFi at the local coffee shop.

Come back after the break for a demonstration of this attack in action.

So far, the vulnerability is only known to crash machines, as seen above. Because of the nature of the problem, it’s likely that this vulnerability will eventually be turned into a full code execution exploit. [Kevin] informed Apple of the issue privately, and they fixed the issue in September updates of macOS and iOS.

81 thoughts on “Apple Kernel Code Vulnerability Affected All Devices”

  1. >>> So far, the vulnerability is only known to crash machines, as seen above
    If you can follow an exact sequence of events that will cause an OS to crash, that is the foot in the door to owning that machine. Depending on the OS it gets far more complex from there: you then probably need to predict the address space layout randomization (ASLR). But provided there is enough entropy in the randomness pool at boot time, that should be extremely difficult.

      1. On paper, yes, but on an embedded system without a hardware source of high-quality entropy, the first few pages on initial power-on, or even keys generated at the very first power-on when the device is being installed, can have issues.

    1. In this specific case, the reason the machine is crashing is that he is literally destroying the heap. There’s almost certainly a way to get remote code execution using this, which is why they’re calling it an RCE bug.

    2. I doubt that you can get into the machine, because once a panic happens the CPU can no longer execute instructions.

      An easy way to find out is to have a movie or an MP3 playing and then crash it: the movie or MP3 stops.

      Apple designed that panic explicitly to ensure the Mac gets rebooted.

      In the old days on classic Mac OS a Type 11 system error was the equivalent, and pushing the debugger button and entering “g finder” or “go finder” would let you get back to the desktop.

      By having the panic work the way it does there is no way to break out of the panic and continue working; the session was finished and you had to force a restart.

      1. Maybe my wording was unclear. I did not say that after a panic you could control the machine. What I was trying to say is that a panic is an indication that, with the right sequence of bytes, the panic can be avoided and the machine can be taken over. A panic is an indication that the OS has been jumped off its predicted railway tracks; it indicates that it should be possible to jump the train onto a new track before it crashes (panics).

  2. After all that, the vulnerability is a buffer overflow and not a stack overflow. The video is about a stack overflow.

    These are different things.

    A buffer is “first in first out” (FIFO) whereas a stack is “first in last out” (FILO).

    1. “A buffer is “first in first out” (FIFO) whereas a stack is “first in last out” (FILO).”…

      No! A buffer (computer memory) is a place where one piece of software or hardware builds data before passing it on to another piece of software or hardware. The whole point about a buffer is that the data is in an unstable state (only partly valid) until the builder of the buffer passes it on to the next piece of software or hardware. In contrast data in a FIFO is ***ALWAYS*** valid and data can be read from a FIFO asynchronously to being written to it. This is why FIFOs work very well in systems that have multiple asynchronous tasks talking to each other e.g. mainline and interrupt handlers.

      1. A buffer, say an old-fashioned printer buffer for the easiest example, has new data written at one end, and the oldest data read from the other. Of course it can loop round, but the principle remains.

        Data’s validity is a whole higher level. FIFO vs LIFO is the point he was making, and he was right, buffers are FIFO. Stacks are LIFO.

        Buffers and stacks can contain anything, “validity” might not even apply. Indeed on a machine level it certainly doesn’t, it’s something programmers need to worry about. Again, all the data in a printer buffer is valid. A buffer can be used to cope with the reading and the writing being done at different speeds.

        You’re thinking of “temporary storage”, perhaps. Whatever, it’s completely sideways to Rob’s point.

        1. >buffers are FIFO. Stacks are LIFO.

          I hope you also realize the words “buffer” and “queue” are not interchangeable.
          A queue is FIFO. A buffer is just an allocated piece of memory designed to hold temporary data.
          There is no strict rule on how the data must be accessed or written; it could simply be a randomly accessible char array.

        2. @Greenaum
          Rob has no point other than to make noise to attract attention.

          I’m sorry but the whole point about a buffer is that it allows two or more systems to operate on a given piece of data without interfering with each other. A simple example of this is TASK A puts data into a buffer and signals TASK B when it has finished, then TASK B uses the data in the buffer and signals TASK A when it has finished.

          Sometimes TASK A and B won’t signal each other but will instead use some kind of lock (maybe a semaphore or even a byte within the buffer).

          Sometimes TASK B will write a result back into the buffer before signalling TASK A that it has finished.

          The fundamental property of a buffer is that at some point part of it will be invalid to one or more systems that are using it (e.g. while TASK A is part way through writing to the buffer, TASK B sees an invalid buffer – it doesn’t know which parts are valid and which parts are not until TASK A says it has finished).

          A FIFO is a very special type of buffer which maintains information about which parts of the buffer are valid and which parts are not. It has the very useful property that one TASK can safely write to it while another TASK can read from it WITHOUT either TASK interfering with the other. This property only holds true where there is only ONE data PRODUCER and ONE data CONSUMER. When more than one data producer needs to write to a given FIFO then some kind of lock is required for the write end of the FIFO. When more than one data consumer needs to read from a given FIFO then some kind of lock is required for the read end of the FIFO. Often it is better to use multiple FIFOs without locks than have multiple TASKs try to shoehorn everything through one FIFO.

          A stack is a special type of TEMPORARY storage system. Its primary advantage over other storage systems such as the HEAP is its incredibly low overhead when it comes to allocating and releasing memory for reuse. Although memory can be allocated from the stack and used as a buffer, the stack itself is NOT a buffer. Some people will argue that pushing parameters onto a stack before calling a function qualifies it as a buffer, but this is really just an example of needing to quickly allocate space somewhere to hold the actual parameters needed by the function being called, and what better place than a very low-overhead memory management system such as the stack?
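          For what it’s worth, the one-producer/one-consumer FIFO I’m describing looks roughly like this in C (a quick sketch, not production code, and it ignores memory-ordering issues on multi-core systems):

          #include <stdbool.h>
          #include <stdint.h>

          #define FIFO_SIZE 64                       /* power of two keeps wrap cheap */

          static volatile uint8_t  fifo_data[FIFO_SIZE];
          static volatile unsigned fifo_head;        /* only the producer writes this */
          static volatile unsigned fifo_tail;        /* only the consumer writes this */

          bool fifo_put(uint8_t byte)                /* producer side */
          {
              unsigned next = (fifo_head + 1) % FIFO_SIZE;
              if (next == fifo_tail)
                  return false;                      /* full */
              fifo_data[fifo_head] = byte;
              fifo_head = next;                      /* publish the new byte last */
              return true;
          }

          bool fifo_get(uint8_t *byte)               /* consumer side */
          {
              if (fifo_tail == fifo_head)
                  return false;                      /* empty */
              *byte = fifo_data[fifo_tail];
              fifo_tail = (fifo_tail + 1) % FIFO_SIZE;
              return true;
          }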

        3. @Greenaum
          “Buffers and stacks can contain anything, “validity” might not even apply. Indeed on a machine level it certainly doesn’t, it’s something programmers need to worry about”…

          No this is not the case.

          Consider a simple program where a buffer contains only one 32-bit integer. It is updated in the mainline and acted upon by an interrupt handler. Now consider what happens when this is done on an 8-bit CPU, where copying a simple 32-bit integer takes several instructions. If the interrupt is triggered while the integer is being updated, part of it will be valid while part will not.

          Now consider a string that needs to be written by an interrupt handler. If the interrupt is triggered while the string is being copied to the buffer, part of the buffer will be valid and part will not.

          OK, so instead of just a string, let’s say the buffer holds the length of the string and the string itself. Now we can program the interrupt handler such that it does not output the string if the length is 0. This means that we need to set up the buffer in a particular order: we cannot set the length of the string before copying the string into the buffer. And because we need to set the length as the last part of the operation, you can see that there are clearly issues concerning the validity of the entire buffer and not just parts of it.
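          In code, the first example looks something like this (a sketch for a hypothetical 8-bit micro; the exact fix depends on the part):

          #include <stdint.h>

          volatile uint32_t shared_value;    /* the one-integer "buffer" */

          /* Mainline code updates the value.  On an 8-bit CPU this assignment
           * compiles to several single-byte stores, so an interrupt arriving in
           * the middle sees a mix of old and new bytes: a partly valid buffer.
           * The usual fix is to briefly disable interrupts around the write. */
          void mainline_update(uint32_t new_value)
          {
              shared_value = new_value;
          }

          /* The interrupt handler acts on whatever bytes happen to be there. */
          void interrupt_handler(void)
          {
              uint32_t snapshot = shared_value;
              (void)snapshot;                /* ...do something with it... */
          }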

      2. A buffer doesn’t imply concurrent access, and there is no rule for how it is used; it can be fixed-format, FIFO, LIFO, whatever.
        A FIFO isn’t always valid, whatever your definition of valid is. It also doesn’t imply concurrent access.

        Typical internet crap.

        1. “A buffer doesn’t imply concurrent access”…
          Then either it’s not a real buffer, or your definition of concurrent needs to be revisited.

          Perhaps you are thinking along the lines that FUNC A creates a buffer, writes to it then calls FUNC B and passes the buffer to it, and that FUNC A has been suspended until FUNC B returns?

          This is just a shorthand way of TASK A creating a buffer, writing to it, creating TASK B, signalling TASK B that the buffer is ready for it and suspending until TASK B finishes, then TASK A releases the buffer.

          The function of the buffer is the same in both cases: FUNC A / TASK A needs to build a valid buffer before FUNC B / TASK B can use it, and FUNC A / TASK A cannot use it while FUNC B / TASK B is using it.

          It is possible to implement a FIFO in such a way that it is NOT always valid. I have seen this done many many times. But this is like watching someone using a floating point number as a character index into a string. It is also possible to implement a FIFO in such a way that it is ALWAYS valid (without resorting to locks) when two concurrent tasks / threads / cores access it simultaneously (as in exactly the same time). The only restriction, as I have already mentioned, is that there is only one reader and one writer for any given FIFO.

          This is not “typical internet crap” this is real programming. Perhaps your view of buffers and FIFOs is based on some language or library?

        2. @[ospr3y]

          Hacking (the malicious type of hacking) is a whole new level up from a programmer.

          You’re a “programmer”, ospr3y, just as you state; you have absolutely no chance of being a hacker or even of comprehensively understanding hacking (of the malicious kind).

          You have clearly demonstrated that you are way out of your depth in understanding hacking, and way out of your league, as your understanding goes no further than the abstractions you use to “program”. You have absolutely no understanding of what your code does at a hardware level because you don’t accurately understand how hardware “actually” works. If you don’t know this then you can’t hack, because that is exactly what malicious hacking is about.

          Have a look at a description of Meltdown and Spectre:

          Meltdown and Spectre. Vulnerabilities in modern computers leak passwords and sensitive data. Meltdown and Spectre exploit critical vulnerabilities in modern processors. These hardware vulnerabilities allow programs to steal data which is currently processed on the computer.

          People here have been trying to point out some facts about how code works at a hardware level. They have been using hardware terms like STACK, FIFO, LIFO, PUSH, POP and STACK POINTER, as these existed first at a hardware level.

          In many abstractions the same terms have been reused and have a slightly similar meaning but are by no means the same, especially when it comes to hacking.

          In your last post you comment about functions. There is no such thing as a function at a hardware level. Functions don’t exist there.

          At a hardware level you have subroutines. You call the subroutine with the CALL instruction and return from the subroutine with the RET instruction. There are no variables (other than binary registers) at a hardware level so the processor needs to know what address in memory to return to.

          This is accomplished by the CALL instruction placing the address of the next op-code fetch onto the STACK at the location indicated by the STACK POINTER; the STACK POINTER is then decremented. The RET instruction then POPs the address off the STACK at the location indicated by the STACK POINTER back into the PROGRAM COUNTER, and the STACK POINTER is then incremented.

          Subroutines can CALL other subroutines in a nested fashion, so the RETurn address has to be the one from the last CALL rather than the first, and that is why a STACK is LIFO. The STACK can be and is used for lots of other things.

          You can interfere with this process (hack it), and that is what the video is about. Unfortunately the video starts off talking about a STACK overflow and ends up describing the effects of a STACK POINTER overflow (or more specifically, a binary value overflow that occurs when the STACK POINTER is used in conjunction with an offset). These are not the same, even though they are often used in conjunction.

          And since you had a go at me here is some of my history.

          I started programming with the likes of COBOL, FORTRAN and Pascal in the 1970s. I quickly moved to Z80 machine code and ASM. At the time my main interest was digital electronics. I qualified as an electronics engineer in both analogue and digital electronics, but by then I had more interest in programming, so I ventured strongly into microprocessors and later, when they became available, microcontrollers. I have worked with all the architectures: Harvard, von Neumann, TTA, CISC, RISC, orthogonal, etc.

          I have programmed three and a half decades of micro-controllers. In that time I have probably learnt 100 other languages (abstractions) that range from micro-controllers and PLCs through to web-based programming like PHP and JavaScript. I have worked in network security, firstly LAN/WAN and then, when we had an “internet”, web servers, and this is where the knowledge about hackers comes from. The other benefit of this experience is an understanding of many, many protocols, from the AT commands that have existed since the days of 300 baud modems right through to modern internet protocols: TCP/IP, HTTP, IoT (there are many).

          Through my knowledge of hardware I have also become proficient at VHDL (VHSIC Hardware Description Language), where you code silicon “what to be” rather than “what to do”.

          If I were to tease I could ask you what your education and experience are but quite frankly, the limits of your knowledge are already quite obvious to me.

          I will give you one pro tip though –

          Here at Hackaday you never know who is hiding behind an innocuous screen name. There is a wealth of knowledge here and I have learnt a lot.

          So if you want to do as I do and learn from the wealth of knowledge here, then a courteous approach is essential.

          No matter how much you know, there is always someone here that will dwarf your knowledge in their specialized area.

          To me that’s a good thing, as it’s a great opportunity for me to learn new things.

        3. @Rob
          I love the way you try to convince us that you are somehow some kind of genius. A real genius doesn’t need or try to convince anyone. You are so wrapped up in trying to show people how clever you are and how complicated things “really” are that you are unable to accept that “yes sometimes things really are simple”.

          “You have absolutely no understanding of what your code does at a hardware level, blah, blah, I have programmed three and a half decades of micro-controllers blah ,blah, blah, what your education and experience are, blah ,blah, blah”…

          Rob, you’re a troll and I won’t bite.

        4. @[ospr3y]

          There you go again. No facts, just another character assassination attempt.

          It must be hard on you to feel so bad about yourself that you need to demean others to feel better about yourself.

          I have been here on hackaday for probably a decade now and I have a hackaday.io membership with several projects.

          Many people here know my skill-set.

          There is only one person who doesn’t believe me ospr3y.

          And there is only one person that supports your line ospr3y.

          Have you noticed that no one is chiming in to support your “theories”.

        5. @rob
          “There you go again. No facts, just another character assassination attempt.
          It must be hard on you to feel so bad about yourself that you need to demean others to feel better about yourself.”…

          What, you feel demeaned?! Someone of your great stature and expertise – surely not?!

          Rob, you keep trying to make this about me and my abilities. I don’t want to influence people by who I am or what I’ve accomplished; I want them to understand what I say and decide for themselves if there is anything valid to learn. The fact is I jumped in when I saw you were talking SH!T about someone else’s presentation. Regardless of my technical competence, the presentation is easy to follow and, provided the viewer accepts that the names of things mentioned are correct, everything follows. All you have done is confuse the issue in an attempt to elevate your own standing in this community.

          “There is only one person who doesn’t believe me ospr3y.”…

          That’s because I KNOW you’re talking SH!T (the others are probably just sitting back with a can in one hand and a bowl of popcorn in the other having a laugh as this unfolds).

          “Have you noticed that no one is chiming in to support your “theories”.”…

          I don’t see thousands of people jumping in and saying “hey Rob is right the video is crap that whole buffer overflow thing was a hoax”.

  3. I don’t understand why we’re still using C-style pointers and buffers. Haven’t we learned, through decades of vulnerabilities mostly based on this same inane programming practice, that the performance gains aren’t worth it? Maybe they were in the era of wire-wrapped backplanes clocked at sub-MHz speeds, when nothing important ran on computers.

    There are languages (most of us learned on one) which explicitly store the length of a string as a separate value, and explicitly check that it fits somewhere before trying to put it there. When better methods exist, what’s making us cling to the bad methods?
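    Even staying in C, the difference is easy to illustrate (a rough sketch):

    #include <stdio.h>
    #include <string.h>

    /* The classic C idiom: the callee has no idea how big 'dst' really is
     * and simply trusts the caller.  Too-long input overflows the buffer. */
    void unchecked_copy(char *dst, const char *src)
    {
        strcpy(dst, src);
    }

    /* The "carry the length around and check it" approach: snprintf never
     * writes more than dst_size bytes and (for dst_size > 0) always
     * NUL-terminates, truncating instead of overflowing. */
    void checked_copy(char *dst, size_t dst_size, const char *src)
    {
        snprintf(dst, dst_size, "%s", src);
    }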

    This whole thread applies: https://hackaday.com/2018/09/22/one-mans-disenchantment-with-the-world-of-software/

    And I have it in my head that there was another article in the last few months, about programming languages being intentionally elitist, intentionally dangerous, intentionally obtuse, to feed programmers’ convictions that they are exceptional and powerful and can avoid all these traps that the language sets for them. I can’t find that link right now. Anyone?

    1. It’s not the language that is the problem, it’s poor coders. The coders they pump out today are mostly useless.
      They should make it mandatory in schools to learn assembly first and to code in it well; then and only then move them on to other languages. Then and only then do you truly understand how to write effective code.

      1. No.

        If the languages are so dangerous that human coders are incapable of writing good code, then the languages are not appropriate for humans to use. You’re demanding superhuman coders, which isn’t realistic.

        We’ve proven, over decades of experience and hundreds of thousands of security vulnerabilities, that superhuman coders are not realistic. Stop insisting that we just need better coders, and fix the languages instead.

          1. For those who are curious, his answer is basically “yes, Rust could do this, but you lose the safety benefits.” But once you have that “escape hatch” (as he put it) you still have the overall problem: a coder has to know when to use the escape hatch, how to use it properly, and when it can be avoided. And that *still* is going to lead to subtle bugs. The problem is always going to be in the coder whenever you’re in ‘bare metal’ land.

            Of course… it’s even *worse* than all of that because at the bare metal level, *the processor isn’t even the only one accessing the buffers*. You could have a buffer overflow interaction leveraging a DMA engine, for instance, and no amount of language checking will ever help you there. Device access controls help somewhat there in controlling the attack surface, but it’s super hard to predict what smart hackers will manage to figure out.

        1. I think his point was that some programmers have no clue about the effect of their code, which leads to unoptimized code.
          It’s like trying to teach a physicist physics without the math behind it. Sure, he can learn it in the approximate way that physics works, and he doesn’t need the rigid math behind his formulas on a day-to-day basis, but it helps to understand the tool you are using.

          Every guy I know/knew that was really good at something could explain to me in detail how everything worked from top to bottom. I’m not saying you have to be an expert in assembly to program in any other language, that is another job, but at least know what you are doing. Don’t look at a computer as if it were a black box where you throw your program in and wait for a response.

          As with everything, there is a history behind each language, and due to backwards compatibility we always build on top of the last known technology, even if it is outdated, so there is probably room for improvement in this area too, but my knowledge is insufficient to have an opinion.

        2. “If the languages are so dangerous that human coders are incapable of writing good code, then the languages are not appropriate for humans to use. You’re demanding superhuman coders, which isn’t realistic.”…

          No, you don’t need “superhuman coders”. Look about you: there is so much stuff out there that doesn’t crash and works well. This wasn’t written by aliens. The problem is that good programmers only become good with a lot of dedication. They need to study, practise and be disciplined. Just learning to write a 10-line program in BASIC doesn’t make one a programmer. Managers tend to treat programmers as interchangeable cogs, so there is no incentive to do the job right: just hack it and get it done.

          “Stop insisting that we just need better coders, and fix the languages instead.”…

          It really isn’t the language that is the problem. Often (especially these days) programmers get a spec and just don’t get the big picture. The buffer overrun problem that this article is about focuses on the “overrun” and totally misses the fact that the original programmer didn’t even consider that it was possible to get corrupt packets when processing ICMP packets. The correct fix here is not to fix this “one” problem but to ensure that such corrupted packets can’t get through in the first place.

          1. “The buffer overrun problem that this article is about focuses on the “overrun” and totally misses the fact that the original programmer didn’t even consider that it was possible to get corrupt packets when processing ICMP packets. ”

            Not trusting what one doesn’t control. The outside world is hostile. Paranoid programming mode on.

          2. Yeah, but if you’ve got lots of programmers working on a system, where do you put the protection? You can’t expect every function to run every passed variable through a series of checks; it would be hugely wasteful. You just need someone in charge to decide to put the safety checks in at the appropriate places, and for everyone to know what they can rely on.

            A lot of programming problems are really management problems once you’re talking about medium-sized systems. I’m sure for insanely large stuff like phone networks they have their own entire paradigms. The US military originally invented Ada as a response to the risks of programming problems, including actual software bugs, but not just that.

            This is one reason Linux is such a pain in the arse. The people doing the coding on systems don’t communicate nearly enough, and are usually managed completely ad-hoc.

            That said, important stuff like OSes should be left to superhuman programmers. And every one of them should speak asm. Though if you’re doing OS stuff I can’t imagine you wouldn’t know it; there’s a huge amount of common ground.

        3. People are not incapable, they are just not willing to learn how to design and avoid said pitfalls.
          We have perfect languages for writing and speaking, people abuse those also.

      2. This is more market forces than coders. Sure there are some crap coders out there. There are also some very good coders.

        It’s upper management, which knows nothing about code, that chooses who gets the job, and they aren’t willing to pay a little extra for quality code.

        1. To compound that, we get ‘agile’ development cycles or other styles that focus on being first past the post while ignoring that their product is in flames as it passes the post.
          If you don’t have the time to do it right, when will you have the time to do it again?

      3. Because overflows don’t exist in anything coded in assembly. Back in your day, when men were real men and real programs were punched out of cardboard, nothing ever went wrong and software was perfectly secure. Coding one instruction at a time makes it impossible to forget a boundary check, misunderstand a specification, or cut corners to rush something out the door.

        Sure, bud. Sure.

        1. Sure you can fuck up in asm, but a person who knows asm has a good understanding of systems and how it all fits together, how everything works. How everything *really* works!

      4. Agreed, to a point: many of the newest generation have little to no understanding of what’s happening on a low level.
        But asm first would be a very steep learning curve; maybe go back to 80s BASIC, then C and some asm, followed by moving to OO languages.
        Java and JavaScript should never be first languages, as they teach too many bad habits, such as using an entire browser engine for a simple UI.

    2. “I don’t understand why we’re still using C-style pointers and buffers. Haven’t we learned, through decades of vulnerabilities mostly based on this same inane programming practice, that the performance gains aren’t worth it?”…

      Some stuff can be written using a very inefficient language that generates very slow executables and the end user doesn’t care. But other stuff is so fundamental and so heavily used that it needs to run as efficiently as possible. How would you feel if someone said “hey, from tomorrow you won’t be able to use the internet unless you upgrade your PC to one that runs 100 times faster”?

      1. ”How would you feel if someone said “hey, from tomorrow you won’t be able to use the internet unless you upgrade your PC to one that runs 100 times faster”?”

        Or stop using XP. ;-)

    3. “I don’t understand why we’re still using C-style pointers and buffers.”

      Ooh! Ooh! I know why!

      Because that’s how the processor *actually works*.

      Think about what you’re saying: yes, there are languages that check lengths and try to protect against things, but fundamentally *they* then need to convert those language primitives into addresses, buffers, etc. Which means the risk of a buffer overflow will always exist – if you protect against it in the language, the risk then moves into the compiler/runtime of the programming language. You’re not eliminating the attack surface, you’re just moving it.

      Now, you might say “yes, but that now means we only need to harden *1* thing – the compiler/runtime of the language,” which is true. But because the compiler/language would then need to become *everything* the computer does, it’d have to have tons of performance optimizations and differing prototypes. Casts, copies, indexing – all of those things could be *very* expensive performance-wise (especially for small objects, like network packets) without pointers.
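      To put a rough sketch on it: a “safe” indexing primitive still has to be built out of the raw access plus a compare-and-branch on every use, and that check is exactly the per-access cost in question (this is just an illustration, not any particular language’s runtime):

      #include <stddef.h>
      #include <stdlib.h>

      typedef struct {
          unsigned char *data;
          size_t         len;
      } checked_buf;

      unsigned char checked_get(const checked_buf *b, size_t i)
      {
          if (i >= b->len)       /* the hidden bounds check... */
              abort();           /* ...and whatever the runtime does on failure */
          return b->data[i];     /* the raw pointer access is still in there */
      }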

      And once you have lots of “stupid code tricks” for manipulating things, there’s just no way that the language can protect against all the combinations by design. You’re just going to hit the same problem again.

      Your hypothetical “magic language” is presenting an *abstraction* of the way the code should run, but it won’t actually run that way. That disconnect will generate an attack surface, and it will be *very* hard to find because it’s an interaction between two black boxes: the code and the language. That’s the same reason why Spectre and Meltdown existed, and why so many other super-subtle bugs exist (like Rowhammer, etc.). Hiding the way the computer actually works from the programmer doesn’t protect against bugs, it just makes them harder to find.

      Note that I’m not arguing against “safer” languages. You just don’t want to use them everywhere. Kernels, specifically, can be super-dangerous places for languages like that, because they’re a layer that’s present for *every* code that runs on the device. So adding another black box (the language/runtime) to the system is another attack surface for *everything*.

        1. Makes a strong argument for verified and clean libraries that can be shared regardless of license (BSD ;-)). A network stack (as ubiquitous as they are) that has been gone over carefully, and changed infrequently (less “oh shiny” getting in).

          1. To be honest, I think actually the problem with network stacks is the fact that for *most* cases, we really should just be using dedicated hardware for protocol handling entirely. Imagine an ASIC that handles (only) TCP, ICMP, UDP, ARP, DHCP, etc. For the vast majority of consumers that would be completely fine (and then you could also have raw packet forwarding and a software stack for the few people who need custom protocols). If you’ve ever used a WIZnet IC for interfacing with a microcontroller, basically something like that on steroids.

            This isn’t a new thing, obviously, people have been thinking about a TCP offload engine for years for performance reasons, but I think it could have significant security benefits as well: again, it’s just the removal of an attack surface. You might find a bug in the ASIC that allows you to mess it up (but not gain control) with a corrupted packet, but the system remains unaffected (obviously you wouldn’t want a dedicated processor for this, that just moves the attack surface). TCP offload engines died an ignominious death because, well… pretty much everyone implemented it badly.

            The problem, of course, is that at this point the barrier to entry for something like that is so high that it basically has no chance (although who knows, it could possibly win on power savings for mobile devices if done properly).

          2. The magic in an ASIC implementation is that there’s nothing you can do *with* a bug. It wouldn’t be a generic processor. It might not operate quite right, but it can’t cause a security issue. You can’t execute code.

            I’ve got a UDP implementation for an FPGA which presents verified packets to outbound ports. Is it totally bug free? No, probably not: but there’s literally nothing that a bad packet can do to the downstream processors. Worst thing that would happen is the thing locks up, and a few microseconds later, it resets itself.

    4. I’ll start with this: most of our OS kernels are really old. They were written in the days before even ISO 9899:1999. In those times, there was not anything better.
      IIRC, the Apple kernel traces its lineage through the BSD kernel, which was first released in 1977.
      Linux got started in 1991.
      It’s hard to say how much of the Windows 10 kernel is still based on Windows NT, but consider that it was released in 1993.

      The two big new OS kernels are Haiku and Fuchsia. Both look to be C++ after taking a peek at their repos, and the Wikipedia article on Fuchsia makes it sound like it shares a lineage with Haiku (though it is not a fork).

      Besides that, there are still some good reasons to use C, as it provides a reasonable alternative to assembly. The compiler could do more, and in fact it can do more in a number of cases. Part of the issue is that legacy-code thing and the sheer amount of code out there.
      But that would not have helped in the case of the exploit disclosed here, because the length field being checked was provided from outside the program.

      Could the compiler create runtime checks? Sure, and I have seen compilers with a debug mode that generates runtime checks that could catch issues like this. Those checks significantly impact runtime, and that performance really does matter for the kernel. Just look at the impact mitigating Meltdown and Spectre has at the kernel level.
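      As a concrete example of that trade-off (my sketch, nothing to do with XNU itself): modern GCC and Clang can instrument a build with AddressSanitizer via -fsanitize=address, which catches an overflow like the one below at runtime, but with a speed and memory overhead you would never accept in a production kernel:

      #include <string.h>

      int main(void)
      {
          char buf[16];

          /* 32 bytes written into a 16-byte buffer.  An instrumented build
           * aborts here with a stack-buffer-overflow report; an ordinary
           * build silently corrupts whatever sits next to 'buf'. */
          memset(buf, 0x41, 32);

          return buf[0];
      }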

      Something else that can help is using coding guidelines such as MISRA C and SEI CERT in concert with static analysis tools to help catch errors in the code.

      As for why C: there are still valid reasons that usually don’t become apparent until someone has tried to do low-level work in a higher-level language.

    5. We are talking about a kernel driver. IMHO having a bug here is not too different from saying, oops, my high-end compiler that explicitly checks the lengths of strings forgot to check the length of a string destination in some weird scenario. The carelessness which allowed such a bug to go through should not have happened in the first place.

      Second of all, on the kernel layer (especially for networking), it’s likely there are genuine performance hits when you store information you don’t need to store or do boundary checks which you don’t need to do (perhaps because your structure, when designed correctly, does not require them). In fact, doing some of this unnecessary checking may in some instances worsen or allow for DoS attacks, especially when you’re talking about a networking stack. You probably want a language with the flexibility to force the compiler to not always operate with these constructs – and once your language has the flexibility to use pointers, well, it’s about as dangerous as using C if you’re not careful, so is it really worth it?

      I think anyone who’s written something in a high-level language and found the performance lacking (probably not a lot of web applications or scripts) must understand that these “unsafe” constructs are not entirely inane. More often than not the answer is a rewrite in a language with those unsafe, unchecked constructs accessible when appropriate. This certainly applies in a networking stack, where taking extra time could mean increased vulnerability to some DoS attacks, slower performance (when it needs to be compatible with even the fastest server networking interfaces), and reduced battery life in portable devices. These losses can be pretty nontrivial.

      Ultimately, I don’t think this is a simple problem, and it really isn’t people being elitist about their code. A sufficiently good language and compiler do not yet exist where you can communicate exactly what you want to do in a way that allows the compiler to perfectly optimize that code in a meaningful way on most platforms. An approach some people take is to use an inherently unsafe language like C, and use code analysis tools and perhaps extra-language markers to try to figure out when it’s doing genuinely unsafe things.

    6. “Haven’t we learned, through decades of vulnerabilities mostly based on this same inane programming practice, that the performance gains aren’t worth it?”

      Maybe TVTropes should have a “Coding Tropes” section? Buffer Overflows certainly qualifies.

    1. I did watch the video. It shows very clearly, several times, that the write direction is from right to left, and then goes on to say that a progressive write will overwrite the return address, which is on the right side.

      RE: Buffer. If a buffer was FILO then “Hello World” would read out as “dlroW olleH”. It has to be one or the other.

      I am not talking shit. I have an extensive history with ASM, so I know exactly how this works at a hardware level. The confusion here is that the article is about a buffer overflow and the video is about a stack overflow and these are completely different things, as a buffer is a software abstraction and a stack is a hardware implementation.

      1. “the article is about a buffer overflow and the video is about a stack overflow and these are completely different things”
        Not entirely true. A stack overflow is generally also a buffer overflow: it’s a buffer that overflows and smashes the stack. Not every buffer overflow is a stack overflow; it depends on where that buffer is allocated in memory.

        In this case, the packet data is being copied into a fixed-size struct declared as a local variable at the top of the function. Local variables like that generally live on the stack. The struct is a fixed size, while the data being copied into it can be manipulated by an attacker.

        As to the direction of growth, this is a tricky one to understand. The stack grows downward (right to left) as functions are called. However, arrays themselves always grow upward (left to right). This is important for pointer arithmetic, among other reasons. An array can exist anywhere within the function’s stack frame, but within that array, the pointer value increases upward.

        The Wikipedia pages on buffer overflow and stack buffer overflow are quite helpful in getting this one right. In any case, the code in question in this article has a buffer overflow vulnerability, and that buffer is almost certainly on the stack. Overflowing it likely leads to a stack buffer overflow, overwriting the return address, which is what crashes the kernel.
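        A rough sketch of the layout being described (simplified and hypothetical; real compilers add canaries and may reorder locals):

        #include <string.h>

        struct hdr_buf { unsigned char bytes[64]; };   /* hypothetical fixed size */

        /*  Typical stack frame for handle_packet(), higher addresses at the top:
         *
         *      | return address      |  <- overwritten last
         *      | saved frame pointer |
         *      | local (64 bytes)    |  <- memcpy starts here and walks upward
         *
         *  The buffer itself fills from low to high addresses, so a large
         *  enough attacker-controlled length runs past the struct and into
         *  the saved frame pointer and return address. */
        void handle_packet(const unsigned char *pkt, size_t claimed_len)
        {
            struct hdr_buf local;                  /* lives in the stack frame */
            memcpy(&local, pkt, claimed_len);      /* no check against sizeof(local) */
        }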

    2. Ya can’t tell that guy nothing. It is his MO to misunderstand videos on quite a regular basis around here. That is why we keep him around. It facilitates pointless discussion.
      Kernel fault, everybody panic!

      1. Well, that’s interesting! I notice you have not offered any information to support your claim whatsoever. Instead you resort entirely to character assassination rather than any intelligent conversation.

        Even with people like you I often respond in a courteous manner.

        However today, as you have resorted to such tactics, I will make an exception, as I have lost patience with some of the imbeciles that comment here. Thankfully, most are not.

        I was not disputing much of the information offered by others here. I was adding some information that run-of-the-mill, nose-to-the-grindstone coders who code in higher-level languages like C and its variants often overlook.

        I worked in network security for LAN and WAN (before we even had an internet) and then web server security for several decades, so I have seen the best work of real hackers, and I am not talking about your script kiddies that work with VB script and other string-like languages.

        Real hackers, not members here but hackers of the malicious type, know code far better than your Arduino sketch coder or even most who work with C etc.

        And sure, you can have “call stacks” and “process stacks” in an abstraction (like C). If you want, you can write your own stack and call it whatever you like!

        Then there are hardware stacks that exist at the CPU level. You will find that most stack abstractions are based on how these hardware stacks work.

        Would you be surprised to know that the ASM mnemonics for transferring data to and from a hardware stack are PUSH and POP? Why do you think such things also exist in your favorite version of C and a hundred other languages?

        Here is a picture of the register set of an x86 CPU –
        https://i.stack.imgur.com/M2kTF.png

        Do you notice that the ESP register is a STACK POINTER?

        Can you see a “CALL STACK POINTER” or a “BUFFER POINTER” or “STACK BUFFER POINTER”?

        No? Well that’s probably because the above are abstractions and not hardware.

        And just in case you think this is a new thing –

        Take a look at the first CPU used by an Apple computer that was made in 1977
        https://upload.wikimedia.org/wikipedia/commons/1/1b/MC6800_Processor_Diagram.png

        Or even a Z80 from 1976
        https://upload.wikimedia.org/wikipedia/commons/d/db/Z80_arch.svg
        In this picture the SP register is the “Stack Pointer”.

        In fact I have never seen a CISC architecture that didn’t have a STACK POINTER or a way to implement a hardware STACK.

        So to all those who think that I must be wrong because they know everything about CALL STACKs, BUFFERS and BUFFER STACKS, I say: well, perhaps you don’t know EVERYTHING after all.

        1. Look at the line of code causing the buffer overflow that’s mentioned in the article. The target of the m_copydata function is a pointer to an unstructured buffer. How the function compiles for its target architecture is irrelevant to the discussion, since the crash is ultimately caused by writing out of bounds to an unstructured buffer.

          You need to understand that the word “buffer” (unstructured) and “queue” (FIFO) are not analogous.

          Do everyone a favor and get off your high horse, and stop being so pretentious and arrogant. If you truly are a professional like you’ve described, then you should know better than to argue pointlessly about semantics over the internet with random strangers.

        2. @[binexec]

          I see that your brain is in reverse gear and your mouth is in overdrive.

          If you had read the article and looked at the linked video, then you would have noticed that the video is distinctly about stack overflows and was not at all relevant to the vulnerability.

          If you had then read my previous comments, you would understand that I was simply pointing that out and was not in any way arguing with anyone.

          Ironically, you are now arguing with me about something that I wasn’t arguing about, lol.

        3. @rob
          Thank you for all the lovely words and diagrams. However, given the picture you have already painted of your technical competence, I feel it would be a waste of time to try to understand what you have presented and in any way attempt to teach you anything. The YouTube video which this article refers to explains the situation very clearly. The presenter has gone to a lot of trouble to do this. You clearly have an overinflated opinion of your software skills, and I can only hope and pray that I never need to rely upon anything you have worked on.

        4. @[ospr3y]

          Once again you clearly demonstrate the problem. I am pointing out a hardware based issue and you are ranting on about software.

          One thing that has come up in this forum is a question as to why, in these modern times, we have these security issues in modern code.

          The answer to this is quite clear to me now.

          It’s because coders have religious-like beliefs in themselves, thinking they’re gods of coding while having absolutely no idea whatsoever what their code is doing at a hardware level.

          Oh, and once again your comment presents nothing more than “the youtube video is clear” without bothering to explain what aspect you are referring to. Could that be because you really don’t have a clue? What’s your ASM experience? Arduino perhaps?

        5. @Rob,
          “Oh, and once again your comment presents nothing more than “the youtube video is clear” without bothering to explain what aspect you are referring to.”…

          The presenter in the YouTube video has explained the situation very clearly. If you don’t (or perhaps are unwilling to) understand what he has shown, how can I hope to make it any clearer without putting in more time and effort than he has? It’s like arguing with a flat-earther – I am just not prepared to waste my time. Learn if you want to, make noise if you must.

          “Could that be because you really don’t have a clue? What’s your ASM experience? Arduino perhaps?”…

          Whatever makes you happy, this is the sound of me caring …

        6. @[ospr3y]

          I notice that you still declined to clarify your point.

          You offered absolutely no response when asked what your ASM experience is.

          You choose personal character attacks over presenting information as a method of “arguing”.

          Personally, I think you’re full of s**t.

        7. @Rob
          “I notice that you still declined to clarify your point. “…
          The world is round.

          “You offered absolutely no response when asked what your ASM experience is.”…
          That’s like a rock asking you to prove you’re a mountain – why bother.

          “You choose personal character attacks over presenting information as a method of “arguing”.”…
          Why won’t you get it through your thick skull – I don’t consider you worth arguing with!

          “Personally, I think you’re full of s**t.”…
          Oh, that hurts so much. Your opinion means so much to me :-)

        1. Humorously stating facts is not an attack. Do you actually read HaD comments on a regular basis? It is pretty much all about ignoring the article and just stating random experiences from products several generations back. It is the only sense of community on the blog. IO is different. I am actually quite supportive of the up-and-comers with my other handle. It is the 5 angry greybeards that constantly post that keep this from being a learning forum. This place was very different 10-12 years ago.

  4. Seriously? An overflow vulnerability in this day and age? Have the coders been given no training on how to handle external data? Are there no scrutiny checks for data handling?

    In the ’90s age of naivety it would be reasonable, but today *on a network stack* that’s just bad practice and bad processes.

    1. Having been working in an environment adopting MISRA C and SEI CERT coding guidelines (along with static analysis tools), I agree.
      Just allocating a local for taking data in from an external interface is a terrible idea.

    2. Terrible code, it’s true. Then again, on the hardware side I’ve been complaining about the lack of decoupling capacitors for 20 years. Everyone either hears the warning or makes the mistake. New coders will repeat old mistakes. One thing that may have changed is that in the past you spent more time with your project; errors that occurred were cleaned up by those that made them. Nowadays it’s rent a coder, dump a coder, and when the bug finally rears its head, some new guy will be sent in to clean it up. The best-learned lessons are when we clean up after ourselves.

    1. I once worked with a guy who was clearly having problems with his project. There was cursing, jumping up out of his chair and storming off, there was slamming of notebooks on desks. After a few hours of this I offered to help. He had been trying to debug some code he had written using a system monitor. He explained that every time he uploaded his executable it would behave strangely. I asked him to show me exactly what he was doing and we went through the whole process: built the code, uploaded it, entered some test data, ran it, inspected the RAM. “There,” he said, “see, it should be this…” “OK,” I said, “let’s check your test data,” and sure enough the test data had mysteriously changed. Now the funny thing was that it was out by what looked to be the difference between ASCII upper and lower case. So I suggested there might be a problem with the system monitor and it might be touchy about which case hex was entered in. It took this guy three days to finally declare to the world that he had fixed the problem. And guess what, it was exactly as I had predicted…

      I must admit I’m kind of glad when I see a backseat dev posting on a forum – at least it means he’s taking a break from hacking code (and I mean that in the most derogatory sense possible).

  5. I think this shows why it’s probably not a good idea to use a common code base for everything.
    Maybe if the Apple Watch had a super-efficient OS like QNX or TRON it might actually have a useful battery life.

    1. While I’m a fan of microkernels in general and QNX in particular, that’s not exactly true: QNX has higher overheads than many other systems by design. It uses synchronous message passing with data copying, so normal communication has context-switch and copying overheads (sender -> kernel copy -> receiver), while a monolithic system can often avoid copying data and generally uses a cheaper user/kernel mode switch.

  6. What nobody mentions is how a hijacked account that takes over a device is easily able to set up a pairing and instigate a network which then becomes shared, thus making many of the items you all mention very easy to execute.

    I’m no dev or programmer, but this is exactly what I am subject to, and ironically I came across the reporter just the other day, who also seems to be in collaboration with one possible party responsible.
