Things Are Getting Rusty In Kernel Land

There is gathering momentum around the idea of adding Rust to the Linux kernel. Why exactly is that a big deal, and what does this mean for the rest of us? The Linux kernel has been just C and assembly for its entire lifetime. A big project like the kernel has a great deal of shared tooling around making its languages work, so adding another one is quite an undertaking. There’s also the project culture that has developed around the language choice. So why exactly are the grey-beards of kernel development even entertaining the idea of adding Rust? To answer in a single line: because C was designed in 1971 to run on the minicomputers at Bell Labs. If you want to shoot yourself in the foot, C will hand you the loaded firearm.

On the other hand, if you want to write a kernel, C is a great language for doing low-level coding. Direct memory access? Yep. Inline assembly? Sure. Runs directly on the metal, with no garbage collection or virtual machines in the way? Absolutely. But all the things that make C great for kernel programming also make C dangerous for kernel programming.

Now I hear your collective keyboards clacking in consternation: “It’s possible to write safe C code!” Yes, yes it is possible. It’s just very easy to mess up, and when you mess up in a kernel, you have security vulnerabilities. There are also some things that are objectively terrible about C, like undefined behavior. C compilers do their best to do the right thing with cursed code like i++ + i++; or a[i] = i++;. But that’s almost certainly not going to do what you want it to, and even worse, it may sometimes do the right thing.
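For contrast, here is the same computation sketched in Rust (standard Rust, nothing kernel-specific). Rust has no ++ operator, so each increment must be its own statement and there is only one possible reading:

```rust
fn main() {
    let mut i = 1;
    // Each increment is a separate statement -- no hidden ordering questions.
    let a = i; // what C's first i++ would yield
    i += 1;
    let b = i; // what the second i++ would yield
    i += 1;
    println!("{} {}", a + b, i); // prints "3 3"
}
```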

Rust seems to be gaining popularity. There are some ambitious projects out there, like rewriting coreutils in Rust. Many other standard applications are getting a Rust rewrite. It was fairly inevitable that Rust developers would start to ask, could we invade the kernel next? This was pitched for a Linux Plumbers Conference, and the mailing list response was cautiously optimistic. If Rust could be added without breaking things, and without losing the very things that make Rust useful, then yes, it would be interesting.

Why Rust

So what makes Rust so interesting? There are two main answers here. First, it’s a modern language with a strong memory-safety guarantee. (There’s a caveat here, and we’ll cover unsafe code later.) Something around two thirds of all security vulnerabilities are a result of memory handling bugs, and Rust pretty much eliminates those. As a second bonus, Rust has some of the niceties we’ve come to appreciate in modern languages, like an easy-to-use String type built into the standard library, and some handy functions for common scenarios like string comparison.
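As a quick sketch of those niceties (plain userspace Rust, not kernel code):

```rust
fn main() {
    // String is a growable, heap-allocated, UTF-8 string in the standard library
    let mut greeting = String::from("Hello");
    greeting.push_str(", kernel");
    // Comparison is a plain value comparison -- no strcmp(), no NUL
    // terminators, no separate length bookkeeping to get wrong
    let matches = greeting == "Hello, kernel";
    println!("{} {}", greeting.len(), matches); // prints "13 true"
}
```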

The other answer is that Rust is an easy fit with C code and kernel programming. Rust does its magic in the compiler. The code you write is what actually runs, without an interpreter or garbage collection trying to be helpful. Rust hasn’t overdosed on Object Oriented patterns, but meshes nicely with the C-style structs already used in the kernel. Even the stack model is very similar to C.

There’s one problem with Rust’s memory-safety guarantee — it’s impossible to write a kernel that is formally memory-safe. A kernel has to write to unallocated memory, do weird pointer math, and other seemingly bizarre things to actually make our computers work. This doesn’t mesh well with a language that tries to guarantee that memory manipulations are safe. How do you write kernel code in Rust, then? Rust has the unsafe keyword, allowing use of direct memory access and other such techniques that don’t work with Rust’s memory guarantees. Keeping the potentially problematic code together makes auditing easier.
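A minimal sketch of how that looks in practice (ordinary userspace Rust, not kernel code): the dangerous operation is fenced off in an unsafe block, which is exactly the spot an auditor would zero in on.

```rust
fn main() {
    let x: u32 = 42;
    // Creating a raw pointer is allowed in safe code...
    let p = &x as *const u32;
    // ...but dereferencing one is not: the compiler can't prove it's valid,
    // so the programmer must vouch for it inside an unsafe block.
    let v = unsafe { *p };
    println!("{}", v); // prints 42
}
```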

There’s at least one other language that may come to mind as an incremental update to C that tries to do some of these things: C++. Surely that would have been an even better fit, right? Kernel devs have some strong feelings about that idea. To put it gently, none of the improvements in C++ are useful in the context of the kernel, and some of the other changes just get in the way.

What’s the Plan?

So are we about to see the kernel completely rewritten in Rust? Not likely. The kernel development process is painstakingly conservative, so the initial introduction of Rust is going to be done in the least obtrusive way possible — driver code. As kernel second-in-command [Greg Kroah-Hartman] put it, “drivers are probably the first place for an attempt like this as they are the ‘end leafs’ of the tree of dependencies in the kernel source. They depend on core kernel functionality, but nothing depends on them.”

In practice, this would mean that tooling, documentation, and example code would be merged into the kernel right away. At some point in the future, one of the interested parties, like Google, would start writing new drivers in Rust. Google seems to be very interested in converting parts of Android to Rust, likely in an attempt to thwart the continued pwnage of their OS from the likes of the NSO group. There’s a useful example driver in Rust on the Google Security Blog. Another interesting connection is that [Miguel Ojeda], lead developer of the Rust for Linux effort, is now employed full time by Prossimo for that purpose. Prossimo is an arm of the Internet Security Research Group, which is also famous for leading Let’s Encrypt. Funding for [Ojeda]’s work was provided by Google.

So where are we now? Version 6 of the Rust patches were just sent to the kernel mailing list. There have been a couple of very minor change requests, but most notably developers have begun calling for the patches to be pulled into the 5.19 kernel once its merge window opens. 5.18-rc6 was just released, so in two to three weeks we should see that kernel mint a final release, and the 5.19 merge window open. That’s right, there’s a very good chance we’ll see Rust added to the Linux kernel in about three weeks!

Once it finally lands, expect to see a simple driver that actually makes use of the support in the following version. So if 5.19 sees Rust support, a driver written in Rust will probably happen in 5.20. As hinted at above, Google is one of the very interested parties in the Rust for Linux effort. It’s likely that some Android-related code will be ported to Rust, as part of Google’s continual effort to improve the security of their mobile ecosystem.

What Could Possibly Go Wrong?

Rust in Linux is almost certainly going to happen, but is that guaranteed to be a good thing? There are a few possible downsides to consider. First off, the interface between C and Rust is a likely place for unanticipated bugs to crop up. It’s new code, some of it generated automatically, doing something novel — there will certainly be surprises. That’s not really any more of a problem than any other new code. Bugs get fixed, problems get ironed out.

What may be more of an issue is the added complexity of debugging problems when there is another language to consider. Until now, the kernel has enjoyed the advantage that it’s all in C, and all the programmers working on it are familiar with that language. Add a second language, and now there are C programmers, Rust programmers, and the few who are actually proficient in both. There’s yet another compiler that could possibly introduce errors, and another toolchain to manage.

Lastly, there’s the danger that it just doesn’t catch on. It may be that the kernel community collectively shrugs, goes on writing code in C, and the Rust support bit-rots. This is the least problematic outcome, because the backing of big players like Google makes it unlikely, and even if Rust dies on the vine, it’s easy to remove the code.

Are any of the above issues likely to be deal breakers? Probably not. The addition of Rust will change the way kernel development happens a bit, and kernel maintainers will have to brush up on their Rust knowledge. The potential benefits seem to outweigh the downsides. Torvalds seems to have accepted the idea of Rust in the kernel, once the last few wrinkles are ironed out. We’re looking forward to seeing Rust in Linux mature, and we’ll bring you the rest of the story once that happens.

159 thoughts on “Things Are Getting Rusty In Kernel Land”

      1. I second this opinion to be fair.

        I have seen a lot of programmers make complex code for the sake of complexity, usually proclaiming that it is more flexible, but in a lot of situations the added flexibility isn’t useful, and most often comes packed with bugs and security issues.

      2. In my experience there are three stages in the evolution of a programmer:

        Stage 1) They are just happy that the program works
        Stage 2) Not only does it have to work, but it must be “clever”
        Stage 3) They realize that the best way to write a program is to make sure that it is clear and simple (the KISS principle).

        Unfortunately, most of the programmers I know never made it past stage 2.
        If you keep things simple, there is a far better chance you will be able to figure out what a program does years after the fact (or in my case, months after…)

        1. 1…3 become increasingly subjective. What is simple to one is overly clever to another while practically everyone using the end result will like that it works.

          1. Yeah… “i++ + i++” is a *very* “simple” way to express, well, whatever it expresses. 😛 Or rather, a very concise way. You’d probably need like six lines of less-cursed code to do that long-form. Deleting six lines of code will definitely tickle the neurons of a lot of programmers who are trying to KISS-up their code.

            For very advanced language lawyers it might even be more readable, just because there’s only one statement to read instead of a bunch to synthesize together… Just pity the poor mortal, workaday programmer who comes along after them. This is why maintaining a great code base takes social skills as well as technical ones; you’ve got to be able to model the effects of what you’re writing on other people with different experience levels.

          2. I propose the RW standard….

            …. anything I can’t mumble my way through screwing up my face once in a while for a second or two until it clicks, is spaghetti code.

          3. @Nentuaboy

            i++ + i++ does not replace 6 lines of code. Instead (if I understand correctly) it is a complex version of either

            2*i + 1;
            i += 2;

            or

            2*i;
            i += 2;

            which is much more readable and just one line longer than i++ + i++.

    1. I do dislike code that may be clever but is obtuse when trying to figure out what it does later. Clarity over clever anytime, unless there is a ‘very very’ good justification for the cleverness. When I see someone say “look at me! I did that all in just one line of code”, I roll my eyes.

      As for Rust, well they are moving the error checking/monitoring to the compiler which means we have to ‘trust’ the new mountain of code in the compiler now rather than just find the problem(s) in the application. I am still of mixed mind on that. Professional Computer Science trained programmers shouldn’t have to be baby sat…. In my opinion.

      1. “As for Rust, well they are moving the error checking/monitoring to the compiler ”

        That already exists in C. That’s what all those warnings-to-errors options and turning on “strict” options in C is for. You’re asking the compiler to help you ensure that your program will work. C is just spectacularly bad at it.

        “Professional Computer Science trained programmers shouldn’t have to be baby sat…. In my opinion.”

        If a program’s big enough for one person to handle it, sure. Two, probably still. But when you have hundreds of developers, you’re not babysitting the *programmers* you’re babysitting the *project*. The intersection of hundreds of good programmers does not necessarily result in good code.

        1. Even in a one-person program it’s pretty likely to get similar errors as soon as it’s getting complex – you will end up having to focus on the complex high-level bit of a function branch, and something little in there is missed.

          So I’d argue even the smartest, greatest single programmer does need babysitting – we all need that compiler error to say ‘Are you sure, buddy? This doesn’t look right to me.’ And we need it even more now, when CPUs are so fast that you no longer have to optimise for every single cycle (where throwing in some sloppy memory handling just to make it run fast enough to be useful was actually required), but are programming with library after library of code snippets you can’t be certain of, interfacing to hardware APIs you have even less faith in, as you generally can’t even see the code behind them even if you want to…

      2. I think I can weigh in on this.
        I recently learned Rust and started using it in embedded projects. I found that I don’t feel like I’m being babysat at all, in the sense that the language is “doing the hard stuff for me” or whatever. When writing in Rust, you still need to think about memory safety and you still need to be smart about structuring your code. When going back to C, I realized that Rust had made me a better programmer.

        When you mention “mountains of code in the compiler”, it sounds like you think the Rust compiler is some fancy AI that tries to detect memory safety issues. It’s actually quite the opposite. *C* compilers will try to analyze your code and guess at whether you’ve messed up. In Rust, memory safety is guaranteed through the design of the *language* and the compiler is relatively simple.
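A minimal sketch of what “guaranteed through the design of the language” means here (standard Rust): ownership rules, not heuristic analysis, are what rule out use-after-free.

```rust
// Each value has exactly one owner; passing it by value moves ownership.
fn consume(s: String) -> usize {
    s.len()
} // s is dropped -- freed -- here, exactly once

fn main() {
    let s = String::from("hello");
    let n = consume(s); // ownership moves into consume()
    // println!("{}", s); // would not compile: use of moved value `s`
    println!("{}", n); // prints 5
}
```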

        1. “It’s actually quite the opposite. *C* compilers will try to analyze your code and guess at whether you’ve messed up. In Rust, memory safety is guaranteed through the design of the *language* and the compiler is relatively simple.”

          This really isn’t true. The Rust compiler is much more complex than a C compiler and part of that complexity is due to the borrow checker.

      3. Humans make mistakes, period. Even the smartest and most experienced programmer will make mistakes every once in a while. If there’s a tool to automatically prevent at least some of the mistakes before it’s put to the real-world test, there’s no reason to not implement it.

        1. That’s a terrible example because, while it’s certainly clever, it wasn’t cleverness for its own sake, but rather to perform a common expensive operation quickly. It made game play smoother.

        2. Fast Inverse Square Root is state of the art; the developer didn’t forget his code was running on a boolean algebra machine, and did not cry because he had the required math level.
          If you blame this code you’re not a full-fledged developer. Kernel programming is another level.
          Maybe Rust is the way you should go, but really C++ is far superior concerning low-level programming and error checking.
          Adding Concepts and templates to the mix, nothing can beat that.

          I don’t see the point concerning unsafe programming; humans can make mistakes, the language is not to be blamed. The same mistakes can be made in Rust, I think.

      4. “As for Rust, well they are moving the error checking/monitoring to the compiler which means we have to ‘trust’ the new mountain of code in the compiler now rather than just find the problem(s) in the application.”

        This is a fundamental misunderstanding, or perhaps several fundamental misunderstandings. The compiler is doing checking that can only be done in C at runtime and in practice only gets done after the horse leaves the barn; nothing is being “moved”. And this “new mountain of code” is scrupulously tested and is applied across many codebases, not just the kernel.

        “Professional Computer Science trained programmers shouldn’t have to be baby sat…”

        This is a fundamental error in reasoning that I see in many areas outside of programming. “should” is a value judgment that can be translated as “I would prefer this to be so”; it has no bearing on what is actually the case, which is that programming is very complex and error prone even for the best programmers and most programmers are far from the best.

        And “baby sat” is loaded language that is not an honest characterization of what the Rust compiler does. Decades ago I would see a lot of this foolish “manly man programmers don’t need babysitting” nonsense; fortunately it’s becoming less common, but a lot of old C programmers still cling to it. (I myself am an old C programmer, starting in the mid-70’s and having been on the C language standards committee, but I’ve learned new tricks.)

        And Computer Science is an academic field that doesn’t train people to program, but one might hope that it would teach them about the advantages of strong types, contracts and invariant checking, escape analysis, and the sorts of automated tests that modern languages like Rust provide.
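As a small illustration of the strong-types point, a sketch of Rust’s newtype pattern (the Celsius/Fahrenheit names are invented for this example):

```rust
// Wrapping a bare f64 in a distinct type lets the compiler enforce
// which kind of quantity a function accepts.
struct Celsius(f64);
struct Fahrenheit(f64);

fn to_fahrenheit(c: Celsius) -> Fahrenheit {
    Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
}

fn main() {
    let boiling = to_fahrenheit(Celsius(100.0));
    // to_fahrenheit(Fahrenheit(212.0)) would be rejected at compile time
    println!("{}", boiling.0); // prints 212
}
```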

        1. Hear, hear! The profession of programming needs to let these “only bad programmers do this…” memes die if we’re going to leave the “dark arts” and move into engineering practice.

          We need to understand that our tools have a huge impact on the quality of the output product. We need to put maximum effort into upgrading our practice to “safest + best fit + well tested” instead of “it compiled and it doesn’t crash”.

          1. The automotive industry has the MISRA C standards that define best practices for coding in C for software embedded in cars. It’s a pretty good starting point, for how not to use C when you care about reliability.

      5. If you haven’t programmed in Rust, it probably smells of Valgrind’s correctness, and well… Valgrind is a great tool but still on par with boiled spinach with regard to joy. Rust starts by making one angry – what do you mean the way I was taught to program five years ago at university was wrong and insecure? But after a time Rust’s error messages start to seem mostly like very good advice, with the occasional failure for lack of mind reading. The feel is more like pair programming with a junior associate who’s actually increasing your output – not so much like babysitting or being babysat. And the trust is there because the code runs well and is largely free of broad classes of debug headaches.
        I’m no fan of “must rewrite everything in rust because we could find and fix issues we don’t even know we have.” A Rust rewrite is no better than big money spent on lotto tickets. But as far as “only C ever for Linux” – that attitude has gone from unfashionable to unfortunate and could even hold Linux back.

        1. And it still generates code for it. In other words, to *prevent* undefined behavior from actually being in a program, you would need to convert those warnings to errors.

          Which, for those examples, is fine. No one would want those things. The problem is that there are *other* undefined behavior examples where if you do convert them to errors, it prevents you from doing totally legitimate things. In some cases *necessary* things.

          Rust’s stance on undefined behavior is a lot clearer.
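As a sketch of that clearer stance, consider integer overflow (these are standard methods on Rust’s integer types): plain arithmetic that overflows panics in debug builds and wraps in release builds — either way defined — and when the behavior matters, you spell it out.

```rust
fn main() {
    let x = i32::MAX;
    assert_eq!(x.wrapping_add(1), i32::MIN); // explicit two's-complement wrap
    assert_eq!(x.checked_add(1), None);      // overflow reported, not hidden
    println!("{}", x.saturating_add(1));     // clamps at the max: prints 2147483647
}
```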

    2. Those are obviously terrible examples of undefined behavior, but there are less obvious ones that see more use. The C standard is pretty terrible about this stuff: as Linus has said, having undefined behavior in a language just isn’t a good idea, at all. It just leaves things open for compilers to override the standard in the first place, in which case there’s no point to the standard.

      The classic example is dereferencing an incompatibly-typed pointer.

      1. They don’t have to be obviously wrong. e.g. most people assume that numbers are 2s-complement. Which they always are, these days, but they don’t have to be, and compilers can take advantage of this for optimisations — for example, a compiler can perfectly legally optimise away a signed shift operation of 2^31 << 1 because it knows the result is undefined, and therefore any random value will do. Even though a human would expect the result to be 0.

        Another surprising piece of UB is that pointers can only be magnitude-compared when they're pointing at the same object. e.g. `{ int a; int b; bool c = &a < &b; }` invokes undefined behaviour.

        It's also worth pointing out that even though C gets all the blame when talking about undefined behaviour, C++ has _all the same behaviours_…
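For comparison, a sketch of how Rust pins down that shift example (standard Rust): the result is defined by the language rather than left to the optimiser.

```rust
fn main() {
    // Shifting i32::MIN left by one is fully defined: bits shifted out
    // of the top are simply discarded.
    println!("{}", i32::MIN << 1); // prints 0
    // The one shift error Rust does call out is a shift amount >= the bit
    // width; checked_shl reports it instead of producing a random value.
    println!("{:?}", 1i32.checked_shl(32)); // prints None
}
```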

          1. I disagree. If a language has no UB, the compiler has more knowledge and can aggressively optimize your code more. Unlike languages with UB, where it has to be conservative with its optimizations.

          2. Nope. Jaded is right – an “undefined” state is one the compiled program (and therefore the compiler) doesn’t have to test for. It’s actually the programmer’s job to ensure they don’t allow cases where the behavior is undefined. Sure, it is POSSIBLE for a compiler to eliminate redundant bounds checks, but the programmer should already know if the range of a variable is guaranteed.

          3. Well, he’s right in that UB can sometimes allow more optimisations. But history has shown us that it’s not a worth it because programmers can’t write programs that are reliably free of UB (even with things like ubsan). And I don’t think the kinds of optimisations that UB allows are particularly important on modern processors.

      2. “Nope. Jaded is right”

        He isn’t right about the things that people disagreed with him about.

        “an “undefined” state is one the compiled program (and therefore the compiler) doesn’t have to test for.”

        No one said otherwise.

        “It’s actually the programmer’s job to ensure they don’t allow cases where the behavior is undefined. Sure, it is POSSIBLE for a compiler to eliminate redundant bounds checks, but the programmer should already know if the range of a variable is guaranteed.”

        This deeply unintelligent notion is why we have so much unreliable software.

        1. “Deeply unintelligent notion”. Well, here’s a deeper, even less intelligent notion: C was meant to be as low level as possible, just high enough to allow it to be portable across different instruction set architectures. So at the next lower level, assembly language, would you expect the assembler to make it impossible for the programmer to fail to know whether or not his data is within the range that an algorithm will correctly process? Of course not. So at least at SOME level, I hope you will admit that the programmer is responsible for what he is telling the computer to do. But suddenly, at the next level higher, it seems like you expect the compiler to be the nanny. THAT is a deeply unintelligent notion, because at every level of abstraction we trade some loss in efficiency for a greater degree of ease and safety. If you want to operate at the level where you say, out loud, “computer, solve my problem”, and have it do so without any side effects, then you want to not be a programmer. And that’s your choice, but this does not belong in kernel code.

          1. ” it seems like you expect the compiler to be the nanny. THAT is a deeply unintelligent notion,”

            Yes, your comment is again exactly that.

          2. Fortunately, people who are not profoundly stupid are busy designing and using programming languages that have the same power and access to the machine as C but with automated safety mechanisms that only dishonest imbeciles call “babysitting”.

            Over and out.

          3. “at every level of abstraction we trade some loss in efficiency for a greater degree of ease and safety. ”

            We aren’t talking about levels of abstraction that lose efficiency–this is profoundly ignorant.

            “If you want to operate at the level where you say, out loud, “computer, solve my problem”, and have it do so without any side effects, then you want to not be a programmer. And that’s your choice, but this does not belong in kernel code.”

            And this hyperbole is profoundly ignorant, stupid, and dishonest. As with your comments about parentheses–which DO NOT CREATE SEQUENCE POINTS–you demonstrate over and over that you have no idea what you’re talking about.

    3. In almost every case, I will put the “i++” stuff in parentheses or even on a separate line. If I have to look up the order of precedence, that means I won’t know what it is when I look at it again later.

      1. Putting it in parentheses changes nothing; that’s still UB. This isn’t about precedence but rather about the order of performance of operations with side effects.

          1. Fine, don’t use any C compiler. The less programming done by someone who has no idea what they are talking about, the better off we all are.

        1. To be clear, SOME undefined behavior is in the way of undefined order of precedence. This is an area where the programmer must take preventive action – by ensuring that they do not make use of an order of precedence that is not well specified. In these cases, parentheses DO change things.

    4. ” a[i] = i++;” is a *simplified example* of the sort of code that people can write in practice when they make assumptions about the order of operations that don’t hold in C but do hold in some other languages like Java. Talking about people looking clever completely misses the point and is actually a bit of virtue signaling that is closer to what it’s criticizing by being an exercise in blaming some fictional person for having an unpleasant personality trait when the point is to prevent the possibility of code with undefined behavior that can cause software failures.

      1. The code does two things: incrementing a variable and storing a value in memory. These are two actions. In a sequential programming language, it’s not crazy to write this in two lines.

        a[i] = i + 1;
        i++;
        

        It’s also clearer about what’s in memory where. And it no longer matters whether you use i++ or ++i.

        Have a look at the “horrible UB” examples in C, and see how many of them are cases of the coder trying to do two things at once for some imaginary gain, at the expense of readability and exposure to language lawyering.

        1. I see the problem. The expression “i++” is two separate operations that happen at different times, but the syntax makes it appear atomic. The compiler would have to do those three steps anyway – copy i to an index register, increment i, THEN store it. Still, it hurts to write it down that way. To understand what’s going on, it might make sense to write it as:
          j = i;
          a[j] = i++;
          which at least to me, hurts a little less than incrementing i in two places. Whether one solution works better than the other is architecture-dependent.

    5. Those are just very obvious examples of problematic code. Imagine you build the sum of two function results, but the functions have side effects on each other. In C++, that code would actually be well defined, in C it is not.
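A sketch of that difference in Rust (Cell is used here only so two closures can share the counter): the language specifies left-to-right evaluation of operands, so even side-effecting calls in one expression are well defined.

```rust
use std::cell::Cell;

fn main() {
    let i = Cell::new(0);
    let f = || { i.set(i.get() + 1); i.get() };
    let g = || { i.set(i.get() * 2); i.get() };
    // Operands are evaluated left to right: f() first (i = 1), then g() (i = 2).
    let sum = f() + g();
    println!("{} {}", sum, i.get()); // prints "3 2"
}
```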

  1. i have mixed feelings on rust but my big concern is tooling. 5 years ago i tried to install rust on Debian arm-linux-gnueabi, and i met an astonishing array of problems. the parts of the rust community that i was exposed to were already quite arrogant about the fact that rust was already a fully-supported language and the only reason not to use it is a head-in-the-sand kind of intransigency. the community has not changed their tune one bit in the intervening 5 years, so my gut reflex is that the toolchain is still not remotely ready for mass adoption. but probably that’s wrong and it has matured.

    the only thing i met that i didn’t think was a reconcilable difficulty was the integration with cargo, roughly rust’s answer to gradle. i do not like dependency systems so tightly integrated with compiling. i have so many objections about the kind of code that tends to result, but the problem i had in that moment was just that it was very brittle and there was no way to bootstrap. and i wasn’t the first that had come along and tried, i found other people had already made the same braindead hacks that i was contemplating.

    for context, i like java on android alright but the tooling problem remains unresolved…every time google has forced me to switch, they have already got an experimental toolchain simmering which isn’t usable yet but has already obsoleted the mainstream one i’m on. it is very discouraging. my impression is that people who would use cargo in the first place would replace it with something newer and more bloated and more brittle.

    my impression has been that the linux kernel developer culture would not tolerate such a toolchain disaster, but i am nonetheless afraid that these problems will go unresolved and yet the culture may become so inward facing as to tolerate it. mere paranoia on my part?

    1. So the premise here is that while C bugs are found and fixed, the rust compiler receives no developer attention and is in the same state it was in five years ago. Is that what you are saying?

      If you want to see a ‘toolchain disaster’ look no further than the Linux kernel, the whole configuration process is completely brain dead, apparently they have infinite tolerance.

      1. So completely brain dead it’s the core of almost every embedded device, phone, server and supercomputer, despite the stupendous budgets some of those projects have that would let them build one from the ground up, or pay however much M$ wants to unlock having more threads than the standard licenses allow…

        1. Relax, he wasn’t saying bad things about your girlfriend, nor your favorite OS. He was talking about the tool chain used to build your favorite OS. Do you have a lot of experience building Linux from source?

          1. Done many a recompile, and had a look at building from source but not actually got around to trying it yet – so yes, I know it’s tricky, requires a fair amount of work, and could undoubtedly be improved somewhat, but it’s also very functional, and I really don’t think you can support all the many levels of hardware and all the varied software needs the way Linux does and be much better or simpler.

      2. no that’s not what i said. i said i’m afraid that the developer attention it is given will be contrary to my desires as an infrequent kernel developer. it is a reasonable fear but not exactly a prognostication.

    2. “5 years ago i tried to install rust on Debian arm-linux-gnueabi, and i met an astonishing array of problems.”

      Rust has come quite a long way in 5 years.

      I can’t remember if arm-unknown-linux-gnueabi (as LLVM calls it) was supported for hosting or just targeting back then, but it’s listed as “Tier 2 with Host Tools” now, which basically means “We test it as well as we can, but the closest hardware our CI build farm providers will give us to run native testing on is aarch64-unknown-linux-gnu, which is Tier 1”.

      On the targeting side of things, there is also ongoing work to build a rustc_codegen_gcc backend for rustc so it’s not limited to what LLVM supports and can also target anything GCC supports.

      “the parts of the rust community that i was exposed to were already quite arrogant about the fact that rust was already a fully-supported language and the only reason not to use it is a head-in-the-sand kind of intransigency.”

      Do you remember where you encountered them?

      I’ve been hanging out in /r/rust/ since before v1.0 and dropping into the rust-lang forums periodically, and I didn’t get that impression, but, on rare occasions, I’ve seen people popping into /r/rust/ with claims like yours and, every time I’ve honestly asked what venues are missing my attention, I’ve failed to get a response.

      (That said, I do definitely remember that around five years ago seemed to be the height of those irritating “Rewrite It In Rust!” fanboys that we kept having to run damage-control on.)

      the only thing i met that i didn’t think was a reconcilable difficulty was the integration with cargo, roughly rust’s answer to gradle. i do not like dependency systems so tightly integrated with compiling.

      You can use rustc directly if you want, and, last time I looked into the patches in detail, that seemed to be the direction the kernel devs intended to go… people just generally don’t find it desirable to have to go back to manually building up their pkg-config calls and GCC command-lines, so you typically only see rustc being invoked directly by the Rust support for meta-build tools like Bazel.

      For example, according to Integrating Rust Into the Android Open Source Project, Google is calling rustc directly from their Soong build system.

      and there was no way to bootstrap

      Rust is hardly the only self-hosting compiler out there, and there now exists a special bootstrapping compiler named mrustc which is written in C++ and implements just enough of the language to bootstrap rustc and Cargo, given source code that already passes the checks of the self-hosting compiler. (At the moment, the newest version of Rust it supports bootstrapping is 1.54.0.)

      There’s also the rust-gcc project working to write a Rust compiler fully in C++, with the intent to upstream it into GCC, which will be able to bootstrap rustc once it gets sufficiently complete.

      1. with all due respect, when i did my work with rust years ago, people had already told me that arm was a thoroughly-supported host for building and running. it wasn’t true then. now you’re telling me the exact same thing. my faith is shaken. i’m not saying it’s not true, but i am once burned twice skeptical.

        as for venue…i saw buzz here and probably on slashdot indicating it was mature, much ballyhoo about 1.0. you can look in this post today, and see many comments saying “that hasn’t been a problem since 1.0.” that mistaken boast continues. when i failed to install it using debian apt, and then failed to install it using the directions on rust-lang.org (rustup did not work on arm), i went to https://github.com/rust-lang/cargo/issues/2110 and saw that the problems i faced trying to bootstrap cargo were a wall much-encountered, but the only ‘fix’ was an obsolete hack of a script, but nonetheless the bug had been marked fixed — this is an extraordinary arrogance, to say that an unmaintained script is sufficient for this essential function of toolchain support. even though a significant number of distributions (such as debian arm) still were obviously having trouble with it. i understand why it happens, but it is the opposite of the boasts you see here that rust has such a comprehensive community support that it can be considered stable and broadly available.

        then i went to freenode##rust where i was told to join mozilla##rust, which i only watched briefly — i saw that as i entered, someone was in the process of berating a new user for trying to use forward-incompatible code that had been written for rust 1.2.0 and used std::collections::VecMap even though it was marked unstable at 1.2.0 and had since been removed. i went back to freenode##rust and told them of this ridiculous experience of breaking std at minor releases and was told point blank that std hasn’t broken forward-compatibility since 1.0.0. ok, but people are using the ‘unstable’ features in the wild.

        freenode##rust members then proceeded to tell me that rustup works on arm, and then accused me of using poor taste for trying to develop on a non-x86 platform.

        anyways, i believe the problems i experienced with rustup were ultimately trivial problems in the grand scheme of things, and were rectified some time after i reported them. but, man! i was told it was all working effortlessly and then in the end i wound up building rustc and cargo from source because the binary distribution had simply gone missing and i was the first to notice. it was a very painful and involved experience that ultimately was unrewarding because the project i wanted to work on had already fallen to forward-incompatibility problems. fwiw i did contribute one upstream fix to rust that was incorporated, just an x86-centric typo in one of the shell scripts, probably in rustup, i don’t recall.

        i think it’s totally natural for languages and toolchains to have growing pains. it doesn’t mean anything except that i hope the kernel development community rigorously tests these claims before forcing me to install a rust toolchain again.

        anyways, thank you for your reply. it is good to hear that the kernel development may not depend on cargo! it was *much* easier to build rustc than it was to bootstrap cargo!

      1. Right? The assumptions are wild. This person has clearly made up their mind using outdated information and *thinks that’s okay.*

        Classic insecure programmer bullshit. These attitudes are what prevent software engineering from being a legitimate discipline. Grow up and let it go, people.

  2. One thing to keep in mind when writing Rust-C interfaces is that the memory guarantees of Rust only apply if your interface holds up to its promises. Especially in kernel code, where the C interface faces unprivileged userland, a mistake in interface code could allow e.g. passing an invalid pointer as a “known safe” reference into Rust code.

    1. I have used Rust in various projects in embedded systems and I appreciate its performance. But Rust guarantees memory safety, something that C hands over to the developer. So I say expect more bugs when Rust is introduced into the Linux kernel; we can’t just mix water and diesel, shake them inside one container, and at the end expect them to mix. No.
      If Rust is so great, like you and I claim, then let’s get our feet wet: we can write our own kernel purely in Rust instead of risking ruining our very own Linux kernel. If this is a way to bring C down, we need to stop beating around the bush.

      1. Er, it’s not a way to bring C down. And kernels are being written purely in Rust but that’s not really relevant … it would be somewhere between decades and never before they are as functional as the Linux kernel.

          1. Well, one idiot isn’t “the C community” … that’s doing the same thing they are: reaching a sweeping conclusion about a programming language based on one thing one person said.

      1. Obviously they didn’t. If anyone shows terrible judgment and bad personality traits that indicate they should be kept away from projects one values … it’s not Beingessner or the language he was just one of many people involved with. (One really can’t overstate the foolish intellectual dishonesty of that train of reasoning for rejecting Rust.)

    1. Okay, so it’s all right for you to use derisive and dismissive comments about older people, because you’ve had older people make derisive and dismissive comments about people your age. Can you show examples in the current discussion? Grow up.

    1. That’s why Cargo requires you to specify the exact version of every library you want to pull in. There’s no “npm update” or whatever – Rust programs are reproducible anywhere on earth at any time.

      1. That’s not true. Cargo uses the same version model as NPM. There is `cargo update`.

        Cargo even copied NPM’s mistake of needing a special flag to obey the lockfile. Like NPM it is ignored by default. For NPM you have to use `npm ci` instead of `npm install`; for Cargo you have to use `cargo install --locked`. Guess how many people do that!

        But that’s really a minor issue. NPM’s issues are mostly cultural, and Rust has mostly avoided them.

    1. Following up, I think that is why I’ve never REALLY made an effort to learn C.
      Just when I think I’ve understood something, I find out I was wrong!

        1. C is an incredibly complex language. From its syntax rules, to its semantic rules, to what is and isn’t undefined behaviour, the language takes a lifetime to learn to use correctly, consistently. Don’t let the lack of modern features fool you: emergent complexity is right around every corner and the language will not save you from itself.

          The most fatal problem it has is not that there are so many opportunities for undefined behaviour, but that figuring out whether your code contains undefined behaviour is not something you can answer in the negative without an intractably complex process of formally verifying your code. -Wall, fuzzing, and static analysis tools can only take you so far, and the innumerable CVEs caused specifically by C bugs are testament to the fact that these things are not sufficient.

          1. C is not complex at all. The way you have to DO things in C, because of C’s simplicity, is complex. This could be an argument for using a more complex language, so that the programmer doesn’t have to do complex things, but that would be a poor argument.

          2. any language is going to require time to learn to use it well, to get around the obvious gotchas. rust and C are no different in this regard. C’s marginal advantage is that i’ve spent 30 years learning C. sunk cost is real.

            but i find the undefined behavior fear-doubt complex kind of troubling. it’s true, the C standard leaves a lot of things undefined. it’s inevitable in such a combination of a low-level language and a portable one. and it’s also true that there is a fad in compiler development, especially in gcc and in clang, to say that “undefined” is a synonym for “wrong.” that if the standard doesn’t say exactly what should happen then the compiler should be free to do something the programmer obviously did not intend. honestly, i’m madder than heck at that fad. but also, gcc has relented somewhat (and presumably clang has too), and maybe with a commandline option like -fwrapv, you can pretty much avoid all of the real life downsides of undefined behavior. so it *is* a challenge with C, but also if the reasons you are using C are pragmatic ones, there are good pragmatic work-arounds.

            as for the upsides of undefined behavior, i am unfortunately intimately familiar with it. for most of my purposes, it is safe to assume that C has 8-bit char, 32-bit int, signed two’s complement, free conversion between intptr_t and void*, and so on. the standard allows some wiggle room, but pragmatically, you can count on it, at least for plus, minus, and times. but then divide! what is INT_MIN/-1? i kind of expect INT_MIN, but also most platforms will trigger SIGFPE (or equivalent) for it, just like x/0. i made it 28 years with C without learning that, so it truly is a little landmine. but it also means that C can use the native instruction set, it doesn’t have to prefix every divide with a test for overflow. it is an essential compromise for efficiency in a low-level language. it’s troubling but it’s not arbitrary, and specifically *it’s a hard problem*. the better solutions have huge trade-offs.

            the problem is much harder with floating point formats…i work on a target with 3 float formats in 3 sizes each, with a fourth fixed-BCD format, and support for 30 years of active CPU development. so depending on the target flags, there may be 5 different machine idioms for performing a conversion. a single CPU instruction, a short sequence of CPU instructions, punt to runtime, the whole mix. some of them will trigger a CPU exception on overflow, some will wrap, and some will force to 0 or clamp at min/max. and then the optimizer can do constant folding, but it needs to decide which one of those runtime hacks to emulate. it’s a *really* thorny problem, and if you are writing floating point code that uses multiple formats, or if you’re converting a float to an int, you can run into actually sticky situations.

            but again, there’s no other solution. a fully-defined arithmetic would demand constant checks for overflow. we wouldn’t be able to use the single-instruction conversion on the newer hardware. in the end, the complexity would force us to punt to runtime for all conversions. it would be a complete non-starter from a performance perspective.

            so i don’t know, it’s complicated. but in practice it’s not as bad as the worst case that people fear, and you really won’t find much better in a low-level language. i’d be curious to know how rust deals with undefined arithmetic but it’s simply a fact that they’ll be making trade offs, there’s no free lunch.

          3. Yeah people don’t take Rust seriously so much that they’re adding it to Linux, and Google, Microsoft and Amazon all use it now.

            The level of ignorance in these comments is truly mind blowing! Who *are* you people?

          4. I programmed primarily in C for over 30 years and on and off for another 15, and was on the C Standards committee, and have also programmed in numerous other languages, and I read the language specs of the languages I use, and in my informed view you’re talking rubbish. Again, C is *dangerous*, but that danger is not a reflection of *complexity*. It’s apparent that, despite my explicitly making the distinction, you still don’t understand it.

          5. ” there’s no other solution”

            There are other solutions. For instance, Zig has a notion of safety-checked undefined behavior, where safe (Debug or ReleaseSafe) builds include checks that undefined operations aren’t being performed, and it includes separate arithmetic operators that guarantee wrapping and saturating semantics for when that is needed.

        2. “Attitudes like this are why people don’t take Rust seriously.”

          Attitudes like what? I didn’t even say anything about Rust. And frankly, it would be plain stupid to not take Rust seriously because of some attitude by one person expressed in one comment.

  3. Personally, I’d like to see Zig used in this capacity rather than Rust. While it still has some rough edges, I like the syntax and capability better than Rust. Plus it doubles as a vanilla C compiler already.

    There’s an effort to compile the Linux kernel using LLVM. Anybody know if there’s a similar effort to compile it using Zig?

    1. According to its author Zig 1.0.0 is still at least 2 years away, and there are 130 accepted language change proposals that have not yet been implemented. Rust has a huge development community; Zig has a very small one. If Andrew Kelley were run over by a bus the language would probably die.

  4. Linux is the most successful OS in the world, written only in C and assembler. Exactly because C and ASM is the combination that enables this. Don’t mess it up by adding lamer languages.

      1. GCC allows people to bootstrap an OS onto new hardware using a minimal C compiler. By adding Rust into the kernel, one adds a dependency which assumes 2 compilers are fully operational, i.e. more porting work/risk… As such, BSD may replace Linux as the initial OS environment on some future chips.

        The Rust crowd seems desperate to drive language adoption, and from time to time will ramp up the hype/astroturfing in an attempt to get devs to care. However, unless people plan to refactor the millions of lines of C into Rust, the project’s vulnerability-coverage goal is unfeasible.

        Why not create a pure Rust kernel fork called Awesome OS? Oh right, no one would care unless their infrastructure already relied on the legacy software.

        1. Because creating a pure Rust kernel would take years, maybe even decades, to get close to what the current kernel is – it’s already got those decades of development and bug fixes.

          Whereas sliding Rust into the existing kernel might create some interesting new bugs (but that happens when introducing new C code too), you don’t have to write 20 years’ worth of code to start benefiting from the harder-to-screw-up Rust language in the kernel; it can slide in bit by bit, module by module, transparently to the user.

          1. “decades of development and bug fixes”
            Which will effectively be thrown out when porting to Rust.

            I’m not saying people’s hubris will stop them from trying, but rather expressing bemusement at the cyclical nature of these common mistakes (e.g. the second-system effect).

          2. “Which will effectively be thrown out when porting to Rust.”

            You seem confused. You asked “Why not create a pure Rust kernel fork”, which essentially would be porting Linux to Rust. The response was that that would require “decades of development and bug fixes”, as opposed to slowly adding Rust components into Linux. Only you are talking about “porting to Rust”.

      2. “I’m not convinced that C is the secret sauce that makes Linux great.”

        i’m not so closed-minded as to believe that greatness depends on C. there are many viable languages out there, each with advantages and disadvantages. and even for kernel development, some of them are palatable or even (possibly) preferable to C.

        but i just can’t resist pointing out, C and unix are almost synonymous. developed by the same people at the same time. they really do go together. maybe there is a time for them to part ways. the future always holds room for change! but the history is deeply intertwined. linux really would be nothing without C. the specificity of history is not arbitrary.

        1. It seems like the major issue people have with C is undefined behavior. There are two simple fixes for this:
          1) add keywords to C wherever there is undefined behavior, and stop using the old ones.
          2) define a new language (with a new name that is NOT Rust), with the undefined behavior fixed, and port C code to that.

          To a certain extent, the world has done a great deal of the first option, what with things like safe string and I/O functions that are meant to replace the extremely unsafe C originals, and integer types that have explicit, defined sizes.

          All C needs is some big-time deprecation of bad things that should have been eliminated decades ago.

        2. “C and unix are almost synonymous. developed by the same people at the same time. ”

          UNIX and its utilities were originally written in assembler.

          Anyway, no one is talking about them parting ways, just about introducing Rust components.

    1. I think “Linux is the most successful OS in the world,” because it was free.
      Many people (including corporations) saw it as a way to make something without being “beholden” to someone behind the curtain pulling strings.

      1. Linux is based on Unix, and Unix was hot.

        I think I knew about it in 1981, and it was presented as this great OS. Certainly I knew by 1983. By that time there was Microsoft Xenix, a legal version of Unix. There was Mark Williams Coherent, which Dennis Ritchie said was a real Unix but not copied. There were other Unix-like operating systems, both based on the source code and clones. There was Microware OS-9, which wasn’t trying to be a clone, but was multiuser, multitasking, with redirection and pipes.

        I wanted Unix from that time, but neither the software nor the hardware was within my reach. So I got a Radio Shack Color Computer in 1984 to run OS-9.

        Soon AT&T had a Unix desktop, and even a laptop.

        There was at least one magazine about Unix. There were lots of books, both about administration, and for users.

        In 1986, Richard Stallman wrote about GNU in Dr. Dobb’s. It was desirable because Unix was desirable.

        Even if Linux had been Windows-priced, it was cheaper than Unix. Nobody talks about Unix much anymore. Linux won.

  5. Rust is a different language every 3 months. Something written in Rust using the latest features today cannot be compiled with the rustc from 3 months ago. This isn’t an intrinsic problem with Rust the language, but a problem stemming from the fact that the people writing Rust are mostly bleeding-edge types who don’t care about forwards compatibility. This will be a severe problem for any Rust included in the Linux kernel for the next handful of years, but maybe in a decade or so there’ll be enough Rust coders that not all the code is forwards-incompatible and obsoleted in months.

    1. I think you’re confusing forwards-incompatible *code* with forwards-incompatible *compilers*. Forwards-incompatible code – where your old code won’t compile on new compilers – would be very bad. Fortunately, this is not an issue with Rust. You might want to read up on their “edition” system.
      The other thing you seem worried about is forwards-incompatible *compilers* – in other words, that people might need non-ancient compiler versions to compile software. This isn’t much of an issue to begin with, but it’s also solved by the edition system.

      https://doc.rust-lang.org/edition-guide/editions/index.html

      1. i can tell you from my one exposure to rust that i found forwards-incompatible code. maybe it was a fluke. maybe i did something wrong. surely, the original author did something wrong. in fact, i hope and pray that he did, because the idiom he was using was really braindead…it called out for a generic type but he didn’t use it even though rust supposedly has it. he cut and pasted an idiom 20 times for all the different types that he wanted to support in the interface, and one of those types was transient, didn’t exist in my slightly-newer rust compiler. i don’t know why.

        the claim that rust as a social-technological phenomenon does not suffer from severe forwards-incompatibility is not going to convince people who have casually used it. it is probably a surmountable problem, especially in the limited domain of kernel development.

        it’s specifically for the reason superkuh speculates. people who are using rust are into novelty, and those people are going to be using idioms so close to the bleeding edge that they get cut. this tendency must be overcome to be a stable language for kernel development.

        1. Every story I know about supposed forward incompatibility involves the use of (clearly marked) unstable features in the language, which by definition might be changed or removed. Rust’s forward compatibility guarantees only apply to stable features.

          1. yes, that’s a fine way to say, forward-incompatibility is still a rampant problem. adding and then dropping unstable interfaces has caused problems for developers, and those problems continue to this day. we’re saying the same thing.

    2. Some of the first Rust I ever wrote back in 2017 still compiles just fine today, the language does not change “every three months”. What Rust does do is editions every three years, where they can move something from stable to deprecated or deprecated to removed. But editions are opt-in rather than opt-out and older editions still get updates, the Linux kernel could simply choose to stay on the 2021 edition of Rust and never lose any of the features that might get removed in the next 3 or 6 years.

      Your statements might have been true in 2014 before Rust hit 1.0, but that hasn’t been the case in a very long time.
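      The edition mechanism is a one-line, per-crate opt-in; a minimal Cargo.toml sketch (package name invented):

```toml
[package]
name = "example"
version = "0.1.0"
# Opt-in: this crate keeps 2021-edition semantics even as newer
# editions ship, while still receiving new compiler releases.
edition = "2021"
```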

      1. it was true in 2016 when i used it.

        i mean, you said the same, “Some of the first Rust I ever wrote back in 2017 still compiles”. some of it does. some of it doesn’t.

        1. I’m… pretty sure that they weren’t using “Some” in that sense.

          I’d certainly be very curious to hear any stories you have about Rust code that no longer compiles today (so long as it wasn’t making use of unstable features at the time, which may have been changed or removed)

  6. I just don’t get it.
    At the machine-level, it’s still running machine-code (assembly), so no matter what language you use, the machine code has access to memory-locations that aren’t in the scope of the variables defined in whatever higher-level language.
    So, unless they start making assembly instructions, themselves, with limited-access, those sorts of bugs will always be possible.
    E.g. the difference between strcpy and strncpy, only being that strncpy has an additional bit of code telling it to stop copying if n is reached. But n is a variable, too, so as long as n can’t be corrupted, everything’s peachy, right? But who’s to stop n from getting corrupted?
    Seems to me, all this adding of new layers of protection is merely moving compiler errors/warnings to runtime errors, which now require proper handling that folk would be even more likely to overlook, or not even know to look for, saying “It works fine!” when it really doesn’t.

    Seems to me like the practices inspired by all this are akin to the practice of using strcpy blindly; a regression, not a progression. Strncpy was made because folk were using strcpy unaware (or unexpecting, or uncaring) of its potential flaws. So, now, we’re going to teach our new programmers to do the same?
    “Hey, but it won’t be a security hole!”
    OK, so someone writes a string cut-paste interface in a text editor whose clipboard is only 500 characters long… because they forgot to increase the buffer size after a quick test. It makes it into an end product. Great. A grad student or legal professional just cut two hundred pages, pasted it elsewhere, and lost all but the last. At that point, a virus might be less destructive. And unless they’re running that editor from the command line and watching the error messages, they’ll have no idea it even happened, until after they save their changes.
    “Oh, but it forces you to handle such cases!” Please.
    I’m not at all convinced Google’s interest is in the security it provides, as opposed to the less-experienced programmers they can hire.

    1. To a degree, yes.
      Higher-level languages don’t patch any problems in the underlying hardware itself. (With the exception of the firmware configuring the underlying hardware, that is…)

      But different high-level languages make it easier or harder to make certain mistakes/flaws. This is one of the merits behind Rust when compared against C.

      Though, I have stumbled onto many Rust programmers who swear that it inherently makes any program immune to memory attacks of any kind, and that just isn’t true in practice. If the hardware is an open book, then no high-level code will close it.

      1. “makes any program immune to memory attacks of any kind,”

        Well, if true, I have an open position for them…
        Unfortunately there is no such thing as memory attack immunity. At least you can try to avoid it, but it’s the kind of thing a “clever” language can’t handle well (i.e., why would you want to test this equality twice?).
        Even in C you have to abuse the volatile keyword and turn off a lot of optimizations.

    2. “the machine code has access to memory-locations that aren’t in the scope of the variables defined in whatever higher-level language.”

      Are you not familiar with how memory paging works on modern systems? If the machine code attempts to write or execute memory in pages that are marked read-only or no-execute, it won’t work.

      So the answer to this:

      “But, n is a variable, too, so as long as n can’t be corrupted, everything’s peachy, right? But who’s to stop n from getting corrupted?”

      is the system’s MMU.

      1. Isn’t memory paging virtual memory mapped to storage?

        But regardless.
        Yes, a lot of architectures do have methods of securing memory from unintentional writing and execution. And some architectures even have read protection too. But depending on the architecture, the security features on offer can vary drastically in how they work, if and when one interacts with them, and what they protect.

        Depending on how the data in need of protection is secured we can have various insecurities to consider.

        If it is only write protected, but we desire the information itself to be secret, then our write protection is more or less pointless, since read protection was what was desired. (Read protection typically adds access latency, so architectures optimizing for serial performance can at times skip it.)

        If it is read protected, but on a linear hierarchy, then there wouldn’t be anything stopping another process with the same or higher hierarchy level from accessing the data. (A linear hierarchy is relatively easy to implement in hardware, since it bases access rights on a single value.)

        Now, one can go further and either support branches in the hierarchy, or just handle access rights on a per-thread/process basis instead. But both of these add even more complexity and usually more access latency.

        For a while, memory security was more or less only applicable to the kernel itself; everything else ran in user space, where any thread could read and write to anything. This isn’t as much the case these days, but on x86, as an example, it isn’t trivial to “just add” security, since that throws a wrench into backwards compatibility. It is a similar story on other architectures that started out life without consideration for security.

        A simple program I use at times to prod/view variables in applications I am testing is Cheat Engine; it reads out the memory contents of whatever program one points it at, and the OS doesn’t even care… So far almost every application has been an open book that hasn’t been write protected in the slightest.

        1. Paging is the basic process of splitting memory up into pages, period. Those individual pages can have access rights and be mapped to physical memory (or effectively to storage via page faults).

          1. Could explain a thing or two. But relegating memory security to the OS is ripe for abuse, and that was what the Cheat Engine example was intended to point out, or rather how little actual security there is in practice.

            If one has an application that needs to properly secure data, then nothing other than that application and those it has accepted to have access to the data should be able to access it; not even the kernel is an exception. The kernel could deallocate it, but then the (hardware) memory management system should fully erase the data in the allocated space before anything new can access it. But I have very rarely seen such memory security practices out in the wild.

            (A hobby architecture of mine however does have access rights on a per thread/process/group basis, mainly to explore various pros and cons of having explicit hardware memory security on such a basis. Access time has been the main thing that gets slowed down, since the hardware has to check access rights, and caching the access permissions for future accesses is another fun can of worms, since permissions can change over time. Not to mention the large swath of hardware reserved memory for storing access permissions to the allocations.)

          2. “Could explain a thing or two. But relegating memory security to the OS”

            I *highly* doubt you can do it any other way. The OS can just lie. It can emulate any trusted hardware if it needs to. The only way around it would be to have the userspace stuff directly interact with hardware, and that’s a *worse* problem. Otherwise it’s just making system calls, and again, you just… lie.

            Access rights on a per-process basis? Just don’t update the process identifier when the new process is created. Something more creative? Screw it – just emulate the entire architecture in a single process!

            Some of this depends on what you call “an operating system” – there are multiple ring levels, obviously. But using Cheat Engine as an example of how there’s no memory protection is a false analogy.

            No matter what, if you can rewrite the operating system you can break memory protection. Pretty much the only way around that would be pre-shared hardware keys like old video game systems, and that’s a terrible security architecture.

          3. “But using Cheat Engine as an example of how there’s no memory protection is a false analogy.”

            It was never meant to point out that “there’s no memory protection”, just that protected memory on x86, as an example, isn’t as protected as most might expect it to be.

            But I do agree that “true” security is arguably impossible to get; a lot of current systems are lackluster at best.

          4. If you can find hardware without something akin to System Management Mode, then you might be able to truly protect memory.

            So long as there are no hardware debuggers available and your software can test that it isn’t running in a VM…

            There is still no fix for disabling the boot-up memory wipe and just fast power cycling. Best just keep important information in processor registers… but context switches save processor state on the stack. Nothing is safe if the hacker owns the hardware and has resources.

          5. “just that protected memory on X86 as an example isn’t as protected as most might expect it to be.”

            I don’t understand how it did that. It’s extremely well protected. I mean, the fact that you had to jump through Spectre/Meltdown hoops to do it indicates that, and now you can’t even do that. You just literally installed an operating system component designed to defeat it.

            It’s like saying “look, locks on your front door don’t do anything” and demonstrating it by opening a door without a lock.

    3. > Strncpy was made because folk were using strcpy unaware (or unexpecting or uncaring) of its potential flaws.

      That’s not why strncpy was made. It’s meant to copy strings to fixed-width string fields in particular data formats. It’s not a “checked strcpy”. (If it was, it would be a horrible design, what with not adding the trailing nul if the buffer size is reached.)

      1. Yeah, the main use of strncpy was for copying UNIX directory entries, which were zero-padded but not zero-terminated. It has awful semantics that have resulted in many bugs because people use it without understanding those semantics and thinking wrongly that it’s a size-checked strcpy.

  7. I’ve used Rust a little, though mostly at the “Hello, World!” level.

    What strikes me about it is that there are things where you have to be much more explicit than you do in C, just to make sure that everything is really handled correctly. In C, you wouldn’t have to be that explicit, and as long as you’re working on a PDP-11, it’ll all work the way you expect, unless you’ve got the compiler optimization cranked up too high.
    K&R says that C will convert types fairly easily, “though not with the wild abandon of PL/I” (for my fellow old people), and doing that means sometimes the amount of memory allocated for things is wrong.

    Memory safety’s valuable – C is too often willing to allow use-after-free bugs, as well as running past the ends of arrays (though the string libraries in most languages do a good job of preventing that, if you use them instead of really friendly C idioms like “while (*a++ = *b++) ;”)

  8. Hm. I’ve never thought of C as a low-level language, rather contrary. That would have been Assembly language.
    I thought it was a high-level language, just like Pascal and BASIC.

    1. I don’t think it’s useful to call something “low-level” or “high-level”. You can say that X is lower/higher level than Y, but there are no absolute “low” and “high” buckets that things fall into.

    2. It’s all relative. There are languages that are much higher level than C … or Pascal or BASIC, all of which are ancient (although there was at least one much higher level language in ancient times: CPL, from which C is remotely derived).

  9. How about the Nim language? It compiles directly to C. Honestly I think this should be a requirement for a kernel language. C is very simple and easy to write compilers for. C compilers are well researched. If one day we’re feeling fancy, we can write a direct Nim compiler.

    It also appears friendlier to new programmers, who may be familiar with more Python-esque programming. I think Nim is a great tool for this!

  10. A better approach would be to write the driver in user space in any languages (like many usb device drivers) instead of making rust unsafe and shoving it into the kernel.

    I would prefer that rust not be merged into the kernel mainline.

    1. “A better approach would be to write the driver in user space in any languages”

      That’s simply not possible without a massive performance hit in many cases. USB 2.0 userspace drivers are fine because the performance needs are so low.

  11. Everyone knows that systems programming is all about STRINGs and safe pointer manipulation!

    Really, it has been years since the C++ proposal for the kernel was issued and got negative feedback.
    If it had been adopted back then, almost 90% of bugs could have been fixed easily. The remaining 10% are due to CPU manufacturers.

    Now everyone is looking for a fix, but really C++ is the way to go. That’s where most qualified system-level devs are. I would take it badly if an unproven language took its place instead. The majority of qualified people would leave and fork, replaced by noobs.

    Also remember: nowadays, no Linux, no internet!

    1. From what I remember, Torvalds is opposed to C++’s abstractions because too many of them are either tied heavily to userland assumptions or intertwined with deal-breaking design decisions like the use of exceptions or other such things.

      That hasn’t changed, while Rust made decisions more favourable to what Torvalds considers suitable, such as the use of tagged unions for idiomatic error returns instead of exceptions, and the inability to accidentally make a struct no longer POD, since dynamic dispatch isn’t a property of the struct and is handled through fat pointers.

      That said, his approval was still conditional on the team in question reworking allocation for any standard library types they keep so it doesn’t panic on failure.

  12. The joys of shooting yourself in the foot with C should not be overlooked. How many people here had their first malloc’d pointers go wrong by writing straight through the frame buffer?
