Need A New Programming Language? Try Zig

Maybe you’ve heard of it, maybe you haven’t. Zig is a new programming language that seems to be growing in popularity. Let’s do a quick dive into what it is, why it’s unique, and what sort of things you would use it for. (Ed Note: Other than “for great justice“, naturally.)

What Is It?

You’ve likely heard of Rust, as it has made significant inroads in critical low-level infrastructure such as operating systems and embedded microcontrollers. As a gross oversimplification: it offers memory safety, with many checks that are traditionally done at runtime pushed to compile time. It has been the darling of many posts here at Hackaday, as it offers some unique advantages. With Rust on the rise, it makes sense that there might be space for some new players. Languages like Julia, Go, Swift, and even Racket are all relative newcomers vying for the highly coveted mindshare of software engineers everywhere.

So let’s talk Zig. In a broad sense, Zig is really trying to provide some of the safety of Rust with the simplicity and ease of C. It touts a few core features such as:

  • No hidden control flow
  • No hidden memory allocations
  • No preprocessor, no macros
  • First-class support for optional standard library
  • Interoperable by design
  • Adjustable runtime safety
  • Compile-time code execution

The last one, in particular, is perhaps the most interesting, but we’ll come back to that. Let’s look at some code, but let’s skip past hello world and head straight to opening a file. Here’s the C++ code:

#include <iostream>
#include <fstream>
#include <string>

using namespace std;
int main (int argc, char const *argv[]) {
  ifstream file("nonexistingfile.txt");

  char buffer[1024];
  file.read(buffer, sizeof(buffer));
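  // note: neither the open nor the read is checked; if the file is
  // missing, read() quietly does nothing and buffer is never filled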

  cout << buffer << endl;

  file.close();
  return 0;
}

Now let’s look at some comparable Zig code:

const std = @import("std");

usingnamespace std.fs;

pub fn main() !void {
    const stdout = std.io.getStdOut().writer();

    const file = try cwd().openFile(
        "nonexistingfile.txt",
        .{ .read = true },
    );
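    // defer schedules file.close() to run when main() returns, on any path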
    defer file.close();

    var buffer: [1024]u8 = undefined;
    const size = try file.readAll(buffer[0..]);

    try stdout.writeAll(buffer[0..size]);
}

(Thanks to Erik Engheim for the C++ and Zig sample code.)

As you might have guessed from the file name, the file doesn’t exist. The C++ code doesn’t explicitly check for any errors, and in this scenario it is perfectly valid code that displays no indication that anything failed. In Zig, on the other hand, we have to use a try, since opening that file could fail. When it does fail, you get a nice stack trace:

error: FileNotFound
/usr/local/Cellar/zig/0.7.0/lib/zig/std/os.zig:1196:23: 0x10b3ba52e in std.os.openatZ (fileopen)
            ENOENT => return error.FileNotFound,
                      ^
/usr/local/Cellar/zig/0.7.0/lib/zig/std/fs.zig:754:13: 0x10b3b857e in std.fs.Dir.openFileZ (fileopen)
            try os.openatZ(self.fd, sub_path, os_flags, 0);
            ^
/usr/local/Cellar/zig/0.7.0/lib/zig/std/fs.zig:687:9: 0x10b3b6c4b in std.fs.Dir.openFile (fileopen)
        return self.openFileZ(&path_c, flags);
        ^
~/Development/Zig/fileopen.zig:8:18: 0x10b3b6810 in main (fileopen)
    const file = try cwd().openFile(

Removing the try results in a compilation error. The backtrace here is especially impressive because this is a relatively simple language without a garbage collector, runtime, or virtual machine.
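If you’d rather handle a failure in place than pass it up the call stack, Zig’s catch keyword covers that too. Here’s a minimal sketch of the same scenario, inside a main like the one above (std.debug.print assumed available in your Zig version):

const file = std.fs.cwd().openFile(
    "nonexistingfile.txt",
    .{ .read = true },
) catch |err| {
    std.debug.print("open failed: {}\n", .{err});
    return;
};
defer file.close();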

Let’s talk about some of Zig’s other features: interoperable by design, adjustable runtime safety, and compile-time code execution.

“Interoperable by design” means that ordinary Zig is easily consumed by C and, in turn, consumes C. In many other languages, such as Python, you need to specifically marshal data for C and C++ interoperability. Zig can include C files directly in the main code by virtue of the built-in Clang compiler. The output of a Zig library is a .o file that can be fed right into GCC. Functions can be made callable from C code by simply prepending export to the function definition. Structs and datatypes are similarly easy.
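As a minimal sketch of the export side (the file name and build command here are just one plausible route):

// add.zig: build an object file with "zig build-obj add.zig", then link
// the resulting add.o from C like any other object file
export fn add(a: i32, b: i32) i32 {
    return a + b;
}

On the C side, add is then just an ordinary declaration: int add(int a, int b);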

“Adjustable runtime safety” means that many of Zig’s runtime checks can be turned on or off depending on the application: integer overflow checks, bounds checking, unreachable-code checks, and others.
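As a hedged sketch, safety can even be toggled per scope with the @setRuntimeSafety builtin (the function name here is made up):

fn bumpUnchecked(x: u8) u8 {
    // In safety-checked builds (Debug, ReleaseSafe) the addition below
    // would panic on overflow; with safety off, overflow is undefined
    // behavior, just as in C.
    @setRuntimeSafety(false);
    return x + 1;
}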

You might notice in some code you’ve seen that there’s a keyword in Zig known as comptime. You can use it on function parameters and within the program itself. It means that the value must be computable at compile time, and it can be used to implement a form of generics or templates. This is a pretty powerful feature that can be used in interesting ways.
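For a taste of how that enables generics, here’s a minimal sketch along the lines of the example in the language reference; the type itself is an ordinary parameter, resolved at compile time:

fn max(comptime T: type, a: T, b: T) T {
    return if (a > b) a else b;
}

// max(i32, 3, 7) and max(f64, 1.5, 2.5) each stamp out a concrete function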

What Would You Use It For?

Since Zig is LLVM-based, it can target a wide range of architectures:

  • x86_64
  • ARM/ARM64
  • MIPS
  • PowerPC
  • WASM32
  • RISCV64
  • SPARC v9

and operating systems and environments:

  • Linux
  • MacOS
  • Windows
  • FreeBSD
  • DragonFly
  • UEFI

Given that it interoperates with C so smoothly, it’s quite simple to swap out small chunks or libraries for Zig equivalents.

Additionally, Zig can be used on microcontrollers. As a bit of a cherry-picked example, [Kevin Lynagh] recently went through the journey of converting his keyboard firmware from Rust to Zig. Several well-known Rust constructs, such as feature flags, macros, and pattern matching, are used to initialize and scan ports for key presses. In Zig, these are replaced by inline for, a for loop that is unrolled at compile time, and some clever use of comptime; a rough sketch follows below. In particular, [Kevin] points out the consistency of the language, and that it feels like a language he could actually master.
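Here’s that sketch (the pin numbers and the configurePin helper are hypothetical):

const pins = [_]u8{ 2, 3, 7 };

fn configureAll() void {
    // inline for is unrolled at compile time: one configurePin call per
    // pin, with no runtime loop counter left in the generated code
    inline for (pins) |pin| {
        configurePin(pin); // hypothetical hardware helper
    }
}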

If you’re looking for inspiration, there’s a GitHub repo with hundreds of excellent examples written in Zig: Game Boy emulators, HTTP/DNS servers, ray tracers, several kernels and bootloaders, databases, and compilers.

How Can I Get Started?

There’s a learning section on Zig’s homepage, as well as the site ziglearn.org, that is chock-full of great resources. Ziglings is a GitHub project full of small broken programs that need small tweaks to get working again, letting you get a feel for Zig. And maybe just dipping your toes in the water isn’t enough, and you want to dive into the deep end of the language implementation itself.

130 thoughts on “Need A New Programming Language? Try Zig”

      1. I hate it more.

        It wishes it didn’t have a preprocessor, but it does have compile-time code. So it just doesn’t have preprocessor conveniences.

        And it wants to interoperate, which is nice, but it is stuck to LLVM, so I can’t use it in all the same places as C. I’d still need C, because C is portable, not just interoperable.

        And it is enough like C that that would be confusing.

        1. Preprocessors don’t have conveniences relative to compile-time code, which can feature all of the expressiveness and safety properties of the host language without having to learn a second language that fits poorly with the language you actually want to program in. Having only one language also means writing and maintaining only one set of tools: parsers, semantic analyzers, code completion tools, etc. There’s simply no way preprocessors have any meaningful conveniences over all of these benefits.

          Re: LLVM, that is a limitation, but having to resort to C when you need to does not mean you have to use it when you don’t need to.

      1. Only if your focus is minimal programs. Based only on Hello World, BASIC would look impressively terse, as would other zero-boilerplate languages. When you start trying to do real work, you discover the real costs and efficiencies and expressiveness, and the boilerplate becomes insignificant.

  1. Instead of reinventing C/C++, why not just add on top of it? They could easily add these additional functions to the open source compilers with a fork: enforce try on anything that can fail, and add things like “inline for” to the compiler.

        1. Not sure if you’ve heard, but we’re not programming in assembly where we need to actually deal with pointers. We have these fancy things called “compilers” that expose better and safer concepts now.

          1. Not sure you’ve heard but a lot of us actually are still programming in Assembly because the best C compilers don’t necessarily optimize for hardware-specific feature sets.

          2. Assembly isn’t a language. You don’t “program in assembly,” you “program in XXX assembly.” And even that distinction is mostly silly. You might as well say “I write code directly for XXX rather than using a language.”

            CPUs (and thus assembly, at least without extensions) don’t actually have pointers, for instance. They have addresses, which are different: pointers *know what they’re pointing to*. So when you increment them, they increment correctly. If I create a 1024-byte array that I’m trying to store 32-bit values in, I have to increment that address differently than if I’m using it to store 8-bit values in. With pointers you just add, and the language figures it out.

            Same thing with taking the address of something. I can’t “get a pointer” to something in assembly, because everything is either a register (which has no address) or an address already. The reason why getting a pointer to something *exists* in C is because you have a stack, and function contexts. Those are things which don’t really (natively) exist.

            The whole point of a language is to give you abstractions beyond the machine itself, and C does that. It just doesn’t do it particularly safely.
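            (For what it’s worth, Zig keeps that same distinction; a minimal sketch of the scaling behavior, with made-up names:)

            fn secondWord() u32 {
                var words = [_]u32{ 10, 20, 30, 40 };
                var p: [*]u32 = &words;
                p += 1; // advances by @sizeOf(u32) = 4 bytes, to words[1]
                return p[0]; // 20
            }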

        2. Actually, you’re in for a shock when you realize that pointers are an abstraction. They associate behavior with a memory address, and there exists no such concept in your hardware. The only instructions that processors even have for reasoning about addresses are jumps and branches. There exists no “dereference” instruction.

          The actual problem with C and C++ is that they are based upon a fundamentally and objectively incorrect programming model. You think, because you are probably new to coding, that there exists a concept of “stack” and “heap” memory in your RAM, and that pointers are a means of addressing heap memory. But the reality is that these are imaginary concepts invented by programmers because they thought it would make coding more convenient. While it is often convenient, it is also highly abstract and results in negative performance characteristics. It is much more efficient to reason about only global shared contiguous memory buffers and stack allocated memory, because these imaginary constructs are conducive to cache-friendly data-oriented programming.

          C and C++ are wrong because, idiomatically, they make this impossible. As soon as you call malloc(), you wrote poor quality code. As soon as you use a library that calls malloc(), you wrote poor quality code. As soon as you didn’t handle memory allocation failure or your library doesn’t, you wrote poor quality code. As soon as you attempt to handle errors in C or C++, you’ve written unmaintainable code because there is no standard and composable way to do this in those languages other than C++ exceptions, which have hidden control flow and negative performance characteristics. Plus users just compile -fno-exceptions in release mode anyways.

          If you can write some C or C++ that doesn’t suck, I’ll be amazed. But you can’t. Nor can anyone else on any “real” codebase outside the domain of bare-metal programming. Zig solves this.

          1. A lot of my hardware has a stack pointer.

            This is true if you’re using a memory address, or naming the register in asm.

            “If you can write some C or C++ that doesn’t suck, I’ll be amazed.”

            You’d be amazed that almost every hardware peripheral in most microcontrollers uses registers as pointers in combination with a DMA, so why wouldn’t you be amazed by C, too?

            It might be advisable to tone down the jerk “I’M ABSOLUTELY RIGHT” level if you’re going to try to make a case for Zig. Or anything.

          2. > Actually, you’re in for a shock when you realize that pointers are an abstraction. They associate behavior with a memory address, and there exists no such concept in your hardware

            Allow me to introduce you to indirect addressing modes, the addressing modes you’ll use most often.

            > The actual problem with C and C++ is that they are based upon a fundamentally and objectively incorrect programming model.

            Ah yes. Dennis Ritchie and Ken Thompson, notorious for not understanding how computers work. /s

            > because you are probably new to coding

            Ignorant assumptions compound technical ignorance. I used to teach classes on digital circuit design, Assembly, and C.

            > there exists a concept of “stack” and “heap” memory in your RAM, and that pointers are a means of addressing heap memory

            I don’t think that. I don’t know anyone who does. I recommend taking an introductory C programming class at any reputable school if you think this is a common belief.

            > But the reality is that these are imaginary concepts invented by programmers because they thought it would make coding more convenient. While it is often convenient, it is also highly abstract and results in negative performance characteristics.

            Except that it is a 1:1 model of how memory management works in most mainstream CPU architectures.

            > As soon as you call malloc(), you wrote poor quality code. As soon as you use a library that calls malloc(), you wrote poor quality code.

            Ah yes, Linux is such poor quality code. Windows too. And Unix and MacOS. I trust you’re using only software that never allocates memory, and not being hypocritical, of course.

            > Plus users just compile -fno-exceptions in release mode anyways.

            Again, if you believe this is good or even common practice I recommend seeking professional instruction on C programming. Good luck passing a MISRA audit pulling that.

            > If you can write some C or C++ that doesn’t suck, I’ll be amazed. But you can’t. Nor can anyone else on any “real” codebase outside the domain of bare-metal programming.

            If you are unable to write code that doesn’t suck in _any_ language, you don’t know how to program. I recommend seeking a basic education in programming before declaring 99%+ of all code ever written “poor quality” simply because you don’t understand how it works. I know indirect memory addressing, paging, and virtualization are complicated concepts, but I’ve taught enough students that I am 100% confident you can learn them too.

            > Zig solves this.

            Considering a major selling point of Zig is its interop with C, I’m skeptical. The same promises are made with Rust, and yet again a major selling point of Rust is that it can interop with C. No language can prevent bad code.

            Unless you’re running the code in a custom VM a la Java, you’re ultimately compiling to native machine code that uses the same instruction set anyway. If you think a language’s feature can go beyond what a CPU’s instruction set is capable of, you’re delusional.

            Show me some evidence that a Zig program outputs different machine code than a C program and I’ll take these sorts of claims seriously.

          3. I agree. Those who equate C pointers to indirect addressing in assembly code have forgotten being new to C and struggling with the ridiculous ways of dereferencing something. Just look at any piece of C that uses registers for GPIO to see computer gibberish. How about something that looks like functions called fetch() and store()? A compiler will turn them into single instructions. Yes, it’s a security nightmare if there is a network connection, but there are so many security nightmares that what is one more?

            MicroPython and others allow inline assembly code. Why all the Python hate in the comments?

          4. Among those of us who wrote lots of assembly code, K&R C was taken up with enthusiasm. (Maybe not by those that thought octal and hex were communist plots but that’s another story…) K&R could be compiled in your head because most of it was directly translatable to machine instructions. The small portion that couldn’t was translatable to macros that were commonly used for any assembly code program of non-trivial size. K&R compiled to assembly which was then run through the assembler. In the beginning we would go over the generated assembly. After a while you realized that the generated code was pretty close to what you would have written.

            That was K&R C. The modern C language is a different animal.

      1. Please don’t lump C and C++ together. By now they are almost, but not quite, entirely different languages. Ever since C++11 (a whole decade ago), C++ no longer relies on pointers. If you are still using pointers, you are not writing C++.

        Which brings me to a small rant: The given “C++” example is as much C++ as K&R C is contemporary C.

        The example could trivially be rewritten to examine the state of the file after opening it and fail accordingly. Choosing not to do so is a deliberate choice by the programmer.

          1. … which means nothing.

            I find it quite impressive how C++ has become a modern and powerful language while still maintaining compatibility with old and shoddily written code.

            So stop blaming the tool for incompetent programmers that keep on abusing it in ways that have been deprecated for 10+ years and mostly exist to maintain compatibility with that old code.

        1. The features in the language were chosen by its authors, so why are they in there if they are bad? Why do we need two language specifications, one for the actual language and another for the “safe” subset? Why can’t you rip the crap out of C++ instead of saying “don’t use it; there are no compiler warnings, but it’s still bad”?

          1. As the old saw goes, everybody agrees that most of C++ is horrible, but they can’t agree which parts.

            I port C++ to C quite regularly (allergies), but personally I don’t think they’re very different.

    1. Julia matches or exceeds the performance of C and numpy in many situations. It can also wrap libraries from R, Python, C, C++, and CUDA. I hope the Genie web framework will become more popular, but like all awesome things it is likely a long way from mainstream production environments.

      Next to the Erlang Elixir project, Julia is the only truly innovative high-abstraction-syntax language I have seen in decades that actually solves some real world practical needs efficiently.

      Go is also interesting, but it’s hard to build a skilled team, just like with Erlang…
      ;-)

    2. Enforcing try on anything that can fail means practically everything. C/C++ has enough warts and inconsistencies that it’s better to start over. Zig’s integration with C is so good that it’s an even better C compiler than most others. Its cross-compilation support has no equal.

    3. Modern C++ is pretty great, and somehow it manages to be less complex than Rust. Maybe we just need a subset of C++ and a compiler that asserts a basic level of memory safety. Or maybe I’m just bitter because rustc never accepts my code…

    4. They’ve been adding on top of C++ for decades now.

      I was asked to join an open source project to add a feature. My contribution to the project was just a single class, but to actually build the project I needed 2 different package managers, the latest version of CMake, and a 2-hour tutorial on “metaprogramming.” Modern C++ is worse than Node.js. I like C++ when it is “C with classes”, but all these crazy modern extensions just make the language unusable. Implementing my feature took less than 10 minutes of coding; getting modern C++ to actually build took a week, and then a package manager update broke it a week later, so I just left the project entirely.

      The irony is that all this modern C++ malarkey is just C++ trying to be like C#. I recreated the whole application in C# in a week. It built on any Windows or Linux system with dotnet installed with a single command.

      Don’t add on top of good languages, trying to make them into something they’re not. Make new ones designed to be what you want them to be.

    5. The thing about C and C++ is that they were invented long, long ago, before the internet and before the designs of other languages (such as those that implement JIT) came about, by some very clever people such as Bjarne Stroustrup.

      But one of the problems, for a few decades now, has been that you can’t change the syntax without breaking backwards compatibility; you can only add more stuff onto it. So while it was very good at the time it was invented, it has ended up a bit of a mess trying to keep up with more modern languages and features.

      The professional approach is to use whichever language is better suited to a particular use case, such as something high level for a gui or web based app and something more low level for a kernel driver. That being said folks always gravitate towards what they’re most familiar with even when it’s not the best fit.

      One big difference is the difference between a statically typed and a dynamically typed language.
      Because C and C++ use memory pointers but assume the thing using the pointer knows what it points at without always checking, they are sometimes described as weakly typed, or more of a hybrid approach.

      With static typing, the compiler keeps track of the type of a pointer (what it represents) as well as the fact that it is a pointer (as with C#). So bugs like stupid pointer tricks that would create a buffer overflow, for example, get picked up as part of the compile process instead of by an analysis tool further down the line.

      Another example of static vs dynamic typing would be JavaScript (dynamic) vs TypeScript (static): in JavaScript you can pass in an integer, string, or whatever as part of a property and not care, but TypeScript forces you to define what something should be on the property rather than just accepting that it could be anything.
      Dynamic typing is where a lot of bugs tend to show up, but it can be useful in certain cases (as with Python), allowing you to write less code.

    1. From https://ziglang.org/learn/overview/

      Speaking of performance, Zig is faster than C.

      * The reference implementation uses LLVM as a backend for state of the art optimizations.
      * What other projects call “Link Time Optimization” Zig does automatically.
      * For native targets, advanced CPU features are enabled (-march=native), thanks to the fact that Cross-compiling is a first-class use case.
      * Carefully chosen undefined behavior. For example, in Zig both signed and unsigned integers have undefined behavior on overflow, contrasted to only signed integers in C. This facilitates optimizations that are not available in C.
      * Zig directly exposes a SIMD vector type, making it easy to write portable vectorized code.
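      A minimal sketch of that last bullet (std.meta.Vector is the 0.7-era spelling; newer Zig calls it @Vector):

      const std = @import("std");
      const Vec4 = std.meta.Vector(4, f32);

      fn addVecs(a: Vec4, b: Vec4) Vec4 {
          return a + b; // elementwise add, lowered to SIMD where available
      }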

  2. Not impressed. It is interesting to me that we just keep trying to re-invent the wheel, so to speak. We have perfectly good and validated languages that can help us solve (and have solved) real-world problems for many years. It really gets annoying :D to see the fragmentation into lots of different languages (R, Ruby, Julia, Go, etc.) that only seem to belong in the realm of academia…

    Readability… Just glancing at the code above, it seems harder to read. Write a similar application in Python or C and see the difference. Even the C++ code isn’t that easy, with streams (<<) and such. The idea should be to ‘keep it simple’, not more complex. That is one reason that Python took off in our company as a scripting language. It doesn’t take a computer science major to understand and use it, yet it can be as powerful as you wish in the context of an object-oriented scripting language. Just about anyone can glance at it and see what is going on. You have to try real hard to make Python hard to read (and yes, it can be done). So much easier to maintain across the board. Only the most demanding tasks will require jumping to a compiled language like C/C++. Rust seems to have the most promise of becoming a new language in the programmer’s toolbox, in my mind. Right now, between Python, C/C++, JavaScript, and a little Assembly sprinkled in, you have all the bases covered for any application a person would need/want to write nowadays… unless you are a database person that needs some SQL too :) .

    1. “Language fragmentation” is code for “I’m too lazy to learn new stuff”. These languages all have unique and valuable features that increase productivity, but you are content with the old stuff that slows you down.

      1. There’s two sides to that.

        If I can write an application in C in a single day, versus spending a few weeks setting up a Zig-Rust-Go-Laravel stack, working out all the incompatibilities and issues introduced by a massive stack of new and unproven technologies, debugging across multiple incompatible contexts, and then documenting it all across multiple standards of documentation, is it really worth the effort?

        If you’re a consultant paid by the hour to write piles of code nobody knows how to maintain, that’s fine. If you’re a serious professional developer who needs results ASAP you use the best tool for the job. If that happens to be Zig, sure. Using a new tool because it’s newer is not engineering. It’s marketing.

        1. “Using a new tool because it’s newer is not engineering.”

          Exactly. Who speaks Esperanto? After all, it was the new and improved international language of 1887, so it’s had plenty of time to catch on.

    2. “the fragmentation into lots of different languages (R, Ruby, Julia, Go, etc.) that only seem to belong in the realm of academia”

      i have mixed feelings about the fragmentation and needless diversity. and i’m honestly not sure too many of these languages actually do any good at their stated goals. so largely i agree with you, but i just wanted to point out… these are not just academic. a lot of these languages come from industry, or from unaffiliated hackers. it seems a lot of people are trying to satisfy real needs in their environments, and they have the confidence (or arrogant disdain) to think they should start over instead of adopting something pre-existing. academia is only intermittently involved.

      1. “academia is only intermittently involved.”

        WHAT????
        You mean that inventing an entirely new language is not a requirement for Computer Scientists to get their Ph.D.????

        B^)

        (a Math professor once told me that at one time, in order for a doctoral candidate to prove that they really understood their study, they had to develop a new system of math(s))

        1. Ruby is a popular industrial systems language in Japan, most of the international interest has been in web dev. Not very academic.

          R is mostly used in business statistics. Alternatives seem to be favored… by academic users.

          The observation that “a lot of these languages come from industry” is very valid. In the case of Ruby, it was directly sponsored by some of the largest heavy industrial manufacturers in the world.

          Golang is quite popular in industry too, I’ve worked with it in finance where it has a lot of advantages over other languages. It is like a modern COBOL in that sense.

    3. I guess the readability is going to be very different for different persons.
      For me it is more readable than C and as another example Kotlin is *extremely* more readable for me than Java.

      I can take in higher complexity, but I absolutely can’t take long code. I despise C with its null checks, error checks, and other clutter.

    4. Just to warn you – I am not a programmer/coder/software developer. But as far as I understand, most languages are designed to effectively solve certain types of problems under certain conditions. In time, conditions change and so do the types of problems. This is why we still develop new languages. It is like with hammers – there are many types of them, and although all can be used with nails, some will do better with different types of nails.

    5. You’re explaining that you don’t want people to create new languages, while simultaneously describing your company adopting Python because it was uniquely simple but also powerful as a scripting language.
      I would have thought that by your standards, Python is just a pointless language that only seems to belong in the realm of academia… yet another example of reinventing the wheel. Or is it just that the languages you know how to use are good, and everyone else is just fragmenting things for no reason?

  3. After watching the senseless “void main(void)” for about 30 years, I discovered a very interesting declaration in the snippet of code above: “!void” – this is a very fruitful idea!
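    (For anyone else puzzling over it: !void is an error union return type, i.e. the function returns either nothing or an error, and try propagates the error upward. A tiny sketch with a made-up error name:)

    fn mightFail(ok: bool) !void {
        if (!ok) return error.Oops;
    }

    pub fn main() !void {
        try mightFail(true); // had it failed, main would return the error
    }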

  4. Can someone please explain to me why new programming languages are bad and why we should stick with C++? Please tell me why it is worthwhile to have pointer bugs, unchecked arithmetic, and use-after-free bugs.

    1. Bugs are bugs and bad programming practice is just that. No amount of syntactic sugar is going to fix that reality.

      I have profiled many C++ programs with no memory leaks and many that do have them. I have also seen Java, C#, etc… programs that can bring an entire environment to its knees and those that run well.

      It isn’t the tool. It’s the person using the tool. Use the right tool for the job. Sometimes it is ASM other times it is Java or Python. It all depends on the task and context.

      1. > Bugs are bugs and bad programming practice is just that. No amount of syntactic sugar is going to fix that reality.

        So surgeons should just use machetes or katanas while operating on your brain or heart right? If you’re right, then the user’s skill is all that matters.

        1. Now you are comparing hardware to software, and objects that are very different in design, build, and size. With different languages you are still compiling to machine code and still running it on the same hardware; if you use different tools to get to the compiled version, you are still compiling and running machine code on the hardware. If you write a program badly in any language it won’t run well, or at all; if you write a program well then it will run well, doesn’t matter which language you choose.

          1. > if you write a program well then it will run well, doesn’t matter which language you choose.

            And if you perform surgery well with a machete then I guess it didn’t matter which surgical tool you used either. What do you think the likelihood is of performing a successful surgery with a machete though?

            The real question is what tools let you perform surgery better and with fewer errors, or analogously, what tools let you write better programs that have fewer errors? Programmers seem to be the only irrationally ardent defenders of poor tools that carry high likelihoods of poor outcomes, and it still boggles my mind.

      2. “I have profiled many C++ programs with no memory leaks”

        So have I! But they were all toys! The challenge is to use it in the actual real world with actual human developers in an actual development environment, in which case you always end up with all the usual C++ failures; again, consult the CVE database for proof.

    2. Um.. I don’t think we should stick with C++. I don’t think that’s what the majority of people here are saying. I don’t know why the author chose C++ as the counter-example, but I think it’s disingenuous to insinuate that because it’s written in C++, it must be riddled with pointer bugs, unchecked arithmetic, and use-after-free bugs. Good coding standards and discipline can eliminate all of those without help from the compiler.

      FWIW, I love C, but detest C++. I think I can learn to like Zig, but already strongly dislike Rust.

      Being that Zig integrates so well with C and shares a good deal of its syntax and conventions, I see it as a better candidate for inclusion in the Linux kernel than Rust. It just needs more advocacy in the LKML, IMHO; and more examples like https://github.com/Mic92/zig.ko

      1. “it’s disingenuous to insinuate that because it’s written in C++, it must be riddled with pointer bugs, unchecked arithmetic, and use-after-free bugs.”

        Why? This is quite literally true of every C++ program. Can you name a single significant piece of software that does not have dozens or hundreds of CVE entries?

    3. A few reasons why new languages are worse than C++:
      – immature tools
      – moving target
      – small developer pool
      – unclear future
      – undiscovered weaknesses in the basic ideas

      So, if what you are working on is small, or one off, or it just happens that you are the developer of the new language, all this is not a problem. Otherwise, you should stop and think calmly about it.

      1. Zig is already a better C compiler than most C compilers in existence. Yes, the Zig compiler can literally compile C code, and even better, it can *cross-compile* C code trivially from any platform to any platform. Also, compile-time code execution means the language and its tooling are simpler than C/C++, because you eliminate the separate template/macro languages: no separate parsers or semantic analyzers, and only one code completion tool to build.

        Zig is honestly pretty promising as a C replacement.

    1. yeah! this is what i came to say. and just below that, it says “no runtime”, but the example is literally hidden control flow into a runtime error handler. maybe the write-up is just bad… i mean, clearly they’re trying to say “it’s not java or python”, but such an inaccurate description is not turning me on to it.

      i can’t tell if zig is worth anything or not…it thinks it’s the anti-C++ but from this little glimpse it looks to me like the apple falls too close to the C++ tree. hidden control flow isn’t necessarily evil but if your goal is to eliminate it and you instead formalized it, it’s hard for me to ignore the parallels with how C++ was born.

      my biggest problem with the proliferation of languages is that they’re all half-baked….what we need is the language that got it right after learning from so many others. honestly, apart from the wild verbosity (and atrocious OOP patterns that are common at many of the big development houses), java is the only language i think exemplifies learning from the mistakes of previous languages. most languages like rust just repeat the C++ cycle…the advantage isn’t that it’s well-thought-out but that it’s too young to be crufty…by the time it’s useful, it’s picked up cruft, inconsistencies, bizarre syntax hacks, just like the rest.

      and i don’t mean to imply it’s easy. i’ve made a few languages and they’ve either grown awful as i hacked onto them or they had such trivial mission statements they couldn’t help but succeed. i definitely couldn’t do better than these clowns myself…

        1. So some function that uses multiple different functions will return an error-code mess to the higher level? And how will the higher level deal with it? Will it have to use some stack-trace magic to figure out what’s wrong with the args? Say both open() and write() could return EINVAL or EPERM. A function, say write_file(), that uses both open() and write() will return EPERM. So if write_file() returns EPERM, how do you figure out what the problem is? Looks like a direct road to hell if that coding style is considered acceptable.

          1. “That coding style” is just C style. You know, return codes to signal errors, bog standard stuff. If returning the same error code would be ambiguous, then disambiguate. This advice is no different between C and Zig, but the latter has a much better story around error handling. Just try reading the docs.
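            A minimal sketch of that disambiguation (the error set and function are made up); note that “try f()” is just shorthand for “f() catch |err| return err”:

            const WriteFileError = error{ AccessDenied, DiskFull };

            // The signature names exactly which errors can come back, so
            // the caller can switch on them instead of guessing at an errno
            fn writeFile(path: []const u8) WriteFileError!void {
                _ = path; // hypothetical: always fails for this example
                return error.DiskFull;
            }

            // writeFile("log.txt") catch |err| switch (err) {
            //     error.AccessDenied => ..., // each case handled distinctly
            //     error.DiskFull => ...,
            // };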

        2. what an astoundingly uncompelling answer. not sure who you are, Sandro, to be evangelizing zig, but you’re accomplishing the opposite effect.

          the difference between the answer “the hidden control flow is simple” and “there is no hidden control flow” is important, and you just stated both. similarly, “there’s no runtime error handler” versus “here’s how the runtime error handler is reached” is another crucial difference. and – my god, man – the difference between “there is no error bubbling” and “here’s the simple mechanism by which errors bubble”…!! you can’t state both. i don’t know who you are but this kind of insisting-it’s-what-you’re-saying-it-is thought process is a very clear way to convince the audience that zig hasn’t actually got anything figured out.

          the funny thing is, if the same language was presented differently, i’d have a very different feeling. “look how simple my runtime error handler is!” and “look at this great idiom for explicit error bubbling”. it may very well be unfair to zig that it is being presented this way.

          1. > the difference between the answer “the hidden control flow is simple” and “there is no hidden control flow” is important, and you just stated both.

            No, there is no hidden control flow. “try” is a standard language keyword for control-flow with a semantics whose behaviour I described [1]. You might as well call “break” and “continue” in C hidden control flow because they’re semantically equivalent to goto.

            > similarly, “there’s no runtime error handler” versus “here’s how the runtime error handler is reached” is another crucial difference.

            I agree, and there is no runtime error handler. Not sure how to be clearer on that. Unless you’re suggesting that C’s assert() is a runtime error handler? If so, then what you mean by “runtime error handler” is not what everyone else means.

            > the difference between “there is no error bubbling” and “here’s the simple mechanism by which errors bubble”…!! you can’t state both.

            I didn’t, I said “there is no error bubbling *as with exceptions*”. And that’s literally true, as Grawp has already explained: error propagation is not automatic in Zig, it is purely opt-in by using “try”, and even then to the immediate caller only. This is not remotely like exceptions.

            You are of course free to get your panties in a twist and avoid Zig simply because I corrected your two misunderstandings and pointed you to documentation that explains it. Just don’t pretend like the mistakes weren’t yours and misquote or twist my words to try and make me some bad guy simply because you didn’t read about how the language actually works before making claims about it.

            [1] https://ziglang.org/documentation/master/#try

          2. > Not sure how to be clearer on that. Unless you’re suggesting that C’s assert() is a runtime error handler? If so, then what you mean by “runtime error handler” is not what everyone else means.

            Yes, assert() is an example of a runtime error handler! And it is exactly what everyone else means — C’s runtime error handling idioms are notoriously poor, not nonexistent. And you really have convinced me you are right about one thing: you aren’t sure how to be any clearer than that! And, more to the point, it’s one you usually only reach through explicit flow control…it isn’t implicitly branched to if there is a failed try in main().

            I really have to insist that if you are going to improve on bad / unclear / primitive runtime error handling mechanisms, you won’t really make any progress unless you learn how to be clear when talking and thinking about it. One of the great things about novel languages is that you can invent astonishingly different ways to build a wheel. But the first step in really learning from the status quo is realizing that you are in fact re-inventing the wheel. An erroneous belief that you have obviated wheels entirely will make it very difficult for you to learn from all the wheels that came before.

            I guess I have some space for people too naive to learn from the status quo, who invent everything from scratch…but only if they really do something novel. If I get a whiff of the C++ mistakes being made all over again, I’m gonna absolutely demand that they have a firm grasp on the status quo before giving it a serious look. Sorry.

          3. @Greg C
            > Yes, assert() is an example of a runtime error handler! And it is exactly what everyone else means

            I disagree both that it’s a runtime error handler and that it’s exactly what everyone else means by runtime error handler. assert() can literally disappear at compile time depending on compilation flags or includes, and it also does not need to allocate memory dynamically. It has none of the features that anyone expects of runtime error handling because it does not permit you to “handle” errors at all, and is not analogous in any way to the runtime error handling you’ll find in other languages, which is the very definition of the “status quo” to which you appeal.

            I don’t see this conversation going anywhere fruitful because I’m simply not willing to quibble over the semantics of “errors” and “handling” and “runtime” with a case study of the last 40 years of programming languages. I’ve achieved my objective to correct the mistakes you assumed about Zig, so the record is now clear on Zig error handling: no allocation, no runtime, no hidden control-flow because “try” is not any more “hidden” or “implicit” than “break” or “continue”, and control-flow is strictly local, not non-local as with exceptions. Error handling has just as little overhead as C, but it’s much safer and more ergonomic than you could ever achieve with C.

            I see no point in discussing whether or not you like its approach or think it’s an improvement. I’m past the need to have such programming language debates, and I sought only to clarify misunderstandings. Either Zig will succeed or it will not. They’ve made some excellent strategic decisions recently that I think put them in a good position, and I hope they do succeed, because Zig would be much better than the C/C++ status quo.

          4. I hear you. You’re saying over and over again “it’s not C++, Java, or any of the other languages that are in vogue today so I don’t have to use descriptive language correctly.”

            Good luck.

      1. “most languages like rust just repeat the C++ cycle”

        What are you talking about? Rust was specifically designed not to go down the path of C++: no pointers, safe arithmetic, safe memory usage.

        1. rust was a too-simple language based around a few poorly-thought-out ideas and every exposure to reality shows it to be a farce of verbose unreadability, or (to be kinder), a language that allows very stupid idioms and a large bulk of them. and as this is being discovered, the language is being reinvented in real-time, with all the deleterious effects of incompatibility and poor design (warts) you would expect from a language which wasn’t actually designed until after you found out what was wrong with the original pipe dream.

          you know, like C++.

      2. There is absolutely no runtime error handler.
        There is no exception mechanism. There is nothing even remotely similar to exception mechanisms.
        And I mean it… there is no goto out of a function except a normal return. It’s just your misunderstanding.

        1. maybe i’m being pedantic? i honestly believe there’s no way to learn from other languages without understanding them… where do you think “When it does fail, you get a nice stack trace:” comes from? do you have some definition of the stack trace function that is somehow distinct in your mind from the concept “runtime”? i don’t mean to demand that you endorse my definitions but when comparing languages, being able to compare runtimes is very valuable…and you can’t talk seriously with someone who keeps insisting that the runtime doesn’t exist, when it clearly does.

          as for exceptions… Sandro said “try” means “if error, return error”. that’s an exception mechanism. it might or might not be an elegant exception mechanism, but how can we compare one exception mechanism to another when one guy is insisting there’s no such thing?

          and again, “no goto out of a function except a normal return”, well apparently there’s also the special error return from a failed “try”. maybe that’s great or not, but how can we talk when you can’t describe it?

          1. There is no special error. That is the main trick.

            Call it what you will but normal exceptions:
            – Can *generally* occur anywhere. Caller of some function does not control whether there can be an exception down in the call hierarchy.
            – Can *generally* have any type.
            – Are *generally* implemented by huge pre-generated lookup tables and a *mechanism to traverse those tables*, i.e. the “runtime”, which should result in proper calls of destructors, collecting and possibly printing a stack trace, etc.

            Whereas in Zig:
            – You can only return something that is specified as a return value of your function, whether it is an error or not!
            – There is no automatic forwarding of the error. You can do it using the try but the statement above is still true! No arbitrary types or auto-conversion is there.
            – There are no pre-generated tables.
            – There is no magic code for traversal of those non-existent tables. Just in-place generated branches with no magic that you can clearly see and understand in the disassembly. This is also very good for doing worst-case time calculations for hard real-time systems.

          2. Very frustrating exchange. Special, normal, yada, yada.

            First off, I absolutely hate C++ as much as the next guy…specifically have hated implementing exception mechanisms 3 different times for C++ as a compiler / runtime guy. But if you’re calling the pre-generated EH tables “huge”, I don’t know where you’re coming from. In most modern implementations, it consists of a descriptor, on the order of 3 words, for each function call describing what should be done in the case of an exception throw. If there is clean-up to do, it points to the code to do that cleanup. As I understand it, that is almost identical to the size of the table generated by these Zig “try” statements.

            But I guess that’s getting ahead of another core misconception… these “try” statements are not somehow a “non-existent table”; they literally encode the table in executable code. One of the slick things about modern C++ implementations is that the table is encoded in non-executable code, which may not even be loaded into RAM or cache unless an exception is thrown. In this reference implementation of Zig, we take that same table but encode it as executable code. That’s fine, and obviously future implementations of Zig may choose to encode it as non-executable code. But its *hugeness* and its *existence as a table* are precisely identical. It’s a list of function call sites and clean-up consequences. I’m not saying it isn’t better for important reasons, but to insist that it’s totally novel serves only to convince me you don’t really understand what came before.

            As far as automatic forwarding of errors, is Java’s exception mechanism “normal exceptions”? In Java, the errors must be explicitly forwarded (a throw specification on the function itself). A few exceptions, such as null pointer error, *do* automatically forward. If I do a null pointer error in Zig, does that automatically forward? I imagine it forwards via a different SIGSEGV mechanism and you’d want to tell me that isn’t a mechanism at all, but it’s just a different mechanism to be compared with others. You might want to tell me it’s impossible in Zig, which is an interesting claim I’d bother to look into if I wasn’t already upset about so much hand-waving misuse of language from Zig advocates :(

            I’d like to look into a language that has a novel approach to safety, but at this point the only reason I’d look at Zig is to find out how full of it the Zig crew is, and that’s not really an interesting project for me.

  5. Racket used to be PLT Scheme, and it’s 26 years old. Also see “DrScheme”, the integrated development environment that was tightly built around the language, because it is first and foremost an educational project.
    I certainly loved the idea of teaching beginning programming students a Scheme-based functional language, with the hope that good habits develop from it, but…
    including it in a list of “relative newcomers” would be disingenuous, unless the writer didn’t know about the name changes from DrScheme and PLT Scheme to Racket.
    Since the beginning, and every few years since, I make a note that I should take a good look into it, and I’ve yet to do so. :( IDK, it’s never gained the traction it probably should have. Certainly at the beginning, Python and the learning sources devoted to it were a competitor, and they just can’t beat the appeal of learning a language that is already widely in use, in “production”. It’s too bad.

        1. I personally feel sand-box vanity languages provide nongeneralizable skills, and ultimately do not provide the educational value the authors intended.
          However, the old joke that a “GUI makes simple things easy, and difficult things impossible” still holds true.

          1. @X
            Many people care… the steaming pile of “dirt” that someone else has to maintain/redo later is not progress.

            All software is terrible, but some of it is useful. 16 pages of C# to replace an octave/R/Julia one liner, or nodejs dependency creep is not useful in the long-term.
            ;-)

          2. What happens to a program as it executes is far more interesting than what the source code looks like. Complex problems require complex solutions. An overly verbose mess that works well is much, much better than a tight snippet that fails edge cases.

  6. The compiler is going to inline the end result if it can anyway. Isn’t that what one of Intel’s top people pointed out: humans can no longer outperform a compiler when it comes to optimization, in the vast majority of cases?

  7. It’s all fun and games until you find yourself working on an automotive project where C99 is considered “modern”, MISRA:2012 compliance is strictly enforced, and there are all the miscellaneous formalities of code review, testing, QA, and whatnot. Then you realize how deeply out of touch with reality all those articles about “this hot new programming language that will replace C” are.

    The ugly truth is that C has been with us for almost 50 years, and I bet it’ll still be around in the year 2050. It has flaws and can be dangerous, but it’s not going anywhere, because it has stood the test of time, the language and its flaws are well known, and there’s plenty of supporting tooling for it… hell, I could go on for much longer, but I’m sure you get the point.

    1. I’ve got a nice book from 1999 that goes over a number of exciting new languages that promised to make C obsolete.

      None of them went anywhere, and I’m sitting here writing C89 for a living.

      Today we’ve got people claiming C will be made obsolete by Zig, Rust, Python, JavaScript, Go, and a whole bunch of others. I’m not holding my breath.

  8. “Adjustable runtime safety” means that many of the runtime checks that Zig has can be turned on or off depending on how much you think you know what you are doing.

    FTFY

    Also an interesting read, and it will be nice to see how this language gets adopted over time. It seems we are in a new era of language wars.

  9. Oh good, what we really need is yet another programming language, like we need more competing Linux distros or more SBC’s named after increasingly rare fruit…

    Is there a certain point in people’s lives when they’re learning how to code that they decide C or C++ is just too hard and the correct response to this is to create their own programming language with blackjack and hookers?

    C is simple and it’s as safe as YOU write it to be; plus, last time I checked, it’s still Turing complete.

    1. > Is there a certain point in people’s lives when they’re learning how to code that they decide C or C++ is just too hard and the correct response to this is to create their own programming language with blackjack and hookers?

      Sure, the Nth time they spent days debugging memory leaks, or the Mth time they took days to set up a native build or cross-compilation process for a project, or the Jth time they had to reach for, or got bit by, template or preprocessor metaprogramming and its harder-to-diagnose bugs, etc. These are all problems that are tackled by Zig.

      > C is simple and it’s as safe as YOU write it to be

      Sure, and juggling chainsaws is also as safe as you can juggle it. That doesn’t make it safe.

    2. Speaking of C: at the previous company I worked for, we wrote a lot of SCADA application code, both master (control center) and slave (field RTUs – real time), basically end to end, all written in C. I don’t recall one memory ‘leak’. Yes, there were buffer overwrites, both in allocated memory and the stack, which occasionally caused a fault we’d have to track down. But for the most part, the problems we ran into were algorithmic, or timing within the multi-threaded applications and implemented protocols, or talking to the internal bus. I am at a different company today, but we are still running, maintaining, and enhancing that same code base here as needed (one reason I was hired). It runs 24×7 without a burp, for years now. On the other hand, we used C++ and Delphi (Object Pascal) on our user interface and database editing side and seemed to run into memory leaks there, most times in packages we didn’t even write… grrrr. Harder to dig out, as they were ‘hidden’ from you due to the ‘abstraction’ of object-oriented programming. Currently we use a big Java product (not ours) here that over time seems to suck up more memory until you just have to reboot the machine and start clean. So it goes.

      So from experience, as stated above, “C is simple and it’s as safe as YOU write it”. Simple as that.

      Granted, I do most of my work programming in Python these days due to its flexibility, readability, and fit with the current requirements in our energy management area at our utility. Python makes it (relatively) trivial to manipulate spreadsheets and csv files, read/write JSON, connect to sftp, http, and https, do database access, build GUIs with Qt and Tkinter when needed, do graphing and data analysis, etc. There is a module out there for just about anything one needs to do, if it isn’t already built into Python…

      At home I use Python too, as it is handy. But I still enjoy writing code in C, Pascal (my first ‘real’ language… after Basic), and some assembly. Yeah, Turbo Pascal, Turbo C, and VAX Pascal :) . Good times back then… Now gcc and FreePascal.

  10. Nah, Sounds too much like ZigBee.

    I’m still burnt from spending so much on modules to be ready for whatever project I find online and now no-one talks about them anymore.

  11. V (Vlang) is really coming along. Of the C alternatives out there, this one hits much closer to the sweet spot. It interops really well with C, but is much easier and safer to use, to include being much more modern, and it can be used at a higher level.
