C++20 Is Feature Complete; Here’s What Changes Are Coming

If you have an opinion about C++, chances are you either love it for its extensiveness and versatility, or you hate it for its bloated complexity and would rather stick to alternative languages on both sides of the spectrum. Either way, here’s your chance to form a new opinion about the language. The C++ standard committee has recently gathered to work on finalizing the language standard’s newest revision, C++20, deciding on all the new features that will come to C++’s next major release.

After C++17, this will be the sixth revision of the C++ standard, and the language has come a long way from its “being a superset of C” times. Frankly, when it comes to loving or hating the language, I haven’t fully made up my own mind about it yet. My biggest issue with it is that “programming in C++” can just mean so many different things nowadays, from a trivial “C with classes” style to writing code that will make Perl look like prose. C++ has become such a feature-rich and downright overwhelming language over all these years, and with all the additions coming with C++20, things won’t get easier. Although, they also won’t get harder. Well, at least not necessarily. I guess? Well, it’s complex, but that’s simply the nature of the language.

Anyway, the list of new features is long, and the combined specification proposals are even longer; each and every one of these additions could fill its own full-blown article. But to get a rough idea about what’s going to come to C++ next year, let’s have a condensed look at some of the major new features, changes, and additions that await us in C++20. From better type checking and compiler error messages to Python-like string handling and plans to replace the #include system, there’s a lot at play here!

Making Things Safer

As a language, being more liberal and less restrictive on implementation details provides great flexibility for developers — along with a lot of potential for misunderstandings that are bound to result in bugs somewhere further down the road. It is to this day the biggest asset and weakness of C, and C++ still has enough similarities in its roots to follow along with it. Restrictions can surely help here, but adding restrictions tends to be an unpopular option. The good thing is, C++ has compromises in place that leave the flexibility at the language level and add the restrictions at the developer’s own discretion.

Compiler Advisory: Explicit Constants

Back in C++11, the constexpr keyword was introduced as an addition to a regular const declaration, defining a constant expression that can be evaluated at compile time. This opens up plenty of optimization opportunities for the compiler, but also makes it possible to declare that, for example, a function will return a constant value. That helps to more clearly show a function’s intent, avoiding some potential headaches in the future. Take the following example:

int foo() {
    return 123;
}

constexpr int bar() {
    return 123;
}

const int first = foo();
const int second = bar();

While there is technically no difference between these two functions, and either one will return a constant value that will be valid to assign to a const variable, bar() will make this fact explicitly clear. In the case of foo(), it’s really more of a coincidental side effect, and without full context, it is not obvious that the function’s return value is supposed to be a constant. Using constexpr eliminates any doubt here and avoids possible accidental side effects, which will make the code more stable in the long run.

Having already been in place for a while, constexpr has seen a few improvements over the years, and will see some more with C++20, especially in terms of removing previously existing limitations on its usage. Most notably, the new standard allows virtual constexpr functions, lets developers use try / catch inside constexpr functions (provided no exceptions are thrown from within), and makes it possible to change the active member of a union.

On top of that, both std::string and std::vector as well as a bunch of other previously missing places in the standard library will fully utilize constexpr. Oh, and if you want to check if a piece of code is actually executed within a constant evaluation, you will be able to do so using std::is_constant_evaluated() which returns a boolean value accordingly.

Note that constexpr declares that code can be evaluated at compile time and is therefore a valid constant expression, but it doesn’t guarantee that the evaluation will actually happen at compile time; it could be postponed to run time. This is mainly relevant for compiler optimization and doesn’t affect the program’s behavior, but it shows that constexpr is primarily an intention marker.

constexpr int foo(int factor) {
    return 123 * factor;
}

const int const_factor = 10;
int non_const_factor = 20;

const int first = foo(const_factor);
const int second = foo(non_const_factor);

Here, first will be evaluated at compile time, as all expressions and values involved are constants and as such known at compile time, while second will be evaluated at run time since non_const_factor itself is not a constant. That doesn’t change the fact that foo() will still return a constant value; the compiler just can’t know yet which exact value that may be. To make sure the compiler will know the value, C++20 introduces the consteval keyword to declare a function as an immediate function. Declaring foo() as consteval instead of constexpr turns the second call into a compile error. In fact, immediate functions exist only at compile time, which makes consteval functions an alternative to macro functions.

At the other end of the constant expression verification strictness scale is the new constinit keyword, which tells the compiler that an object will be statically initialized with a constant value. If you are familiar with the static initialization order fiasco, this is an attempt to solve that issue.

But constant expressions aren’t the only C++20 changes aimed at improving compile time validation, and the stability that comes with it.

The Concept Of Concepts

While technically not a completely new thing, Concepts have graduated from being an experimental feature to a full-fledged part of the language standard, allowing the addition of semantic constraints to templates, and ultimately making generic programming a hint more specific.

Somewhat related to type traits, Concepts make sure that data used within a template fulfills a specified set of criteria, and verify this at the beginning of the compilation process. So as an example, instead of checking that an object is_integral, an object of type Integral is used. As a result, the compiler can provide a short and meaningful error message if the defined requirement of a concept isn’t met, instead of dumping walls of errors and warnings from somewhere deep within the template code itself that won’t make much sense without digging further into that code.

Apart from letting the compiler know what data is needed, it also shows rather clearly to other developers what data is expected, helping to avoid error messages in the first place and avoiding misunderstandings that lead to bugs later on. Going the other direction, Concepts can also be used to constrain the return type of template functions, limiting variables to a Concept rather than a generic auto type, which can be considered C++’s equivalent of a void * return type.

Some basic Concepts will be provided in the standard library, and if you don’t want to wait for updated compilers, GCC has the experimental Concepts implemented since version 6 and you can enable them with the -fconcepts command line parameter. Note that in the initial draft and current reference documentation, Concept names were defined using CamelCase, but they will be changed to snake_case to preserve consistency with all other standard identifiers.

Ranges Are The New Iterators

Ranges are essentially iterators that cover a sequence of values in collections such as lists or vectors, but instead of constantly dragging the beginning and end of the iterator around, ranges just keep them around internally.

Just like Concepts, Ranges have moved from experimental state to the language standard in C++20, which isn’t much of a coincidence, as Ranges depend on Concepts and use them to improve the old iterator handling by making it possible to add constraints to the handled values, with the same benefits. On top of constraining value types, Ranges introduce Views as a special form of a range, which allow data manipulation or filtering on a range, returning a modified version of the initial range’s data as yet another range. This allows them to be chained together. Say you have a vector of integers and you want to retrieve all even values in their squared form; ranges and views can get you there.

With all of these changes, the compiler will be of a lot more assistance for type checking and will present more useful error messages.

String Formatting

Speaking of error messages, or well, output in general: following its author’s proposal, the {fmt} library will be integrated into the language standard as std::format. Essentially, this brings Python’s string formatting functionality to C++! Compared to the whole clumsiness of the cout shifting business, and the fact that using printf() in the context of C++ just feels somewhat wrong, this is definitely a welcome addition.

While the Python-style formatting offers pretty much the same functionality as printf(), just with a different format string syntax, it eliminates a few redundancies and offers some useful additions, such as binary integer representation and centered output with or without fill characters. The biggest advantage, however, is the possibility to define formatting rules for custom types. On the surface this is like Python’s __str__() or Java’s toString() methods, but it also allows custom formatting types along the way.

Take strftime() as an example: although it is a C function that behaves like snprintf(), the difference is that it defines custom, time-specific conversion characters for its format string and expects a struct tm as argument. With the right implementation, std::format can be extended to behave just like that, which is in fact what the upcoming addition to the std::chrono library is going to do.

Source Location

While we’re on the subject of formatting things in convenient ways, another experimental feature coming to C++20 is the source_location functionality, providing convenient access to the file name, line number, or function name from the current call context. In combination with std::format, it’s a prime candidate for implementing a custom logging function, and practically a modern alternative to preprocessor macros like __FILE__ and __LINE__.

Modules

It appears that slowly eliminating use of the preprocessor is a long-term goal in the future of C++, with consteval essentially replacing macro functions, source_location obsoleting one of the most commonly used macros, and on top of all that: modules, a new way to split up source code that aims to eventually replace the whole #include system.

While some say it’s long overdue, others view the addition of modules at this point rather critically, and some developers have stated their concerns about the current state. Whatever your own opinion is on the subject, it’s safe to say that this is a major change to the whole essence of the language, but at the same time a complex enough undertaking that won’t just happen overnight. Time will tell where modules will actually end up. If you’re curious and want to have a look already, both GCC and Clang have module support to some extent.

But wait, there is more!

Everything Else

The list just goes on, with Coroutines as one more major feature that will be added to C++20.

As for all the rest, there will be

Don’t worry though, the parts that really matter to be volatile won’t change.

So all in all, a lot is coming to C++, and some features are surely worth getting excited about.

Of course, some of these new features and extensions have been around in other languages for ages, if not even from the beginning. It’s interesting to see though how some of these languages that were once influenced by C++ are now influencing the very future of C++ itself.

94 thoughts on “C++20 Is Feature Complete; Here’s What Changes Are Coming”

  1. For plain C, as an embedded system developer, I wish there were fractional datatypes natively supported, like fix32_8, a signed 32-bit number with its fractional point between bits 8 and 7.

    1. You don’t need to do anything differently to perform arithmetic this way. You can just comment that a variable holds a value to be a certain number of milli-units and when you print it, put a decimal point in the right place.

      1. In some cases using printf() or sprintf() will make code too big or unnecessary complex. And I don’t think it would work with LED displays. In such situations I make my own formatting function tailored to my needs…

      2. “You don’t need to do anything differently to perform arithmetic this way.”…

        Yes you do!!!

        Try adding or subtracting two fixed point variables which have the point in different places or even multiply or divide two fixed point variables that have the point in the same place.

        1. No, you don’t You just multiply or divide by 10^n in addition to the operation you want to perform. Or you just write your program in such a way, that you never need to use fraction until you need to display the result. This is my preferred way of dealing with fractions. The one time I didn’t do it that way, I wrote my own function to perform one operation I needed on floating point number.

          For example let’s assume we are measuring voltage and we want to display it in form of x.xxxV on LED display. We have result in millivolts and need to separate it into single digits. I would do it by first making a temporary variable and four char variables (or two chars, each divided into upper and lower 4 bit variables just to save two bytes). The code would go like this:

          temp = result / 10;
          digit1 = result - (temp * 10);
          temp = result / 100;
          digit2 = result - (temp * 100) - digit1;
          temp = result / 1000;
          digit3 = result - (temp * 1000) - digit1 - (digit2 * 10);
          digit4 = result / 1000;

          This is simplest, ugly solution I can think of. It exploits the fact that dividing fixed point numbers in C (and in many other languages) sends the fractional part to null space. Function that drives LED display goes from left to right and adds dot between third and fourth digit. V one must paint on the case…

          1. “No, you don’t You just multiply or divide by 10^n in addition to the operation you want to perform. Or you just write your program in such a way, that you never need to use fraction until you need to display the result”…

            What happens if you multiply:
            000011.00
            000001.10

            (the point being fixed and only shown for clarification)

            you get:
            010010.00

            but the correct answer should be:
            000100.10

            Fixed point math is more than just about adding and converting to base 10 for output.

            If the language is going to support fixed point math it has to take care of alignment. It also needs to be able to work with different precisions (e.g. 000000.00 + 0000.0000) in the same way you expect to be able to add a signed short integer to a signed long integer or an integer to a float

          2. Osprey, you did your math wrong. Let’s look at this example (commas added for clarification):
            12,34 x 56,78
            Firstly you multiply 1234 x 5678, which gives you 7006652. Then you divide that number by 10*4. Or move the comma four positions to the right, and you arrive at solution:
            12,34 x 56,78 = 700,6652.
            You always move the comma by the number of places it is in numbers you multiply or divide. Other example:
            741 / 2,5.
            741 / 25 = 29,64. This is division so shift the comma one place right and you get 296,4. Just keep track of the length of the fractional part and you’re good to go.

          3. “Osprey, you did your math wrong. Let’s look at this example (commas added for clarification):
            12,34 x 56,78
            Firstly you multiply 1234 x 5678, which gives you 7006652. Then you divide that number by 10*4. Or move the comma four positions to the right, and you arrive at solution:
            12,34 x 56,78 = 700,6652.
            You always move the comma by the number of places it is in numbers you multiply or divide.”…

            No Moryc, I really haven’t.

            the original assertion was:
            “You don’t need to do anything differently to perform arithmetic this way.”…

            by this I understand the assertion to mean:
            X = A * B
            produces the correct result if X, A and B are all fixed point numbers with the same precision. And even more to the point the underlying hardware which does integer multiplication works correctly regardless of the fixed point.

            Now you are saying that you need to compensate by dividing by 10^n.

            so your expression becomes:

            X = A * B / 10^n

            (ignoring overflow) why on earth would you piss about using a “decimal” point and dividing by 10^n on a machine with a binary architecture (which most are these days), when it is hugely more efficient to use a “binary” point and divide by 2^M instead (which literally translates to right shift by M places).

            I’ve heard people argue that they get more accurate results but most of the time they lose that accuracy somewhere along the line and would have had better results simply using floating point calculations.

            Moreover, the whole point of having the language support fixed point math is that the programmer does not need to worry about the adjustment after the result is calculated.

        1. Trig functions generate a floating point output anyway, so you just convert the input into floating point in the first place. If you can’t afford the float conversion, you use a lookup table. and interpolate between.

          And you’re wrong about roots. If you represent a fractional number with 24 bit integer/8 bit fractional, that means you’re really representing (256*number), and obviously roots distribute. So for instance the square root of that number would just give you sqrt(256)*sqrt(number), or 16*sqrt(number).

    2. I wouldn’t add that to the language. Instead, I’d add an include file to the library, and define two template types, each with 3 parameters:
      – The total number of bits.
      – The number of bits before the binary point.
      – The number of bits after the binary point.
      One of the template types would be unsigned, and the other signed.

      I expect usually the number of bits would be 8, 16, 32, 64, … and the possible sign bit and the last two parameters would add up to that.
      If the possible sign bit and the last two parameters added up to the first parameter, there would be no fill.
      If it was less, there would be zero fill or sign fill.
      Greater would be an error.

      Examples based on your suggested name:
      typedef signed_fixed fix32_8;
      typedef unsigned_fixed ufix32_8;
      typedef signed_fixed fix8_3;

      Most of the operations would be very easy to implement. Some would need binary point alignment. Some of those and some others would return larger results, which assignment, casts, etc. would narrow to the desired result size.

      You would need constructors and casts to and from builtin types, and input and output.

      I suspect that’s all an embedded system developer would really need. Other users might like full math library. I’d put that in a separate include file that for convenience would include the first one.

      You could roll your own, but having this in the library would be best.

      Templates are one of the good things in C++.
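      For illustration, a stripped-down sketch of that idea, with two template parameters instead of three and all names invented here; multiplication re-aligns the binary point with a shift, which is exactly the adjustment debated elsewhere in this thread:

```cpp
#include <cstdint>

// FRAC is the number of bits after the binary point.
template <typename Raw, int FRAC>
struct fixed {
    Raw raw;

    static constexpr fixed from_int(Raw v) {
        return {static_cast<Raw>(v << FRAC)};
    }

    friend constexpr fixed operator+(fixed a, fixed b) {
        return {static_cast<Raw>(a.raw + b.raw)};  // points already aligned
    }

    friend constexpr fixed operator*(fixed a, fixed b) {
        // widen, multiply, then shift back to re-align the binary point
        return {static_cast<Raw>((static_cast<std::int64_t>(a.raw) * b.raw) >> FRAC)};
    }

    constexpr Raw to_int() const { return raw >> FRAC; }
};

using fix32_8 = fixed<std::int32_t, 8>;  // 32-bit value, 8 fractional bits

static_assert((fix32_8::from_int(3) * fix32_8::from_int(2)).to_int() == 6);
static_assert((fix32_8::from_int(1) + fix32_8::from_int(2)).to_int() == 3);
```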

      1. It does happen, but rarely. Eg. auto_ptr was deprecated in C++11, and removed in C++17. I do wish some compiler vendor would add a “modern” mode, that disallowed obsolete usages and conventions, enforced the C++ Core Guidelines etc. Of course it will never happen, as everyone has their own legacy feature that they think is absolutely critical to include.

          1. Clang tidy (fast) and clang static analyzer have “modernize-” and coreguidelines switches. So the things you ask for do already exist (and I use them at my work place).

  2. Amen (removing features)…

    I still use just straight C for most of my work. My background is real-time systems and embedded systems. I currently work with a commercial system that was built with Fortran/C++ and it is a nightmare to dig information/understanding out of it. If people would just keep it simple….

  3. C++ has bloated beyond all recognition and is completely incomprehensible to the average programmer (myself included). I started using the language in the early 80’s when it was still called “C with objects” and it came to my lab on a reel of tape (really) that we had to cross compile from a Sun machine to an IBM AIX pc to get it working. The language was simple to understand (if you already knew C) and contained few “surprises” for the unwary. But as it grew over time it became harder and harder to pick up somebody else’s code and understand what was going on. Operator overloading, multiple inheritance, templates, etc. made source code completely opaque. You couldn’t understand what a piece of code did by just reading it. Coupled with horrendous multi-line error messages if you made a simple typo only made things harder. I’m sure all the new features are very powerful and exotic, but I’m happy to be retired 30 years later so I don’t have to deal with this stuff anymore.

    1. Many modern(-ish) open source projects are done in C++, and most of those developers come from lands of Linux. In lands of Linux source code is all the documentation one needs. So if you want to make your project open source, but still make source useless to others, just write it in C++ and use as many odd features as possible. It’s like programming in Klingon, but worse – Klingon makes sense.

      And the only programming language, where source code was viable and readable documentation was COBOL…

        1. This open but obfuscated source problem happens usually by accident, and not by design. Or rather by flawed design of both the software and language. One can make obfuscated code in probably every programming language in existence, but only in C++, and its demented offspring languages it’s easier to write confusing gibberish that works than readable and understandable code. Also code should never be used as documentation. There is reason for both documentation and built-in commenting system in all major programming languages…

          1. “Also code should never be used as documentation”…

            show me a compiler that understands comments and gives error messages when the comments don’t match the code and I’ll agree with you. Until then I’d rather work my way through the code to determine what the program is meant to do. All to often I’ve found that comments are at best wishful thinking and at worst a complete misdirection.

      1. Please be specific when saying “Linux source code”; I’m afraid there’s no such thing. The only thing that exists is the Linux kernel, a very huge source base, but in straight C, not C++. I’m not lucky enough to have seen the Linux kernel and Linux drivers in C++. Could you please give me a URL where you found Linux kernel & driver source code in C++?

        1. I removed Perl/Batch/VB from our systems in our department and replaced all with Python 3. So much better to maintain. Took me a couple of years off and on though! That said our Vendor’s Energy Management System uses Perl extensively yet, so I still have to deal with it there.

    2. I don’t think C++ is to blame. I’ve seen the same issue with C code happening. It has more to do with what I call “Java programming”. People are taught to use abstract interfaces, factories and encapsulation. And make everything re-usable.
      But they are not taught when NOT to use them, or how to design and build an actual application.

      But think C++ is bad? Try debugging some javascript library that you didn’t make yourself. At least with C++ you know how your objects look, what functions you have and what parameters they accept. With javascript it’s just a wasteland waiting for you to look the other way and fuck things up.

    3. I remember when I was writing a 3D math library and thought about using operator overloading, as is the popular way to show off your C++ skills, but in the end it was more readable to just make a bunch of static functions for matrix/vector multiplication. You end up using SSE instructions in those functions anyway, so that’s complicated enough. Anyway, I tend to believe in just using C++ for objects, and stopping there.

  4. I’ve officially given up on C++. It is the most unnecessarily complicated and long-winded language that I have ever used, and I’ve been using it since 1998.

    Instead I’d like to see more effort and focus being put into C. I’d welcome additions to the C standard library like memory managed arrays, smart pointers, lambdas, networking, threading, etc.

      1. No, I say take the stuff from the new C++ standards that is good and integrate it into C, without things like templates, streams, operator overloading, classes (yes, no OOP for me!), inheritance, polymorphism, friend functions, etc.

        The two things that I miss the most in C are memory managed arrays and smart pointers. Networking and threading would be nice, but I guess we have a ton of libraries that do these in C.

          1. you don’t need templates to implement smart pointers, just make sure that any object pointed to is derived from a base class that the smart pointer handles. Recovering the pointer to the actual derived object is trivial using virtual functions. I’ve been doing this since 1996 without templates and not had any issues.

          2. @Shannon

            “you’ve not been doing that in C though.”…

            What’s your point? I clearly stated that “you don’t need templates to implement smart pointers, just make sure that any object pointed to is derived from a base class”… Clearly this implies the use of C++. Where does C come into it?

        1. Those features do not exist out of pure boredom, but should be used to reduce the complexity of certain problem solving. The art is to learn which and when not. E.g. the Java point of view of “oop above all” is just retarded, but so is e.g. not using templates for type independent code.

    1. why not use this subset of c++?
      many of the features you listed are impossible to implement without destructors and if you add those and templates so they’re actually usable with user defined types you are pretty close to c++

    2. There are a few simple options in C for gnu systems
      https://developer.gnome.org/glib/stable/index.html

      The only problem I see with >c++11 was people breaking cross-compatibility in fantastically complex subtle ways.
      I often ponder when it will become indistinguishable from JavaScript syntax…
      ;-)

      “Julia” has won my admiration for balancing high-level design clarity with near C/C++ performance
      https://docs.julialang.org/en/v1/manual/getting-started/

    3. Why would you even try to advance a language that is leagues behind in actually everything beyond pure compiler availability? C is for the electrical engineers who consider it to be the holy grail of computer science (not having the slightest idea about the latter). I’d rather leave them to it.

  5. And people ask me why I don’t like C++….

    But most of these additions seem to just make it more bloated and even more confusing.
    At least the idea behind the use of concepts could in the best case lead to more understandable error messages. Worst case, it will just lead to even more error messages about misuse of the concepts themselves…

    But C++ is far from a language to recommend to beginners in programming, frankly speaking, even avid programmers can’t at times use it, due to its arcane nature.

    Honestly makes me wonder if it is worth the extra hassle of using it, compared to more syntax friendly languages with a less cluttered feature set.

  6. I have mixed feelings about this. C++ already seems to be a really powerful and big language, which makes it difficult to fully understand. C has its limitations, but it’s a small language; you can understand it completely (it’s not that easy and I don’t, but I think it’s possible).
    The “bible” for C is the K&R; it’s not completely up to date (iirc no C99) but still… It has a bit more than 250 pages. How many pages does the “bible” (I suppose it exists) for C++ have by now?

    1. I see this “compared with C” done a lot. And frankly. It’s stupid.

      “C” is only the core language. “C++” has a core language, and a substantial standard library to pull from.
      So a proper compare would be “C” + “Posix” with “C++”, and how many manual pages does posix have?

      1. > So a proper compare would be “C” + “Posix” with “C++”, and how many manual pages does posix have?

        Fewer than C++ for the limited subset of functionality available in POSIX but not in C++. Also, you appear to believe that we’re comparing C (the core language) with C++ (stdlibs and all). This is not true: we’re comparing C (the core language) with C++ (the core language), and C++ is many times more complex than C (the core language).

  7. “…and the language has come a long way from its “being a superset of C” times.”

    No, it has not. Have been doing the C++ dance since its first (non-formal) ISO formalization in 1998. And the policy of all of my employers during the previous 20+ years has been to use C++ as ‘C with classes’.

    The 2017 version of ISO 14882 caused some consternation within the embedded community, and it is my opinion that the so-called ‘C++20’ will be largely ignored by hardware-oriented engineers. It still fails to fix and/or formally define too much stuff. I miss the days when the committee had people such as Plauger (and he resigned because the children were chasing the shiny new baubles without addressing fundamental issues).

    If you are a 14882 committee member, please take a long walk off of a short pier.

    1. It’s not exactly the committee’s fault that employers have no idea how to use C++ decently. I think it’s more of a generational problem. Too many people in software project management have no damn idea about computer science and software design (and no educational background in it).

  8. Can someone explain to me, why would anyone use a function that always returns the same thing, and even use special keyword to make it constant at compile time? What are the use cases for such stupid thing?

    1. Replacing less-than-clear macros is the first one that comes to mind.
      With this new addition, one can also have it do complex operations on a fixed input in a clear and concise manner (such as but not limited to building a string, or calculating offsets). Sure, it’s doable with a macro, but a lot of the tricks macros need to use for building complex constants from simple parts are less than readable (see: building a version string from #defined macro constants) whereas with a constexpr you’re literally writing the same code as the rest of the program. And since constexpr is simply a hint that a function should have no external side effects, the compiler has no problems with using the same function with non-fixed inputs too.

      Can macros do all this already? Sure. Can they do so in a clear, easily readable, hard-to-mess-up manner like writing a plain ol’ function? Only the trivial ones.
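      To make the comparison concrete, here is a sketch of the idea (all names here are made up for illustration): a version number packed from its components by an ordinary constexpr function instead of a shift-and-or macro.

```cpp
#include <cstdint>

// Stand-ins for what would traditionally be #define'd macro constants.
constexpr std::uint32_t kMajor = 2, kMinor = 7, kPatch = 13;

// Packs the version into one comparable integer -- the kind of job
// usually done with a macro full of shifts, now plain readable code.
constexpr std::uint32_t make_version(std::uint32_t major,
                                     std::uint32_t minor,
                                     std::uint32_t patch) {
    return (major << 16) | (minor << 8) | patch;
}

// Evaluated entirely by the compiler; usable anywhere a constant is required.
constexpr std::uint32_t kVersion = make_version(kMajor, kMinor, kPatch);
static_assert(kVersion == 0x0002070D, "packed version mismatch");
```

      And because make_version() is just a function, the compiler can also call it with run-time inputs; no second, macro-free variant is needed.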

          1. I truly despair at this generation’s click-and-build paradigm. All these fancy IDEs, jazzed-up languages and spoon-fed “patterns”. In the old days we had to think (it’s not so hard) and come up with solutions based on all the tools available, not be constrained by just the one “godzilla”.

            You don’t need to “extend the preprocessor to allow for loops”; you are free to use other tools to pre-process your C programs. There are tools like m4, sed, awk, Perl and PHP. And if those are not to your liking, you can even use lex and yacc (flex and bison) to build a simple (or complex, if you really want) pre-compiler.

            Remember, the early C++ compilers generated C code which had to be compiled with a C compiler.

          2. Actually a reply to “osprey” (can’t hit “reply” there):

            The toolstack you suggest, m4, sed, awk, Perl and PHP, is a completely unmaintainable mess. What you call “fancy” IDEs is actually the automation of repetitive tasks that are error-prone for a human, be it trivial code generation, basic type inference displayed inline, or styles enforced. Their intent is to enforce consistency to some extent, so devs don’t run off into purely ego-driven, overcomplex solutions.

            If you even need preprocessor hacks for something as trivial as a for-loop, then something is wrong.

            There is also a reason why no “modern” language would ever have the stupid idea to promote a purely lexical preprocessor with conditionals as a language feature. It makes compilation itself stateful, e.g. preprocessor state has to be expanded along the whole dependency graph of a translation unit. That alone outright kills compilation efficiency and robustness (one of the core issues of C++).

            C/C++ had the excuse of being developed in a time when machine resources were scarce (leading to disgusting hacks like the preprocessor) and when we did not know much about how to properly design languages and compilers. We are still not totally great at language design, but a hell of a lot better at compiler design. It’s time to move on.

            If C++ had a macro system that actually deserved the name (e.g. similar to Lisp, Rust or Nim), it would require neither preprocessor hacks nor external code generators. Well, yes, of course there might always be reasons for an external code generator, but they are rare.

    2. I use it to calculate CRC32 hashes of string literals at compile time. This allows me to easily use strings in switches. Then there is the whole metaprogramming angle. It’s not for everyday stuff, but it’s useful for experienced programmers.
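      For the curious, a toy version of that trick (my real code differs, but the mechanism is the same) looks something like this:

```cpp
#include <cstdint>
#include <string_view>

// Minimal compile-time CRC-32 (reflected, polynomial 0xEDB88320).
constexpr std::uint32_t crc32(std::string_view s) {
    std::uint32_t crc = 0xFFFFFFFFu;
    for (char c : s) {
        crc ^= static_cast<unsigned char>(c);
        for (int i = 0; i < 8; ++i)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

// Because the hash is a constant expression, string "cases" become legal:
constexpr int dispatch(std::string_view cmd) {
    switch (crc32(cmd)) {
        case crc32("start"): return 1;
        case crc32("stop"):  return 2;
        default:             return 0;
    }
}
static_assert(dispatch("start") == 1);
```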

    3. Ever seen a program with a bunch of weird things #define’d as macros at the top, with limited documentation on how they were calculated? Constexpr lets you calculate those at compile time from their constituent pieces. That makes them easier to document and modify in the future, as well as providing a mechanism for enforcing their types.

    4. A constexpr function doesn’t always return the “same value”. It returns a value that doesn’t change at runtime. If you want a Fibonacci number for a value known at compile time, you’d use a constexpr function. The compiler runs the code and returns the number.

      You can also execute the function during run time for changing values.
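      A minimal sketch of that dual use (the function name is just illustrative):

```cpp
#include <cstdint>

// Iterative Fibonacci: with a compile-time argument the compiler evaluates
// it; with a run-time argument it executes like any ordinary function.
constexpr std::uint64_t fib(unsigned n) {
    std::uint64_t a = 0, b = 1;
    for (unsigned i = 0; i < n; ++i) {
        std::uint64_t next = a + b;
        a = b;
        b = next;
    }
    return a;
}

// Forced compile-time evaluation: the result is baked into the binary.
static_assert(fib(10) == 55);

// The very same function also accepts values only known at run time.
std::uint64_t fib_runtime(unsigned n) { return fib(n); }
```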

    5. “why would anyone use a function that always returns the same thing,”…

      What you are missing here is that the function can be called in several different places with different arguments.

      e.g.

      x = sqrt(4)

      y = sqrt(9)

      z = sqrt(16)

      the function sqrt() always returns the same result for the same argument, but that result depends on the argument you pass to it.

      In some situations you might not know what the argument is but you do know it is a constant.

      Consider how beneficial it would be to use such a function on a CPU where something like sqrt is not supported in hardware but needs to be computed the long way in software.
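      Sketched out (a hand-rolled integer square root, purely illustrative), the compiler folds every constant call down to a plain number, so the slow software loop never runs on the target:

```cpp
#include <cstdint>

// Constexpr integer square root via Newton's method. On a CPU without
// hardware sqrt, calls with constant arguments cost nothing at run time.
constexpr std::uint32_t isqrt(std::uint64_t n) {
    if (n < 2) return static_cast<std::uint32_t>(n);
    std::uint64_t x = n, y = (x + 1) / 2;
    while (y < x) {           // converges monotonically from above
        x = y;
        y = (x + n / x) / 2;
    }
    return static_cast<std::uint32_t>(x);
}

// Each of these is reduced to a constant at compile time:
static_assert(isqrt(4) == 2 && isqrt(9) == 3 && isqrt(16) == 4);
```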

    6. Just thought of a better example of what constexpr is good for:

      Imagine a filter library that, instead of requiring you to calculate the filter constants ahead of time, calculated them at compile time from the desired filter characteristics. Instead of having a bunch of magic numbers with some documentation that may or may not be correct, you build your filter by calling a constexpr constructor that you give a filter type, desired order, and corner frequency or frequencies, and it does all the mathematical heavy lifting at compile time.
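      A stripped-down sketch of that idea (a single-pole low-pass with made-up names; a real library would be far more elaborate):

```cpp
// Computes a one-pole low-pass smoothing coefficient from a corner
// frequency and sample rate at compile time, replacing a magic number.
struct OnePole {
    double alpha;  // update rule: y += alpha * (x - y)

    constexpr OnePole(double corner_hz, double sample_hz)
        : alpha(compute(corner_hz, sample_hz)) {}

    static constexpr double compute(double fc, double fs) {
        constexpr double pi = 3.14159265358979323846;
        double rc = 1.0 / (2.0 * pi * fc);  // filter time constant
        double dt = 1.0 / fs;               // sample period
        return dt / (rc + dt);
    }
};

// All the mathematical heavy lifting happens inside the compiler:
constexpr OnePole audio_smoother(100.0 /*Hz*/, 48000.0 /*Hz*/);
static_assert(audio_smoother.alpha > 0.0 && audio_smoother.alpha < 1.0);
```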

  9. I’m currently using C++14 on the desktop, and C++11 for microcontrollers, and I hope I’ll never have to write in C again.
    C++11 was the largest revolution in the language, the next revisions just added more refinements. I’ll look into C++20 when compiler support becomes good.

  10. “std::format. Essentially this provides Python’s string formatting functionality! Compared to the whole clumsiness of the cout shifting business, and the fact that using printf() in the context of C++ just feeling somewhat wrong, this is definitely a welcomed addition… While the Python style formatting offers pretty much the same functionality as printf(), just in a different format string syntax…”

    OK, so it offers the same functionality of printf(), but using printf() is “just feeling somewhat wrong”?

    This language is going down the same path as Java, rapidly becoming an unrecognizably different language that does nothing better, just adds worthless academic features evangelized by clueless hipsters.

    1. In my opinion the cout shifting business ‘feels very very wrong’. printf and puts feel right. The above comment about using C++ as ‘C with classes’… now that feels right, as there is a place for classes; just don’t go overboard. ‘Keep it simple, stupid’ should always be the motto of programmers.

  11. printf is not type safe, it’s arcane, it’s error prone…

    FFS, how many of these “rockstar coders” does it take to screw in a light bulb?

    If there was EVER a complaint about it for all these decades, then fix what’s there. Don’t bodge in more half-baked hipster tech under the ruse of “it’s new, it’s good” and “I understand it, you don’t”. Java is rapidly dying because of this mentality and so will C++ if allowed to go down the same path. Just wait for it…

    1. Here’s the tricky bit you appear to be ignoring — all the existing programs written with printf() have to continue to work. I don’t see how you can fix printf()’s many flaws without breaking all of the existing programs that use it.
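      That is essentially what the standard does: leave printf() alone and add a type-safe facility next to it. As a toy illustration of the principle (not the std::format API itself), templates can check every argument’s type at compile time:

```cpp
#include <sstream>
#include <string>

// Toy type-safe formatter: each argument is formatted according to its
// static type, so there are no %-specifiers to get wrong. std::format
// in C++20 does this properly; this only demonstrates the principle.
template <typename... Args>
std::string format_all(const Args&... args) {
    std::ostringstream out;
    (out << ... << args);  // C++17 fold expression over all arguments
    return out.str();
}
```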

  12. “Note that constexpr code states that it can be evaluated at compile time and is therefore a valid constant expression, but it doesn’t necessarily have to, nor is it guaranteed that the evaluation will happen at compile time, but could be postponed to run time. This is mainly relevant for compiler optimization though and doesn’t affect the program’s behavior, but also shows that constexpr is primarily an intention marker.”

    No, that is not its primary goal. It would be nearly useless if that were the case. The main point of constexpr is to extend and improve C++’s metaprogramming facilities. It’s no secret that template metaprogramming sucks: it’s slow, verbose, incredibly tricky to get right, “write once” spaghetti code. The reason constexpr seems able to do more and more with each iteration of the standard is that the committee wants people to rely less and less on template metaprogramming. What used to be a SFINAE-reliant overload set can be turned into a more readable and practical “if constexpr” statement. Now, with constexpr strings and vectors (and the constexpr dynamic allocation that enables them), the floodgates begin to open. Pretty soon we’ll have sophisticated metaprograms written very readably in terms of STL algorithms, with only the odd strategically placed “constexpr” keyword to distinguish them from their ordinary runtime counterparts. This is huge, and none of it has anything to do with intent. Intent is just a happy byproduct.
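    To illustrate the SFINAE point, here is the kind of dispatch that once needed one overload per type category, collapsed into a single readable function with C++17’s “if constexpr” (names are illustrative):

```cpp
#include <string>
#include <type_traits>

// Only the branch actually taken is instantiated, so each branch may use
// operations the other type categories do not support.
template <typename T>
std::string describe(const T& value) {
    if constexpr (std::is_integral_v<T>) {
        return "int:" + std::to_string(value);
    } else if constexpr (std::is_floating_point_v<T>) {
        return "float:" + std::to_string(value);
    } else {
        return "other";
    }
}
```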

  13. I’d rather program in Python, but would use D rather than C++. If I wanted something more useful to a career, I’d look at Julia. C++ and Java are what made me quit programming, for C++ has horrid syntax and Java is so damn bloated.

    1. At work now, a lot of the everyday tasks I am involved with are handled by Python 3. As long as we don’t run into performance issues, Python 3 gets the nod. Even the engineers can understand what you’ve written, even if they have limited programming experience. The large EMS system we manage is written in C++/Fortran; not fun to track down issues. The front-end processor (protocol translator, basically) to the EMS system was written in C and is much easier to maintain.

      On a historical note, I was excited when we were able to write ‘applets’ (real applications) in the old browsers…. It was going to be a whole new world interfacing to the back-end scada systems with a browser as the user interface… Then after a couple years companies started blocking applets from running…. And that was that. Java went out of favor here. Project scrapped.

    2. That is quite a weird selection of languages. I was active in the D community for a while. It has similar shortcomings to C++, albeit with neater syntax and an actual module system. E.g. back then (no idea about today) it enforced the usage of a GC, which makes it not too useful for a number of scenarios. The fact you even consider Julia tells me your background might be more scientific. If it’s only about calculation problems, you were even better off with Fortran than C++ back then, because OOP is not even a remote fit for numeric problems.
      If you happen to be doing tiny experiment solvers with something like ROOT(Fit), you are better off with Python backed by some C/C++ libraries.

      If you care about determinism and decent core performance, you might want to look into Rust. Its community is much nicer than that of C++ or Java.

      If you want to use JVM languages but hate “bloatiness”, take a look at Scala.

  14. Your opening discussion of constexpr often makes no sense, and is wrong when it does.
    Makes it explicitly clear that bar returns a constant?! An rvalue of a primitive type is always constant, and if returning a class type you would use const to make the returned value a constant.
    The variable first is constant, and is initialized in a sequence of such initializers before main is called. If the body of foo is visible, an optimizer might punch that into the loaded image instead. The difference is that second WILL be known at program load time, before running initializers. And bar() can be used in array sizes, template arguments, etc.

    You may be confusing “function returns a constant value” (meaning: return type is const) with the idea of a _pure_ function, like sin(x).

  15. There is an error in the article. camelCase is incorrectly listed as “CamelCase” when in fact that is PascalCase. True camelCase is the standard convention taught to Java programmers and has the first letter lowercase. You can verify with a simple Google search.

    1. I don’t think it’s necessarily wrong calling it “camel case” when the first letter is capitalized. It can be either. Java uses upper case for the first letter in class names and lower case first letter for methods and variables.

  16. I’ve seen the C++ evolution in recent years, and while it is becoming ever more powerful, it is also becoming ever more confusing: too many features, from low level to high level, so that it now seems a mix of C# and C. Still love it, though.

  17. C++ has become so polluted. I used to love C++, especially when smart pointers came out. Now it is a complete mess.

    Our engineering team jumped to Go and ported our main product’s C++ code to Go. Native compilation with GC, goroutine threading, fetching packages from the web is very easy, and cross-compilation is a breeze. Golang will kill C++.
