In the Holy Programming Language Wars, the lingua franca of systems programming – also known as C – is often lambasted for being insecure, error-prone, and plagued by more types of behavior that the C standards leave undefined than ones they define. Many programming languages have been hailed as ‘C killers’, yet C is still alive today. That didn’t stop the US White House’s Office of the National Cyber Director (ONCD) from putting out a report in which both C and C++ are called out as ‘unsafe’ when it comes to memory management.
The full report (PDF) is pretty light on technical details, citing only blog posts by Microsoft and Google as its ‘expert sources’. Its claim that memory safety issues are the primary cause of CVEs is not substantiated, or at least ignores the severity of CVEs when looking at the CISA statistics for actively exploited vulnerabilities. Beyond this call for ‘memory safety’, the report goes on to effectively call for more testing and validation, kicking in doors that were already opened back in the 1970s with the Steelman requirements and the High Order Language Working Group (HOLWG) of 1975.
What truly is the impact and factual basis of the ONCD report?
CVE Quality Not Quantity
Perhaps the most vexing of the claims repeated throughout the ONCD report – as well as in the longer, but very similar, report by the NSA, CISA and others titled The Case for Memory Safe Roadmaps – is that memory safety issues are the primary problem. These claims always seem to trace back to reports by Microsoft and Google, rather than to the list of actively exploited CVEs, such as those in the 2023 report on 2022’s top 12 hit list of everyone’s favorite vulnerabilities. That list includes Log4j (CVE-2021-44228), featuring sloppy input validation, and three CVEs in Microsoft’s Exchange Server, hitting a triple whammy of Common Weakness Enumerations (CWEs).
Just like 2022’s chart leader (Fortinet SSL VPN), this includes CWE-22: the improper limitation of a pathname to a restricted directory. Exchange Server was also featured for CWE-918 (server-side request forgery, SSRF) and CWE-287 (improper authentication). Of these, memory safety issues can be a factor with CWE-287 (e.g. CVE-2021-35395), albeit very sporadically. The pattern with remote exploits in particular – the ones most relevant to ‘cybersecurity’ – is overwhelmingly one of input validation and handling failures, which mostly involve omitted checks and logic errors.
Putting the focus on memory safety is more than a little suspect when the worst CVEs come from programmers not putting in basic checks for path traversal and forgetting to fully check user credentials. What is also worrying is the complete lack of any reference to the favorite language of the military, medical, and aviation fields where things going boom (prematurely) is generally considered a bad thing: Ada.
Steelman
As mentioned earlier, the Steelman requirements are the result of the foremost computer science experts of the time being asked to come up with the requirements that a high-level language would have to fulfill in order to be used for the most demanding tasks across the US DoD. In a 1997 comparison, David A. Wheeler assessed how well Ada 95, C89, C++, and Java adhere to the Steelman requirements, with Ada unsurprisingly scoring the highest (93%), Java coming in at 72%, C++ at 68%, and C at the back with 53%. All of these languages have of course received many updates to their respective standards since then, but it still provides a useful snapshot.
The lack of built-in concurrency support in C and C++ has been partially resolved at this point, with C++11’s standard memory model, but is hard to fully resolve without modifying the language’s foundations and rendering it incompatible with existing codebases. This is something that you can only do with something less fundamental like a scripting language, and even then it’s likely to upset a large part of the userbase for many years.
Where Ada scores very highly is not only in its concurrency handling, but also in its type system, which extends to constraints on parameters and return values. What often upsets novice Ada programmers migrating from other languages is that they first have to set up their types with constraints, which can seem time-consuming and unneeded. But as described in Steelman, these restrictions – along with code that is stripped of as much ambiguity as possible – help to avoid programming mistakes.
Effectively, a good programming language knows what your intent is by setting restraints and offering the means to restrict functions and procedures using contract-based programming so that the compiler has as much context as possible. Meanwhile, the code should be written where possible in plain English, without cryptic, easy-to-typo symbols that can e.g. turn a comparison into an assignment. This is also the reason why Ada is case-insensitive: why would coolVar differ from coolvar when it’s clear from the context what is meant?
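As a rough sketch of what such constrained types buy you – here in C++ rather than Ada, with a hypothetical `Percent` class standing in for what Ada expresses natively with a `range` constraint:

```cpp
#include <stdexcept>

// Hypothetical range-constrained integer, loosely mimicking
// Ada's "type Percent is range 0 .. 100;".
class Percent {
public:
    explicit Percent(int v) : value_(v) {
        if (v < 0 || v > 100) {
            throw std::out_of_range("Percent must be within 0 .. 100");
        }
    }
    int value() const { return value_; }
private:
    int value_;
};

// A function taking Percent cannot silently receive a raw,
// unchecked int: the constraint travels with the type.
int scale(int amount, Percent p) {
    return amount * p.value() / 100;
}
```

In Ada the equivalent constraint is enforced by the compiler where possible and checked at runtime otherwise; this C++ sketch only gets the runtime half, which is precisely the gap the language’s built-in contracts close.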
Memory Safety Is Easy In C++
It’s rather amusing to read old DoD reports, such as a 1991 report (PDF) by the US Air Force called Ada and C++: A Business Case Analysis. This was written before C++ was standardized, but as part of a ‘make stuff cheaper to build’ push by the DoD, C++’s claim to have tacked many of Ada’s features onto C got investigated by multiple government branches, including the FAA. The conclusion then was that Ada was still by far the best choice, with clear signs that its strong code reusability and self-documenting code helped reduce maintenance costs.
Even so, Ada’s lack of popularity outside of the aforementioned fields has led to a dearth of Ada programmers, which has resulted in C++ ultimately being approved for DoD projects like the F-35 program, albeit with strong restrictions on acceptable code to bring it more into line with Ada. What this shows is perhaps that the problem is not so much C++, but more how you use it.
After all, C++ by itself has no major issues with memory management or much undefined behavior as long as you keep away from its C compatibility syntax. With RAII (resource acquisition is initialization) and by encapsulating code in classes with well-tested constructors and destructors, you avoid many of the issues that plague C. Add to this C++’s standard template library (STL) with std::string and containers that replace the nightmarish and error-prone C-style strings and arrays, and suddenly you have to try pretty hard to write code with any memory-related issues, because simple buffer overruns no longer happen.
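To make that concrete, a small sketch of the idiom being described: all allocation, growth, and cleanup below is handled by the containers themselves, so there is no size arithmetic to get wrong (the `join` helper is invented for the example):

```cpp
#include <string>
#include <vector>

// With RAII containers, the bookkeeping that causes C buffer
// overruns simply disappears: no strcat, no fixed-size buffers,
// no manual free().
std::string join(const std::vector<std::string>& parts) {
    std::string out;           // owns its memory, grows as needed
    for (const auto& p : parts) {
        out += p;              // no size arithmetic to get wrong
    }
    return out;
}                              // out's buffer is freed automatically
```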
Of course, many who program for a while will be tempted by low-level optimizations, and end up writing things like lock-free ring buffers and zero-copy RPC libraries using raw memory pointers. In such cases you will first of all want a solid understanding of how the underlying hardware works, and to get really familiar with tools like Valgrind. The value of Valgrind in particular is hard to overstate, as with a bit of effort you can analyze your code for memory safety and multi-threading issues (threads, mutexes, etc.) as well as memory usage.
Touch Of Magic
Perhaps in this era of instant-gratification LLM code-generation tools and “cut/paste from Stack Overflow”, we have forgotten the most important thing of all about programming: namely that it is an engineering discipline that requires planning, documenting, and testing, along with the feeling that the more you learn, the less you know, and the easier it becomes to make mistakes as your programs grow more complicated. Speaking as someone who is gradually porting personal C++ projects to Ada, what I have found along the way is that as much as I like C++, there’s something about Ada that really excites me.
Sure, it is a bit of a pain to get used to dealing with the default immutable string types, and it’s all too easy to just reach for the package of predefined integer and float types to get that instant gratification, not to mention the hours staring at the Ada compiler output as it informs you of all the issues in your code. Yet once you’re over those first hurdles, and the program just runs without glitches or oddities like you’re used to with C++ code, that’s an almost magical feeling.
This is perhaps why the ONCD report feels so wrong, as it contains none of the lessons of the past, nor the hard-won experiences of those who write the code that keeps much of society up and running. You can almost hear the cries of many senior software engineers as they wonder whether they’re merely chopped liver in the eyes of government organizations, even as said organizations are kept running due to countless lines of Ada, COBOL, C and C++ code. Never mind the security researchers who despair as basic input validation is once again ignored in favor of buzzwords pushed by a couple of large corporations.
Security and zero-trust.
https://blogs.vmware.com/cloud-foundation/2023/02/20/the-next-generation-in-data-center-security-smartnics-and-dpus/
This is about people using the language not the language. The tool is only as good as the user.
I agree, but using C to program safely is hard. I usually use assembly, or sometimes binary.
This whole trope of “just get better programmers” is a red herring, serving to bait people into a different argument.
There’s been huge CVE impact in high profile projects by senior devs whose code was peer reviewed. There’s been intentional backdoors from code specifically written to look like the intended purpose, but allowing also for an exploit (and this too isn’t caught in peer review).
When an argument exists that national security should rely on just getting better programmers and users, it’s a recipe for more of the same.
I don’t know what else is out there which works like Rust, but having that kind of static analysis *built-in* means that by the time it gets to code review, whole categories of defects simply don’t exist.
The amount of revising I see happening for embedded C code is insane and it’s precisely because footguns in C are a feature.
Hear, hear!
Down with bureaucrats pretending to be experts by referencing their private sector equivalents who are also pretending to be experts!
What really gets me are the politicians pretending to be expert bureaucrats by referencing their private sector manager cronies pretending to be experts. Both organizations have plenty of real expertise, but it tends to get too filtered to be much good at face value by the time it floats to the top. What I hear from this declaration is a desperate plea for people to use, and therefore validate the effort put into, some new programming languages spearheaded by a few industry heavy hitters – and then of course to upgrade all the support hardware and software to things that earn said companies money, some of which is inevitably slipped across the table. I’m always a bit suspicious of calls for “out with the old, in with the new” when it seems the people making those calls are holding on to something else quite tightly.
Politicians aren’t saying this. Most politicians don’t care. This is coming from CERT (US and NZ), GCHQ (UK), NSA, ONI, NSF, NIH and so forth. These aren’t stupid bureaucrats. As someone who gets NSF grants to fund development I know for a fact that they are domain experts. This is getting pushed up to the politicians from below.
No one is going to stop you from writing in whatever language you want. This is advice and, even though I am almost exclusively a C developer, good advice.
You’re 100% correct, but the Edgelords want to run with their novel red herrings arguments.
When he’s arguing this originates with politicians, it’s untrue, he knows it is untrue, AND he wants you to know that he knows it’s untrue. But he’s on a quest to save the world for libertarianism I mean feudalism.
Bravo, Encore!
AMEN
“Effectively, a good programming language knows what your intent is by setting restraints and offering the means to restrict functions and procedures using contract-based programming so that the compiler has as much context as possible.”
I see computer architects as part of this, “knowing intent” and conveying that down to the programmers.
Bjarne Stroustrup kind of responded to a prior report from the NSA blasting C/C++’s lack of memory safety in a recent CppCon presentation: https://www.youtube.com/watch?v=I8UvQKvOSSw
I agree with some of his points (and Maya’s) that there’s a safe subset of C++, but with things like iterator invalidation and default-unsafe-array-operator access in standard collections like std::vector it can still be pretty easy to make a mistake.
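To make those two pitfalls concrete (with a made-up `checked_get` helper): `operator[]` is unchecked while `at()` throws, and any `push_back` can invalidate outstanding iterators:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// at() is the bounds-checked counterpart of the unchecked
// operator[]: an out-of-range index throws std::out_of_range
// instead of being undefined behaviour.
int checked_get(const std::vector<int>& v, std::size_t i) {
    return v.at(i);
}

// The iterator-invalidation pitfall, for reference:
//   auto it = v.begin();
//   v.push_back(42);   // may reallocate; 'it' now dangles
//   int x = *it;       // undefined behaviour, rarely diagnosed
```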
Some of Stroustrup’s points include using things like gsl::owner to help with static analysis of some of the more rough cases. I’m not sure it’s quite there yet, with clang-tidy having some support, but last I checked there was nothing for g++.
Maybe it’s time to learn Ada? I was also interested a while back in the proof-oriented programming language F*, used to produce the HACL* library containing formally verified cryptographic algorithms used in Firefox, but never got around to learning that either… Never enough time in the day!
FYI, clang-tidy is basically a linter or static analyzer with some automated fixes. It can be used in conjunction with whichever compiler you want to use.
Exactly. I’m sure that given enough attention, guidelines, and tools, assembler could be made safe. But most projects using C/C++ would benefit from using a memory-safe language. There are few projects that can justify using C/C++ instead of something like Go, and for the rest there is Rust.
Even with Valgrind, asan, tsan, you have to ensure all code paths are covered. Even then you can have code that uses different offsets and doesn’t trigger access violations.
If I were C++ being associated with the F-35 would not be part of my PR campaign.
Bagging on the F-35 now is like bagging on AMRAAM in 1999. The designers have already gone the hard yards to make the aircraft work. If this is just a military-industrial complex thing, then carry on.
You do have some valid points, specifically that the CVE metric is stupid, but I don’t see this as a red herring. Memory issues are an issue even with skilled, detail-oriented developers. Undefined behaviour, especially in C, is a real issue even if it is widely used. Heck, I just found out that some 15-year-old code that I use has had ‘functional’ undefined behaviour in it ever since it was first written, and it only started to segfault with gcc 13.2.1. It still works under the latest version of clang. It’s not a security issue, but it certainly is a functionality issue, and failures in functionality can lead to security problems.
The government has been very concerned about the impact of poor programming practices – especially in the OSS world (not because it is OSS in and of itself but because of the varying quality of code often hidden in dependencies (e.g. OpenSSL)). They fund a lot of software development through the NIH, NSF, and other agencies and recent solicitations are much more focused on development best practices, security, supply chains, and so forth. The Securing the Software Supply Chain series, especially the Recommended Practices Guide for Developers shows that they’re not idiots. Same goes for The Case for Memory Safe Roadmaps (referenced in the above WH report). They know that moving everyone over to Rust or Go isn’t going to eliminate security issues but it would, inarguably, help reduce the attack surface significantly.
I’ve got about 20 years of C development experience and I don’t think C is going away. I’m not mad about this report because they’re not actually wrong.
I’m interested in the “development best practices” “supply chains.”
US Government tried to spec language for all contracts and subs once already. Ada. Didn’t stick.
Best practices are like standards and ‘design patterns’. So many.
Behind every ‘best practice’ there is a group of ‘experts’. Most don’t code. Got C-, last time they did, as undergrads. Ew gross, actual code…I’m a ‘Computer scientist’…We do math. (Even worse! CS out of business school…spit)
In the end you form your own heuristics.
I completely ignore:
Anybody who even speaks of database normalization past 3rd. The higher the normal form, the harder I ignore them. That’s a ‘best practice’ IMEO.
Anybody who speaks of ‘one true language’…Don’t ignore Java, just the twits, who are mostly gone into hiding by now. New handles are ‘religious’ pro newthing.
Anybody who justifies even a tiny scrap of Javascript on the server. For any reason, but particularly ‘because then everybody already has the tools in hand and can do full stack’. If you let Javascript people loose on your server, you have nobody to blame but yourself. Javascript least of problems at that point. (Javascript running as root because ‘bug’…’was difficult problem’…How to use Javascript to reboot bugged JS server out of memory?..simple)
Additions to list welcome.
This isn’t a government spec/mandate for some language that they invented.
They’re using their voice to nudge government projects to start paying heed to what private security folks have been saying forever now.
CVEs happen in largest part due to C and C++ having a hands off approach to memory. People can say it’s just bad coders, but these things were merged after senior peer reviews.
Rust and Go look really exciting (different use cases) and code reviews can not get bogged down in tracking safety.
so why aren’t they pushing rust? I’ve been writing c++ code since the 80s, and rust is the first language that has come along that has tempted me to change….
Because they don’t want rusty code.
They want transparency! Transparency is good! So they should mandate that all code – not just for the government, but private code too – written in the US of A is written in whitespace. That’ll keep them safe.
Didn’t read the linked report, huh?
One language to rule them all?
I thought the report did recommend Rust.
This is correct. Even the summary highlighted Rust first as a replacement to C and C++.
They are pushing Rust, however Rust is still evolving and still has some bare metal weaknesses that make it less attractive for some embedded real time applications. Rust is also not available on any architecture not supported by LLVM, such as the MicroBlaze.
What I find alarming is that Java is among the languages that they are pushing. There’s no secure JVM, no bare metal JVM.
Hey what’s a CVE? If you’re writing for a non-specialist audience, you need to define the jargon you use.
Common Vulnerabilities and Exposures?
He is writing for a specialist audience. Also, it would have taken you less time to google it than it did to write this comment.
Get off your high horse.
Firstly, hackaday covers many topics, from electronics to programming. I get it in my daily newsfeed -and am not alone there- and this is one of the first times I had to look up an abbreviation.
Secondly, even when writing for experts, it is good practice to define all acronyms at first use.
Damn, you had to look something up? Awful. My sympathies to you and your family.
I disagree, he is writing in a hacking and tech publication, he IS writing for a specialist audience.
I think this time it is on you to follow up and learn what this is.
Writing rigorously-tested-to-be-safe code is a different specialty from hacking, by definition.
No, its not. “Hacking” in its original sense simply meant finding the most optimal solution to a technical problem in order to save resources. That certainly does not imply development of unsafe code.
Cars are dangerous so outlaw cars, right?
I like this one: https://www.efinancialcareers.com/news/2022/09/c-programming-language-safety
Just use the JPL “Power of 10” rules when writing in C and everything is just fine.
https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Developing_Safety-Critical_Code
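Two of those rules in miniature – every loop gets a fixed upper bound, and every input gets checked before use (a sketch with a made-up function, not the full rule set):

```cpp
// Power-of-10 flavour: bounded loops and checked inputs.
// MAX_ITEMS caps both the outer and inner loops, so neither
// can run away even on malformed data.
enum { MAX_ITEMS = 64 };

int sum_line_lengths(const char* const* lines, int count) {
    int total = 0;
    for (int i = 0; i < count && i < MAX_ITEMS; ++i) {  // bounded loop
        if (lines[i] == nullptr) {                       // checked input
            return -1;
        }
        for (int j = 0; j < MAX_ITEMS && lines[i][j] != '\0'; ++j) {
            ++total;
        }
    }
    return total;
}
```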
I thought it odd that the report seemed directed to programmers, probably not one of whom isn’t fully aware of the memory issues and methods (including alternative languages) to address them. Actually, the report must have been intended for mass consumption (which is where it was delivered) by nonprogrammers, presumably for political purposes. This explains why it omits the complexity of the issue (as this article describes), which practitioners would be familiar with. It is agenda-driven, not education-driven.
It is best to be suspect of documents emerging from political offices in an election cycle, and instead pay more attention to the national standards associations and professional publications that mainstream media ignores.
This! But I do think the target audience was managers and HR personnel, and less actual programmers. That’s probably why it’s vague and throws out a bit of a bogeyman instead of producing the kind of white papers or conference reports that are the normal output of government–industry efforts to keep our software at least relatively safer.
“probably not one of whom isn’t fully aware of the memory issues and methods (including alternative languages) to address them.”
Lol. As if that prevents memory bugs and exploits.
It’s sorta directed at devs. It’s a red-letter document to place on top of the pile of evidence they hand to managers every time they say “no really, we’re way past due on upgrading the code from vb6/fortran 66/jacquard looms”
As someone who works with Fortran 66. I concur
I LOVE calculated gotos. Goto Intvar
Goto NextIter is how you bail on solution attempt.
NextIter = NextIter .OR. ReDoFuelShadowPrice
Change entry point Instead of setting flags. Beautiful.
Someone should dig that guy up and take him to Vegas.
To be completely honest, it was worse than that. Their offsets weren’t powers of 2 and they were adding and subtracting rather than OR/NOT.
I know of no other language besides FORTRAN with this gem. Makes function pointers look sane and reasonable. They had to record line numbers to goto them up at runtime.
Some sick multidimensional being needs to implement the calculated COMEFROM intvar
CERT is _not_ a political office.
I have memory issues, but they’re mostly due to my age and not because I’m written in C++
I’ve given up waiting for C++ to get its shit together.
It’s still far too easy to do things wrong, and there are nearly zero good resources on how to do things right. It’s only *now* (like, in the last few DAYS) that they’re even beginning to think about cataloguing all the ways you can run into undefined behaviour.
I’m going forward with rust because I’m not going to wait anymore. I don’t want to take the time to figure out how to do C++ “right” and what’s 20 years of bad practice. I want to use a language that is free of all this BS *today* and rust is it.
That is your right as a programmer as long as your employer goes along with it ;) . I’ve tried Rust at home and it sure slows down my productivity. I just wrote a c++ application for my company and it is working quite well. Our Energy Management System is written in C++ (huge code base). Our SCADA front end is written in ‘C’. All run 24×7 no problem. Anyway, I’ll stick with c/c++, thank you. One thing I don’t do is use ‘every’ C++ ‘feature’ . I stick with the basics even if code may not be as ‘refined’. KISS is my programming philosophy. And BTW, what you consider the ‘right’ way, may be the wrong way for someone else :) . A lot like what I call silly/stupid ‘{ }’ placement that Rust tries to enforce.
I’m not talking about syntactic quirks.
It is stupidly easy to write code in C++ which is fundamentally unsound (and only works by chance due to your combination of compiler version and platform) if you are not *intimately* familiar with the latest version of the standard.
For extra fun, many of these code patterns are ones that were considered *idiomatic* C++ only ten or fifteen years ago. The language has since moved on and raised the bar in terms of safety, but there is still a lot of code, examples, tutorials, and documentation out there advocating for stuff that is Super Bad (TM).
I don’t have time to try to comb through 20 years of language history to find out if something is *actually* safe or merely works because of a fluke of compiler version.
“All run 24×7 no problem.”
Is the software fuzz tested?
Taking a productivity hit at first is pretty normal when learning Rust. We estimated it takes about 6-12 weeks to get up to speed for experienced SWEs. A funny thing, though, is how many people have said “I’m now a better C/C++/Python/whatever programmer because I learned Rust.”
This feeling is true, and there’s probably truth to the feeling also.
As someone who learned on Perl 4, PHP 4, and Python 1.6, “garbage collection” is a term which can be VERY under-appreciated if you don’t have a systems background. It’s easy to think “Cool, so it’s unsetting my variables to recover memory”, and from that infer GC simply avoids wasted resources…
The Rust notion that code you add at line 50, is failing because you made an innocent change at line 40, is an eye opener. It never was about recovering bytes. Learning Rust after Python is an eye opener in what Python’s really doing for you (and how you can make Python work a little bit faster)
This is a big deal for a lot of Python devs. Unfortunately it means you have to wean yourself off the Python method of “just write REPL code until you figure out what you actually want”. Rust’s turning out to be less expressive in that way, but I get it: code that “works” is not the definition of safe, or “done”.
I’ve worked professionally on a lot of C++ projects, including recent ones. They were all suffering from problems that would be preventable now. The bigger the project, the more people, and the longer it runs, the harder it is to keep it safe in C++. This really is a benefit of Rust. I know this may hurt some feelings, but objectively (and statistically) there are safer options now.
C++ is *unwilling* to become memory-safe. Look at `operator[]` on `std::vector`, or UB in `std::optional`. C++ could make them right now as safe as Rust’s equivalents, but it refuses to take the performance hit, and people refuse to rewrite the code that would be affected.
It’s all talk, and wishful thinking that there will be a magical compiler switch or a pragma that makes old code safe.
The core tenet of C++ is “only pay for what you use.” Of course they won’t take a performance hit for mandatory memory safety checking. That would defeat the purpose of the language.
C++ is not a beginner language. It’s powerful, it’s versatile, it has unbelievable range (from direct bit twiddling to templated collections of abstract classes). Such a tool needs deep knowledge, experience, and attention to detail to use correctly. A C++ project requires a high amount of invested effort, for a high level of potential performance and feature complexity. When corporations try to cut corners and push devs to ship features faster, errors start slipping through the cracks. But the problem isn’t that there aren’t any guard rails and speed bumps keeping those pesky devs in check. The problem is the suits trying to squeeze more out of them than what they’re rated for.
If we start enforcing the use of safer languages, devs will still find ways around the guard rails when the suits step on their necks for faster delivery. It’s pointless, maybe even actively detrimental.
You should give Rust a try! It actually becomes much harder and presents much more friction when you try to get around the safety, so I don’t believe that’s a real concern. It’s also dead easy to spot a dev trying to do this in code review.
C++ isn’t a beginner language, but it’s not a particularly advanced language either. A better analogy is that C++ is like a 50 year old table saw. It does the same thing as a modern table saw in terms of cutting wood, but lacks the sawstop, guard, and riving knife of the modern saw. “Well, just don’t stick your hand into the saw!” isn’t really sufficient.
Some languages make things that are trivial to mess up in C++ just unrepresentable. For example, pattern matching with destructuring can make it trivial to express complex logic rules when, e.g., parsing something. This, in turn, means you can expose (and handle!) corner cases you may miss when you have necessarily more verbose code.
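A C++17 approximation of what that comment describes (the `Token` type and `describe` helper are invented for the example): `std::visit` over a `std::variant` forces every alternative to be handled, much like exhaustive pattern matching, though less ergonomically than Rust’s `match`:

```cpp
#include <string>
#include <type_traits>
#include <variant>

// Tokens from a hypothetical parser.
struct Number { int value; };
struct Word   { std::string text; };
using Token = std::variant<Number, Word>;

// std::visit requires the visitor to cover every alternative:
// drop a case and the code stops compiling, so corner cases
// cannot be silently missed.
std::string describe(const Token& t) {
    return std::visit([](const auto& v) -> std::string {
        using T = std::decay_t<decltype(v)>;
        if constexpr (std::is_same_v<T, Number>) {
            return "number:" + std::to_string(v.value);
        } else {
            return "word:" + v.text;
        }
    }, t);
}
```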
But “memory safety” sounds so much sexier and more important than “input validation”. I guess that’s what you get when a politician asks a corporation, which sends a marketing person who barely listened to an actual programmer.
Hey guess what your application did a use after free. How was it caused by input?
CERT is _not_ political.
Okay, now rewrite this entire article from the perspective of an assembly programmer who’s being told they need to embrace structured programming.
That’s the paradigm shift that we’re looking at in the modern programming landscape.
So, one that will be eagerly taken up?
no. and this is an incorrect attitude that i have run into again and again.
structured programming is a huge step forward. it’s essentially impossible to make good assembly code. factoring out common code into function calls, defining the types of variables, supporting recursion and lexical scoping and a few defined looping constructs that play well with these constructs. these are huge. these are fundamental. if they aren’t provided by the language then you will have to bend over backwards to get these things, and it will be very very bad. i have seen attempts to implement structured programming in macro assemblers and it never goes well. it always winds up both slow *and* unstructured, worst of all worlds.
structured programming is *essential* and structured programming *literally has no downside*. for anything larger than a single inner loop, it’s as efficient as raw asm.
but add-ons like object orientation and obscured pointers and dynamic typing and bounds-checked arrays are not in that family. you can get by without them. you can make pidgin versions of these facilities that are 99% as useful as the real ones. and they all come with absurd costs which are never successfully negotiated away.
there are a lot of interesting and promising developments in languages. but it’s a mixed bag.
i’m intrigued by the balance rust is trying to strike but at the same time appalled by the way the language is being developed, and the fact that the way it’s evangelized is in direct contradiction to the way it’s being developed. i was blown away when i used ocaml and found that i could not detect a performance penalty for its frankly amazing functionality. everything C++ is cursed to the bone. languages like python ruby and php seem to exist just to showcase that there was no point learning how to do anything well. i like java but jeeze there are some extremely bad corporate stylesheets out there dominating its real life practice.
so i am not going to say that C should be the end-all, or that the 1980 conceptualization of structured programming is the best. or that we shouldn’t move on a lot faster than we have been. but without equivocation or exception: none of the steps of the future are remotely as big as the past step from unstructured asm to structured programming. that step was singular and there will never be a step of that significance ever again.
Memory safety has always been an issue with C and C++, primarily because of the unconstrained implementation of strings and arrays. That problem is no red herring in any language which permits the use and computation of address vectors. Since the 1970s I have worked with many languages, including assembler, cobol, pascal, pl/1, perl, python, rexx, c++, and c. The heart of the problem is hardware implementations which make address-vector mischief easy. To fix this, variable type, extent, and vectoring must be constrained at or very near the hardware level. In any case, only a few of the suggested solutions are not themselves written in C. This is problematic, especially if a language contains a mechanism which permits excursions into the underlying C and machine code. On the other hand, anything which increases the depth of the probability stack, making a successful intrusion more difficult, is good.
Null termination of strings made it so much worse.
Hell, all in-band signalling in text has been a disaster; it should have been saved for wire protocols. Null termination and special characters in html/xml/shell/etc have all been an unending source of exploits.
Computers are inherently unsafe. Any highly sensitive information should be stored and transmitted physically.
Even supposing memory leaks are the main issue, it should set off alarm bells if managers frame security as a property of the tools developers use, rather than a quality of the work they do.
Granted, a rigorous methodology might call for formal memory-safety guarantees that (say) C can’t provide. But then you don’t need to be told not to use C. Anyone who does need to be told that, by definition, is probably not using a formal approach to safety, and that will still be an issue even if they choose to use Ada or Rust.
It’s like warning surgeons not to use KFC plastic sporks for cornea surgery. The warning’s not wrong, but it suggests a deeper problem if surgeons need to be told that.
Surgeons need to be told to mark where they are going to operate before doing anything permanent, and double check the patient’s identity and what procedure they’re in for before starting. It all seems obvious, but adding checklists and surgical location markings (and different markings on the foot you are not amputating, for example) cut surgical errors quite dramatically.
Memory safety is programming’s handwashing. Semmelweis proved that surgeons washing their hands between examining cadavers and delivering babies dramatically reduced maternal mortality. Surgeons ridiculed him for years, offended at the suggestion that their hands were unclean. He died before Pasteur later demonstrated germ theory.
heh i like the analogy but i think you applied it wrong :)
memory safety is programming’s banning the scalpel.
patients benefit from many past attempts to add procedural safeguards. but no one is taking away the scalpel. even though scalpels are fundamentally dangerous. just like memory dereference operators.
You … uh … don’t really know much about the history of the scalpel, do you? They were reused. Surgeons accepted handwashing before they realized they had to sterilize their tools, too. Now, basically all surgeons use either fully disposable scalpels with a permanent plastic handle or very simple handles with removable, disposable blades. A very few disciplines (mostly surgery on plastic tissues) use diamond blades which are removable and sterilized between uses.
Other disciplines use electrosurgery, which involves directed, high-energy RF applied to rapidly coagulate, desiccate, or vaporize tissue (the latter replacing the use of scalpels for disciplines which use it).
Others use endoscopic methods in which small incisions are made and tiny tools inserted through them to lessen the trauma in joint or thoracic surgeries.
Others use ultrasound (e.g., to break up kidney stones or gallstones) or various forms of high-energy radiation (e.g., radiotherapy for cancer) to avoid the need for cutting at all.
There has been over a century of focus on reducing the use of scalpels in the first place, and on reducing the risk posed when they must be used.
While I don’t disagree with many points you raise, I do find some of it shortsighted. Arguing that languages have libraries/modules that can mitigate some of the memory safety factors, while at the same time talking about lack of validation, is like two sides of a coin. If a developer already fails to implement validation, there is nothing stopping them from failing to implement a more memory-safe technique; hence the case for choosing a language which provides memory safety. Also, most of the languages discussed are quite limited vs. many more recent options (not only Rust) which have this built into the compiler, requiring nothing from the developer.
Second, quantity over quality has a clear benefit this article overlooks. When it comes to reducing the total number of vulnerabilities, sure, this may not remove the highest risk, but it removes the greatest quantity. Given that lateral movement is often one of the most effective strategies allowing hackers to achieve their end goal, removing the low-hanging, and most abundant, fruit can have a very significant impact.
Finally, a LinkedIn study states that every 5 years the number of developers has doubled. If this trend holds, then 5 years from now half of all developers will have less than 5 years of programming experience. New developers can’t simply snap their fingers to gain the knowledge of engineers with 10-20 years of programming experience.
Choosing a memory safe language is a simple option that even a developer with one week of experience can make. Knowing all the options to mitigate a language which does not provide this clearly isn’t, and will take hundreds, if not thousands, of hours before they reach that level.
i think you highlight one of the most potent limitations to extracting real value from intrinsic memory safety :(
“Choosing a memory safe language is a simple option that even a developer with one week of experience can choose.” unwrapping that, you’re regurgitating an OOP myth. rust won’t make a naif produce secure code any more than C++ made that same naif productive.
fundamentally, if the factor we’re trying to account for is that most code is written by people with no skill, experience, judgement, or oversight, then there is no way to achieve security goals in that context. those limitations can’t even be partially mitigated.
language and libraries and validation tools can help a lot in reducing mistakes made by competent programmers. but if the problem is incompetence, there is no mitigation. so like switching your future development from C to rust might be a gain, because it might improve the product of your competent programmers. but it cannot compensate for incompetence.
Not all bugs are created equal; memory safety is important because it so often leads to code execution, and for years trivially so. Replacing an RCE with a DoS is a vast improvement.
The post-hoc no-true-Scotsman disbelief that better tooling reduces errors, in favor of paeans to competence, is mind-numbing.
Everyone fucks up eventually. If you can *mechanically* eliminate classes of errors and *choose* not to because of an aggrandized self-assessment of your own skill, that’s actual incompetence.
Agreed, but not doing so when one is already incompetent leads to more issues because of the memory unsafe nature of the underlying language.
So I’ll restate once more, since taking the counter side of your point proves what I’m saying. An incompetent developer with one week of experience can reduce the number of vulnerabilities they introduce by doing nothing more than choosing a memory safe language, and then, as they gain experience, learn how to handle the rest intentionally. So I’d hardly say there is no solution to incompetence; the solution is getting experience. But that takes time and is not done on day 1 with a single action/choice. Simply choosing a language as a noob developer can reduce the number of vulnerabilities introduced, and by measurement I believe it’s been stated that 50% or more of vulns tend to be memory safety related.
So I feel your statements only validate my original points, even if you feel they don’t and that incompetence cannot be compensated for. I’d say you’re overlooking the basic principle that this would fix potentially 50% of the vulnerabilities that even incompetent developers would introduce…
“…Namely that it is an engineering discipline that requires planning, documenting, testing…”
Thank you, thank you. You’ve been a great crowd tonight. I’m here at the comedy club all week …
I’d like to hear more. Shouldn’t producing a quality codebase be an engineering pursuit?
Should be, but ‘software engineering’ is full of riff-raff, posers and morons.
No coding methodology can survive ‘professional HR’ in software. They can’t spot a ‘good one’ to save their lives; rather, they filter them out.
As soon as a startup hires HR, it starts to die.
Why don’t they ban assembly first?
Is this a joke?
why? assembly is the easiest to write safe code in, i think. i use assembly most of the time to write the pieces of code where safety is needed
Klingon coding
copy con: file.exe
enter op-codes and data with Alt-keypad.
IIRC actually used by rubber duckies.
I find it hilarious that the current White House dares to release any critiques of “memory management.” You know, “the thing.”
CVE quality is a red herring. Memory safety is so critical because any memory safety issue can be leveraged into exploiting the system, be it in the security code or in the debugging code for the about screen. Memory safety issues are so critical because they break the metaphor of the structures we build and reveal that computers are just fancy calculators that do what we tell them to do, which is calculate. All the high-level abstractions come tumbling down.
Ada isn’t the future. Ada is all but dead. The number of universities teaching Ada is actually very small. Yes there is a webpage for which schools teach Ada but if you look at the course offerings of those schools you will find it’s a lie. If nobody is teaching it, demand can’t be all that high. Also Unchecked Conversion. Oh and that rocket that exploded where the rocket engine debug code overwrote the memory of the guidance computer, that was all written in Ada.
As to C and C++, the problem isn’t that you can write unsafe code; it’s that it’s too easy to do so. The language should make it hard to write unsafe code, like C# or Rust (or Ada) do.
> Oh and that rocket that exploded..
Here we go, another person spouting uneducated stories. The reason that exploded was because the *management* tried to cut corners by using a package that was written for the incorrect, previous iteration of the rocket.
I want to trust you here Luke, the side of the story you’re telling sounds much more plausible (knowing how Ada works). Would you happen to have some references about that ?
From [Wikipedia](https://en.wikipedia.org/wiki/Ariane_5), so easily found:
> The software, written in Ada, was included in the Ariane 5 through the reuse of an entire Ariane 4 subsystem despite the fact that the particular software containing the bug, which was just a part of the subsystem, was not required by the Ariane 5 because it has a different preparation sequence than the Ariane 4.
Here is the report from the accident investigation: https://web.archive.org/web/20000815230639/http://www.esrin.esa.it/htdocs/tidc/Press/Press96/ariane5rep.html. Or a brief Wikipedia summary: https://en.wikipedia.org/wiki/Ariane_5#Notable_launches
I have to disagree with you. Your argument, in my opinion, is flawed. First, memory corruption vulnerabilities are still used extensively. Yes, input validation issues are more common. Yes basic logic vulnerabilities are more common. However, you can’t expect a language to stop that. They are logic bugs. Languages can, however, prevent memory corruption bugs. Your argument essentially boils down to the idea that, when used properly, C and C++ are safe. This is true, obviously. The issue is that that doesn’t happen and it never will. Real projects are worked on by many developers with varying levels of experience over many years. You cannot keep memory bugs out of something like that without investing significant time in doing so, and even then you probably won’t. With that being the case, most people should just use a more modern language. And just so you don’t operate under a false assumption of my biases.. I only use C, C++, and python at work, and I absolutely love the languages. Doesn’t mean that the recommendations are wrong though.
I think that the focus on memory safety is a smokescreen. Read Chapter 3 of the whitehouse.gov document: My read is that it lays the predicate for bringing charges against CTO, CIO, CISO for selling (or buying) a product that has a vulnerability.
One of the most expensive attacks of all time is the WannaCry ransomware attack, which heavily relies upon a buffer overflow attack to spread: https://www.scademy.com/the-legacy-code-behind-wannacry-the-skeleton-in-the-closet/ . Thus the claim that “worst CVEs come from programmers not putting in basic checks for path traversal and forgetting to fully check user credentials” is not completely accurate; memory safety caused this one.
This article is one long and elaborate cope. Regardless of whether C++ provides safer abstractions now than it used to, working in C++ means you’ll be working with C++ dependencies, and so the compounding effect of possible safety violations is significant.
Sure, you could shave every morning with a katana, but maybe stop and think for a second about whether you should.
This feels contradictory. You lambaste them for criticizing C’s and C++’s pitfalls, then (with justification) sing the praises of Ada’s comparative safety? “Yet once you’re over those first hurdles, and the program just runs without glitches or oddities like you’re used to with C++ code, that’s an almost magical feeling.” That’s what they’re talking about — or rather, the subset of glitches and oddities that don’t show up as long as there’s benign input.
Ok, path traversal goofs and the like won’t be solved by this, and maybe there’s a point about weighting by how much things are actively exploited, but calculating with such a weighting would add a layer of complication that would bog things down in further debates about metrics. At any rate, the basic point stands: memory safety issues compose a very large proportion of the vulnerabilities discovered, and shifting, where possible, toward safer languages could still yield significant benefits.
Thanks to the author for posting a logical response to those government twits who, as usual, do little but barf up nonsense and cause unnecessary drama.
However, I have to say that during my years working for DOD contractors, C and C++ are EVERYWHERE, and we are talking 30+ years of lava and spaghetti that has NEVER been refactored, because refactors aren’t included in low-ball contracts. 😉
C++ is all over MedTech and Marine Biology in the form of Qt embedded, as well.
Just a few trillion lines of code filled with cobwebs, lint, and lost socks.
I suppose they will just use AI to fix it, since AI will be sitting in the programmer’s chair anyway. I fully expect our lovely corporate overlords go ahead and layoff 500,000 people between 2023 and 2024. They have already reached 310,000 by my count.
> “containers that replace the nightmarish and error-prone C-style strings and arrays”
Sounds more like you’re just pissed C++ was lumped in with C in the report.
Surely it can’t be because Microsoft C# and Google Go are on the list? I’m sure they had nothing to do with it.
Ada SPARK is brilliant, combating memory safety and input validation issues beautifully. They should be singing its praises. It has also proven to be more cost effective than C, C++, or Java, and I expect Rust too.
Your use of unsecure rather than insecure was noticed and appreciated.
This article is clickbait unbecoming of HaD. Look at the amount of pushback!
Not only is the WH article not incorrect, but it is at LEAST 15 years late to the party, coming well after similar reports by security foundations, Microsoft and Google. Honestly the criticism should be what took the WH so long.
the author takes a topic with solid industry consensus, and mischaracterizes the origin as misguided government.
The red herring is the HaD article.
Of course it is. All written from a perspective of an ivory tower of all-knowing guru who is “gradually porting personal C++ projects to Ada” and therefore knows better than 90% of the industry dealing with massive codebases for decades.
Arguing that memory safety issues are not important because input validation is often a root cause of more severe vulnerabilities is like saying we should not care about developing antibiotics against infectious diseases because heart attacks have much more severe consequences. One does not rule out another.
The push for memory safe languages is meant to eliminate the whole class of vulnerabilities in its source – you won’t have memory safety issues at all if you program in a memory safe language. Boom, the whole class of vulnerabilities is gone. How realistic this desire is – it’s a separate discussion.
Insisting that C or C++ can be made safe is a case of quantifier confusion – we don’t care that *there exist* ways of writing code that make it secure. What we want is that *all* ways of writing code in a specific language are secure.
In large scale programming (yes, in those despised corporations that care about how much time a dev spends staring at the screen trying to understand the code, or how much server resources are needed to run valgrind when the code base is 100M lines of code), relying on best-case scenarios of smart engineers doing their best is not a viable tactic. A solution is something that scales to thousands of projects and prevents human-factor mistakes with 100% accuracy.
“Many programming languages were said to be ‘C killers’, yet C is still alive today” ≠ C is still around by its own merits.
“White House urges developers to dump C and C++. Biden administration calls for developers to embrace memory-safe programming languages and move away from those that cause buffer overflows and other memory access vulnerabilities.”
Will the above InfoWorld piece make an impact on the future of C/C++: (1) buggy, (2) malware-vulnerable, (3) nearly unmaintainable software technologies?
And LinuxS?
sudo apt update
sudo apt upgrade
Albert Gore willing, of course.