The government of Argentina has a national ID card system, and as a result maintains a database containing data on every citizen in the country. What could possibly go wrong? Predictably, an attacker has managed to gain access to the database, and is offering the entire dataset for sale. The Argentinian government has claimed that this wasn’t a mass breach, and only a handful of credentials were accessed. This seems to be incorrect, as the seller was able to provide the details of an arbitrary citizen to the journalists investigating the story.
Patch Tuesday
Microsoft has released their monthly round of patches for October, and there are a couple doozies. CVE-2021-40486 is an RCE in Microsoft Word, and this flaw can be triggered via the preview pane. CVE-2021-38672 and CVE-2021-40461 are both RCE vulnerabilities in Hyper-V. And finally, CVE-2021-40449 is a privilege escalation actively being exploited in the wild, more on that in a moment. Oh, and you thought the Print Nightmare was over? CVE-2021-36970 is yet another print spooler vulnerability. The unfortunate thing about the list of Microsoft vulnerabilities is that there is hardly any information available about them.
On the other hand, Apple just patched CVE-2021-30883, a 0-day that’s being actively exploited in iOS. With the release of the fix, [Saar Amar] has put together a very nice explanation of the bug with PoC. It’s a simple integer overflow when allocating a buffer, leading to an arbitrary memory write. This one is particularly nasty, because it’s not gated behind any permissions, and can be triggered from within app sandboxes. It’s being used in the wild already, so go update your iOS devices now.
MysterySnail
Kaspersky brings us a report on CVE-2021-40449 being used in the wild. It’s part of an attack they’re calling MysterySnail, and it seems to originate from IronHusky out of China. The vulnerability is a use-after-free, triggered by making the ResetDC API call from inside its own callback. This layer of recursive execution results in an object being freed before the outer execution has finished with it.
Since the object can now be re-allocated and controlled by the attacker code, the malformed object allows the attacker to run their code in kernel space, achieving privilege escalation. This campaign then does some data gathering and installs a Remote Access Trojan. Several Indicators of Compromise are listed as part of the write-up.
Off to the Races
Google’s Project Zero is back with a clever Linux kernel hack, an escalation of privilege triggered by a race condition in the pseudoterminal device. Usually abbreviated PTY, this kernel device can be connected to userspace applications on both ends, making for some interesting interactions. Each end has a struct that reflects the status of the connection. The problem is that TIOCSPGRP, used to set the process group that should be associated with the terminal, doesn’t properly lock the terminal’s internal state.
As a result, calling this function on both sides at the same time is a race condition, where the reference count can be corrupted. Once the reference count is untrustworthy, the whole object can be freed, with a dangling pointer left in the kernel. From there, it’s a typical use-after-free bug. The post has some useful thoughts about hardening a system against this style of attack, and the bug was fixed in December 2020.
AI vs Pseudorandom Numbers
[Mostafa Hassan] of the NCC Group is doing some particularly fascinating research, using machine learning to test pseudorandom number generators. In the first installment, he managed to break the very simple xorshift128 algorithm. Part two tackles the Mersenne Twister, which also falls to the neural network. Do note that neither of these are considered cryptographic number generators, so it isn’t too surprising that a ML model can determine their internal state. What will be most interesting is the post to come, when he tackles other algorithms thought to be secure. Watch for that one in a future article.
L0phtcrack Becomes Open Source
In a surprise to me, the L0phtcrack tool has been released as open source. L0phtcrack is the password cracking/auditing tool created by [Mudge] and company at L0pht Heavy Industries, about a billion years ago. Ownership passed to @stake, which was purchased by Symantec in 2004. Due to export regulations, Symantec stopped selling the program, and it was reacquired by the original L0pht team.
In April 2020, Terahash announced that they had purchased rights to the program, and began selling and supporting it as a part of their offerings. Terahash primarily builds GPU based cracking hardware, and has been hit exceptionally hard by the chip shortage. As a result of Terahash entering bankruptcy protection, the L0phtcrack ownership has reverted back to L0pht, and version 7.2.0 has been released as Open Source.
People keep saying “if we were more careful in our C and C++ coding, we would not have these issues” but somehow that never happens, and we still keep getting the same security issues over and over again for decades now. Will the C and C++ folks ever learn better, or are we destined to keep repeating this forever?
It’s (at least until we come up with a hive-mind) impossible for everyone to learn from the mistakes of the few, so yes, we are destined to keep repeating the same mistakes as long as the language itself allows for that. As an example of a pretty popular language that makes many of the mistakes C/C++ allows much harder to commit, one could bring up e.g. Rust.
We need more than rust, we need languages with built in associative arrays so we can do stuff like userParams[“joe”] without that extra goop that causes so many problems. We need automatic persistence and serialization of variables because there are too many bugs associated with state preservation. We need languages to do encryption and decryption directly because there are too many bugs associated with buffer handling and encryption. I’m just getting started here, look at your own programs and their sore spots and imagine how a better language could help.
“userParams[“joe”] without that extra goop”
– What extra goop?
“automatic persistence and serialization of variables”
– We’ve all seen how badly garbage collection manages memory at times, it leads to other issues.
Languages to do “encryption and decryption directly because there are too many bugs associated with buffer handling and encryption.”
– I don’t think it’s a good idea to bake this in as it would require major language version updates each time a protocol or encryption standard changes
Everyone can complain, but to effect change requires a bit more than that
Garbage collection always seemed like a “language smell” to me, having grown up with Turbo Pascal and Delphi where the programmer is always responsible for wiping up after himself as far as memory usage is concerned. I’m still surprised at how unpopular Delphi / Free Pascal is even after everyone has started smelling the garbage, when there is a simple alternative, and why M$ still decided to go with the smelly option after [poaching Anders Hejlsberg](https://microsoft.fandom.com/wiki/Anders_Hejlsberg), the architect of Delphi and one of the main C# architects.
“if we were more careful in our $PROGRAMMING_LANGUAGE coding, we would not have these issues”
But for which values of $PROGRAMMING_LANGUAGE is that actually happening?
Not defending C/C++, but I don’t really believe the programming language is the main problem.
Both bugs mentioned here are use-after-free bugs, usually only seen in C and C++ programs; they happen because C and C++ force you to manage your own storage. Other languages manage their own storage and are immune to these programming errors.
The real issue with these languages is pointers. The flaw is that array indexing and pointer arithmetic are considered to be the same thing. Thus d[-1] is legal code to execute despite being an obvious bug. This can’t be fixed because it is the fundamental defining feature of C, the whole language depends on it.
Doesn’t someone have to write into the code how that storage will be handled? Automation is just another person’s code, at the current time. When decisions are made for you and complicity is your comfort, you should be the user, not the maker. I don’t know though, it’s all spaghetti and meatballs to me. ;)
“does not someone have to write into the code how that storage would be handled”
Yes and there will be one implementation of this algorithm instead of thousands. It will have a standard API and an extensive unit test suite. It will get far, far more testing and scrutiny than any of the thousands of user implementations.
We are all terrible programmers, don’t pretend you are better.
:) Lol never said I was better than anyone, Poster identified as X. I’m going to make the assumption here that you mistook the ‘you’ and ‘your’ in my previous post as being directed at you as an individual rather than as placeholders not assigned to a specific individual. Although I will admit I didn’t take into consideration that anyone reading it would assign their self-identity to the placeholders {you and your}. Linguistics isn’t my forte. Have a fun day
But which language are you saying is going to free people from the need to be “more careful”?
In the end it would just seem you are trading paradigm and/or implementations flaws for different paradigm and/or implementation flaws and a whole set of different (and potentially more obscure) programming errors.
Honestly, if you see syntax idiosyncrasies such as “d[-1] is legal code” as the problem, then you are looking at the wrong problem – and I don’t think arbitrarily changing programming language will fix it.
The proof is in the pudding. Check out the CVE database and see where humans fail to code correctly. You should also read Mozilla’s analysis of the Firefox code that led to the development of Rust. We make the exact same mistakes over and over again because the language lets us do it.
You can’t have pointer bugs in a language that doesn’t have pointers. You don’t have use after free bugs in languages that manage their own storage. It really is that simple.
Don’t put C and C++ in the same category. Yes, you can write C++ code as if you were using C, but that is due to a lack of modern C++ knowledge.
It is perfectly feasible to eliminate manual memory allocation/deallocation and many more pointer-related issues.
Another issue with C is the POSIX interface. One of these bugs is in ioctl, a horrible mess that should have gone away a long time ago. It is a totally generic function interface and its impossible to sanitize or validate the input parameters. It will be a source of bugs forever until it is deprecated and removed.
In many cases the language isn’t helpful – many ever-repeating failures of the programmer are protected against by languages like Rust… It’s why they exist!
With those protections in place, poor code and insecure code can of course still be written, but the failures that come up time and again won’t be among them – assuming the compiler is correct – and with so many eyes on that being the foundation of everything, it shouldn’t have so many flaws or have them last too long…
It’s been shown repeatedly that even the most expert of C and C++ developers fail to see blatant and obvious security flaws right in front of their eyes, so it’s not clear that more eyes will help.
True and not true…. Code reviews are still worth it. Back when I was supervising a group of programmers, we found many a flaw that way. Humans aren’t perfect so you never catch them all…. But some is better than none! Of course we were not ever looking for ‘security’ flaws, just looking for ‘correctness’. Who cared about security! :rolleyes: .
You need testing that is reliable, humans are not. Humans will skip running the last test on Friday afternoon, they are on the phone when the weird error message scrolls off the screen. Humans make typos all the time and they miss typos all the time. Projects have maintenance and bug fix releases all the time, massive testing is required, humans are not cut out for that kind of boring repetitive work.
The only effective testing is automated testing.
When you have to backport your security fix to six old versions, to cover all the LTS releases, you will think twice about using humans for testing.
“The only effective testing is automated testing.” says a lot about the problem. The issue isn’t the language, though it doesn’t help it. Direct memory access is a defining feature of C and C++, to change this is a backwards incompatible move. It’s almost like enforcing typing on an untyped language.
Automatic testing is and should be a process in every company that cares about security. Even languages that have garbage collections / indirect memory access should do this, because they are NOT immune to bugs outside of invalid pointer access.
Thanks for answering my question, rather than just providing more opinions on why C/C++ isn’t suitable.
Rust would also be an alternative I would consider, but you’re still trading problems for other problems. While Rust might mean you can be “less careful” with memory, it’s a fairly complex language and you’ll find you need to be “more careful” in other places (and Rust is newer, so getting experienced programmers will be harder). Also, Rust has ‘unsafe’, which will allow the programmer to undo the safety gains from the memory restrictions. (The OP is complaining about language features; Rust also has unsafe[1] language features…)
[1] I like that Rust uses ‘unsafe’ as the keyword to enable unsafe behavior, this does feel like an advancement.
Yes, you do need to be more careful with Rust, but your errors will be caught by the compiler, and that is the whole point. You will grind your teeth and curse at the compiler, but it will only let you write good code.
As it stands, there isn’t any programming system capable of more complex programs in which it isn’t also trivial to create bad programs – and I don’t see how there ever can be – if everything in the program logic flow, variables etc. is so automated and ‘safe’, then you can’t actually do anything new with it. Rust and the other languages like it are a great step in the right direction, though – taking away from the programmer the elements humans are really, really terrible at.
I use a calculator or bit of paper (or more often the side of the object I’m working on as paper) to do basic arithmetic I can do in my head whenever it even remotely matters – because no matter how good you are at it, it’s so easy to miss a carry, or forget to add the other fraction etc… Same thing with programming languages – the best ones are the ones that make those errors harder to make, or easier to spot – so to some extent the Scratch-like, very visual logic-flow programming systems are a good thing – they lay the whole program out in an easy way to see all the links between loops and logic – but those are generally quite limited in how complex you can really make a program.
So really the thing to ask is not necessarily what language (though as above, a language that deals with the bits humans are bad at is a good choice), but how much design work you do before programming anything at all – if you spend lots of man-hours on the flowcharts of logic, looking at them with both security and function in mind, you wouldn’t forget to erase the secret that is no longer used etc., and you would naturally break the program up into small chunks that each do just one thing and do it properly – the only way you can really make something that complex and comprehend that all the chunks you are looking at work properly…
People say that C++ fixes a lot of the problems, but the kernel API is C, not C++, so your C++ code is still dealing with pointers when it has to call into the kernel.
I’m currently reading through some pretty optimal C code that was written for a NXP LPC4370 MCU (32-bit ARM Cortex-M4, with 2 ARM Cortex-M0 co-processors, 282KiB of RAM. 128KiB of which has been allocated as two 64KiB buffers for USB 2 High Speed transfers and another 64KiB for DMA of ADC samples). Primarily C but also some handcrafted SIMD assembly code was used for a timing critical function. Using C with a few dozen lines optimal assembly code allowed the chip to be underclocked down to 120MHz (which doubles the lifetime of the product by running 10°C/18°F colder, reduces EMI noise generated and lowers the power required for operation). One M0 core takes care of USB transfers, the other M0 core is mostly idle after setting up DMA for the ADC samples and the M4 core if required carries out bitwise manipulations (SIMD) on the samples from the 12-bit ADC for more efficient use of USB throughput.
On paper, projects like the above could be created using Rust, but it adds time to the development cycle getting the compile environment into a functioning state to support the specific hardware device. Additional vendor support APIs are probably written in C (or C++), and you could spend time migrating the vendor code to Rust, or just roll with what you have and use the API (and totally ASSume that the hardware manufacturer has written secure, bug-free source code). Your options are to go with assembly, C (or C++) and save time now and get your product to market fast (fixing all problems later, with updates that non-technical people will probably never install unless forced), or invest additional time now migrating everything supplied into Rust and delay the time to market, possibly making your product obsolete before it is even available.
I’m not saying anything bad about Rust (I think that it is great), but what I am saying is that ASM/C/C++ is still the default just about everywhere.
I’m rather surprised you still put assembly in the list – while it does have its own plus points, it’s just a drag to use, to the point that assembly programmers seem to be a rapidly dying breed…
Which for me is also why C and C++ are defaults – folks hitting retirement and folks who just barely started work would all have been taught them, because they have been the best option for quite some time. Only pretty recently have Python, for its easy nature, and Rust etc., for their security, really taken off enough to be the most likely to be taught and used (so it’s still a little hard to find Rust masters), and most of the other more common languages tend to be a little too specialist in some way, or just another C-alike, so from the point of view of programming with it nobody really notices it’s not C…
I think you should have mentioned that MysterySnail is a Windows vulnerability.
Microsoft still doesn’t want to escape the nineties.
A big problem with Linux is that none of its internal APIs are standardized, so it is difficult if not impossible to create unit test suites for it. Testing consists of throwing pre-releases out to the masses for integration testing. Blatant and obvious bugs pass right through this process because none of this testing is mandatory and nobody checks to see if all the tests are actually run.
In contrast, Oracle and IBM have standardized their kernel interfaces, they have extensive unit test suites, there are fewer edge case bugs, and there is far less cringing and hand waving when upgrades happen.
I think the issue here is integration testing vs unit testing.
A unit test should just test one function. Standardization should not matter because each unit test is tailored to each function.
Integration testing is where you’re testing a function that calls others. The data type and arguments passed between them matter in this case.
I totally agree that testing needs to be automated, and that pull requests that don’t include tests or fail the automatic test runner should not be merged until the test is fixed or the code is fixed. This is something that needs to be applied from the top down (management / project enforced)
That being said, I don’t know enough about linux development specific domain knowledge to know how this applies, but this is foundational testing strategy that I’ve learned.
Do painters say “we don’t need masking tape and tarps because we never spill a drop”?? No, they freely admit their human frailty and they use lots of masking tape and they cover stuff with tarps. Programmers should have the same attitude.
Raising programming from a craft to the level of engineering.
One of the reasons why Fortran is still in use is compiler optimization. Fortran is a very restricted language with few fancy features, so the compiler can make extensive assumptions about the runtime environment and create huge optimizations. This is why numeric algorithms are often written in Fortran. It’s also much easier to prove correctness when you can reduce complexity, very important for numerics. Fortran doesn’t give you much wiggle room for tricky coding, so you just don’t see the horrible bugs that you get all the time in C. Admittedly, I wouldn’t want to code up a web app in Fortran, but it is a good example of how a different language can score big benefits.
Let’s suppose you want to iterate over an array. In C there are about a hundred different ways to code up a loop, all valid, some with weird behavior, some fast, some slow. How do you pick the right one? Maybe one kind of loop works better on arm and another on x86_64. So now we’ve got ifdefs and multiple implementations and it’s a big mess.
Why should it matter? Why should you care? What you really want is a foreach operator and let the compiler figure out the best way to loop. But you can’t have a foreach operator in C because pointers don’t know how many elements they are pointing at.
L0pht! Didn’t know they were still around. I had their “Whacked Mac Archives” back in the day…
Nope. The best painters transition freehand, no silly (and ultimately useless) masking tape.
Argentine law enforcement has identified the authorized user account used to download the database from the internal closed VPN, no hacks nor security breach involved (at least not against the main database, maybe against that user’s computer).
I am more surprised there are countries without a centralized database of all their citizens. But I come from the Netherlands, whose population registry ultimately stems from the Napoleonic age, and the registry brings enormous ease for law-abiding citizens and government alike, although we have had a few bad experiences with it (WWII, when Jews were easily found and the illegal resistance could not get food stamps – which led to widespread attacks on population registries).
“L0phtCrack is Now Open Source”
Any Linux port?
This seems to be for Windows only, and you need non-free compilers.
so, possibly a stupid question but:
When bugs or exploits allow a buffer overflow, you can write to adjacent memory spaces. But how does that help you? how would you know what is stored in those spaces? aren’t you just likely to end up with garbage variables and then the program will crash?
I’m unsure how a hacker can use this exploit.
The goal is to overwrite the stack’s return address for the function the variable is in. Since every function call writes its own additional memory pointers and variables to the stack and then eventually calls return you are only messing up that function’s variables. The actual program executed is elsewhere in memory. Most of the time you will still be able to get the function to return even if you put garbage in its local variables. Since you overwrote the return pointer you control what code gets executed next. In the most basic sense you can point at the memory address of the stack immediately after where the return address was and put your code there, but there are various mitigations that can foil this. In that case you have other options though that just require more work to exploit. In the end you control what the program does next though one way or another.
thanks for the reply.