Crash Your Code – Lessons Learned From Debugging Things That Should Never Happen™

Let’s be honest, no one likes to see their program crash. It’s a clear sign that something is wrong with our code, and that’s a truth we don’t like to face. We try our best to avoid such a situation, and we’ve seen how compiler warnings and other static code analysis tools can help us detect and prevent possible flaws in our code, which could otherwise lead to its demise. But what if I told you that crashing your program is actually a great way to improve its overall quality? Now, this obviously sounds a bit counterintuitive; after all, we are talking about preventing our code from misbehaving, so why would we want to purposely break it?

Wandering around in an environment of ones and zeroes makes it easy to forget that reality is usually a lot less black and white. Yes, a program crash is bad — it hurts the ego, makes us look bad, and most of all, it is simply annoying. But is it really the worst that could happen? What if, say, some bad pointer handling doesn’t cause an instant segmentation fault, but instead happily introduces some garbage data to the system, opening the gates wide to virtually any outcome imaginable, from minor glitches to severe security vulnerabilities? Is this really a better option? And it doesn’t have to be pointers, or any of C’s shortcomings in particular; we can end up with invalid data and unforeseen scenarios in virtually any language.

It doesn’t matter how often we hear that every piece of software is too complex to ever fully understand, or how everything that can go wrong will go wrong. We are fully aware of all the wisdom and clichés, and we completely ignore them, or weasel our way out of them, every time we put a /* this should never happen */ comment in our code.

So today, we are going to look into our options to deal with such unanticipated situations, how we can utilize a deliberate crash to improve our code in the future, and why the average error message is mostly useless.

When Things Go Wrong

Let’s stick with a scenario where we end up with unexpected garbage data. How we got into such a situation could have many causes: bad pointer handling, uninitialized variables, accessing memory outside defined boundaries, or a bad cleanup routine for outdated data — to name a few. How such a scenario ends depends, of course, on the checks we perform, but more importantly on exactly what data we’re dealing with.

In some cases the consequences will be fairly obvious and instant, and we can look into them right away, but in the worst case, the garbage makes enough sense to remain undetected at first. Maybe we are working with valid but outdated data, or the data happens to be all zeroes and a NULL check in the right spot averts the disaster. We might even get away with it altogether. Well, that is, until the code runs in a whole different environment for the first time.

Everything is easier with an example, so let’s pretend we collect some generic data that consists of a time stamp and a value between 0 and 100 inclusive. Whenever the data’s time stamp is newer than the previous one, we shall do something with the value.

struct data {
    // data timestamp in seconds since epoch
    time_t timestamp;
    // new data value in range [0, 100]
    uint8_t value;
};  

void do_something(struct data *data) {
    // make sure data isn't NULL
    if (data != NULL) {
        // make sure data is newer than the previous
        if (data->timestamp > last_timestamp) {
            // make sure value is in valid range
            if (data->value <= 100) {
                // do something with the value
                ...
            } else {
                // this should never happen [TM]
            }
            // update timestamp
            last_timestamp = data->timestamp;
        }
    }
}

This seems like a reasonable implementation: no accidental NULL dereferencing, and the logic matches the description. That should cover all the bases — and it probably does, until we end up with a pointer that leads to a bogus time stamp thousands of years from now, causing all further value processing to be skipped until then.

Oftentimes, a problem like this gets fixed by adjusting the validation check. In our example, we could include the current time and make sure that time differences are within a certain period, and we should be fine. Until we end up in a situation where the time stamp is fine, but the value isn’t. Maybe we see a lot of outliers, so we add extra logic to filter them out, or smooth them out with some averaging algorithm.
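To see where that road leads, here is a rough sketch of what the patched-up version might look like, building on the earlier do_something() snippet. The MAX_AGE window and the crude running average are hypothetical additions, purely for illustration:

#define MAX_AGE 60          // hypothetical: accept data at most a minute old

static unsigned average;    // hypothetical: smoothed value to tame outliers

void do_something(struct data *data) {
    if (data != NULL) {
        time_t now = time(NULL);
        // newer than the last data, not from the future, and not too old
        if (data->timestamp > last_timestamp &&
            data->timestamp <= now &&
            now - data->timestamp <= MAX_AGE) {
            // make sure value is in valid range
            if (data->value <= 100) {
                // smooth suspected outliers with a crude running average
                average = (average * 7 + data->value) / 8;
                // do something with the smoothed value
                ...
            }
            // update timestamp
            last_timestamp = data->timestamp;
        }
    }
}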

As a result, the seemingly trivial task of checking that the data is newer and within a defined range has exploded in overall complexity, potentially leading to more corner cases we haven’t thought about and will have to deal with at a later point. Not to mention that we ignore the simple fact that we are dealing with data that shouldn’t be there in the first place. We’re essentially treating the symptoms and not the cause.

Crash Where Crashing Is Due

The thing is, by the time we can tell that our data isn’t as expected, it’s already too late. By working around the symptoms, we’re not only introducing unnecessary complexity (which we most likely have to drag along to every other place the data is passed on to), but are also covering up the real problem hiding underneath. That hidden problem won’t disappear by ignoring it, and sooner or later it will cause real consequences that force us to debug it for good. Except, by that time, we may have obscured its path so well that it takes a lot more effort to work our way back to the origin of the problem.

Worst case, we never get there, and instead, we keep on implementing workaround after workaround, spinning in circles, with the next bug just waiting to happen. We tiptoe around the issue for the sake of keeping the program running, and ignore how futile that is as a long-term solution. We might as well give up and abort right here and now — and I say, you should do exactly that.

Sure, crashing our program is no long-term solution either, but it also isn’t meant to be one. It is meant as an indicator that we ended up in a situation we didn’t anticipate, and that our code is therefore not prepared to handle it properly. What led us there, and whether we are dealing with an actual bug or simply flawed logic in our implementation, is a different story, and for us to find out.

Obviously, the crash itself won’t solve the problem, but it will give us a concrete starting point to look into what’s hidden underneath. We probably would have ended up in that same spot if we worked our way back from a crash happening somewhere a couple of workarounds later, but our deliberate crash early on lets us skip that and gives us a head start. In other words, spending a few minutes on a minor nuisance like implementing a proper check can save us hours of frustrating debugging down the road.

So let’s crash our code! A common way to do that is assert(), where we state an expected condition, and if that condition is ever false, the assert() call will cause the program to abort. Let’s go the extreme way and replace all conditions in our example with assertions.

void do_something(struct data *data) {
    // make sure data is not NULL
    assert(data != NULL);

    // make sure the timestamp is valid and update it
    assert(validate_timestamp(data->timestamp));
    last_timestamp = data->timestamp;

    // make sure the value is in valid range
    assert(data->value <= 100);

    // do something with the value as before
    ...
}

Now, at the first sign of invalid data, the corresponding assertion will fail, and the program execution is aborted:

$ ./foo
foo: foo.c:64: do_something: Assertion `data->value <= 100' failed.
Aborted (core dumped)
$

Great, we have the crash we are looking for. There are only two problems with assertions.

Assertions Are Optional

By design, assertions are meant as a debugging tool during development, and while the libc documentation advises against it, it is common practice to disable them for a release build. But what if we don’t catch a problem during development, and it shows up in the wild one day? Chances are, that’s exactly what’s going to happen. Without the assertion code, the check that could have caught the problem is never performed, nor do we get any information about it.
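That’s because assert() is a macro that expands to nothing once NDEBUG is defined, which release builds commonly do straight from the compiler command line. A minimal illustration (the check_value() function is made up for this example):

#include <assert.h>
#include <stdint.h>

void check_value(uint8_t value) {
    // compiled normally, this aborts with a message if value > 100;
    // compiled with NDEBUG defined (e.g. gcc -DNDEBUG foo.c), the
    // assert() expands to nothing: no check, no message, and the
    // bad value sails right through
    assert(value <= 100);
}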

Okay, we are talking about purposely crashing our code here, so we could just make it a habit to always leave the assertions enabled, regardless of debug or release build. But that still leaves one other problem.

Assertion Messages Are Useless

If we take a look at the output from a failed assertion, we know exactly which assert() call failed: the one that made sure the value is in valid range. So we also know that we are dealing with an invalid value. What we don’t know is the actual value that failed the assertion.

Sure, if we happen to get a core dump, and the executable contains debug information, we can use gdb to find out more about that. But unfortunately, we don’t often have that luxury outside of our own development environment, and we have to work with error logs and other debug output instead. In that case, we are left with output of very little value.

Don’t get me wrong, knowing where exactly something went wrong is definitely more helpful than no hint at all, and assertions offer great value for little effort here. And that’s the problem: if the alternative is no output at all, then yes, knowing in which line our problem occurred seems like a big win we could settle for. Considering the popularity error handling usually enjoys among programmers, it’s easy to see why we would be happy enough with that — but honestly, we shouldn’t be. Plus, it promotes bad habits when we write error messages ourselves.

Crashing Better

If you ever find yourself in a situation where you have a myriad of reports of the exact same issue, and you are lucky enough to have an error log available for each individual incident, you will learn how frustratingly helpless it feels to know that a certain condition failed, but to have zero information about what exactly made it fail. That is when you realize, and learn the hard way, how useless, and almost counterproductive, error messages of the form “expected situation is not true, period” really are without any further details.

Consider the following two error messages:

  1. Assertion `data->value <= 100' failed
  2. data->value is 255, expected <= 100

In the first case, all we know is that the value is larger than 100. Since we’re dealing with an 8-bit integer, that leaves us with 155 possible values, and we might have to mentally go through every single one of them in order to understand what could have gone wrong, jumping from one uncertain assumption to the next, trying to find out what value could have caused all this.

However, in the second case, we can skip all that. We already know what value caused the error, shifting our debugging mindset from a generic “why did we get an invalid value?” to a concrete “how could we have ended up with 255 here?”. This gives us another head start in finding the real problem underneath.

So instead of sticking with assertions and their limited information, let’s implement our own crash function, and make it output whatever we want it to. A simple implementation using a variable argument list could look something like this:

#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>

void crash(char *format, ...) {
    va_list args;

    va_start(args, format);
    vfprintf(stderr, format, args);
    va_end(args);

    exit(EXIT_FAILURE);
}

This way we can format our error messages the same way we do with printf(), and we can add all the information we want:

if (data->value <= 100) {
    // validation passed, handle the data
    ...
} else {
    crash("data->value is %d, expected <= 100\n", data->value);
}

Note that unlike the assertion’s output, we don’t get any information on the exact location here, but that’s just for simplicity. I’ve put a more elaborate crash() function that outputs more details, including a function backtrace, on GitHub in case you’re curious.
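For the simple version shown here, one way to get the location back is to wrap crash() in a macro that passes along the standard __FILE__, __LINE__, and __func__ values. This is only a sketch, not the version from GitHub, and the ## before __VA_ARGS__ is a gcc/clang extension that swallows the comma when no extra arguments are given:

// prepend file, line and function, then forward everything else to crash()
#define CRASH(fmt, ...) \
    crash("%s:%d: %s: " fmt, __FILE__, __LINE__, __func__, ##__VA_ARGS__)

// usage:
//   CRASH("data->value is %d, expected <= 100\n", data->value);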

On a quick side note regarding C, Google has developed a bunch of sanitizer tools, nowadays integrated into gcc and clang, that are worth looking into.
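Assuming a reasonably recent gcc or clang, enabling them is usually just a matter of an extra compiler flag, for example:

$ gcc -g -fsanitize=address,undefined foo.c -o foo

The instrumented binary then prints a detailed report at runtime when it detects something like an out-of-bounds access or signed integer overflow.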

No Such Thing As “Too Much Information”

Keep in mind, we are focusing on problems we didn’t anticipate. Some “this should never happen” case that magically did happen. Admitting that it actually could happen, and therefore adding the proper checks for it, is an important first step. But how do we know what to put in our error message? We don’t know yet what we actually need to know, or what could have gone wrong — if we did, we wouldn’t consider it a never-to-happen scenario, but would try to prevent it from the beginning.

The simple answer: all of it.

Well, obviously not all, but every detail that could be in the slightest way relevant and related to the situation is likely worth including in the error message. Once we add validation checks, we have all that information available anyway, so why not use it?

Take the time stamp in our data collection example: just because it was successfully validated doesn’t mean we should forget about it. It might still offer valuable debug information for a failed value validation. Who knows, maybe it reveals an issue at every full hour, or every day at 6:12:16 PM, or shows no pattern whatsoever. Either way, chances are it will help us narrow down the debug path, and take us yet another step closer to the actual problem.

And even if it doesn’t, and the extra information turns out to be completely irrelevant, we can always filter it out or ignore it. However, what we can’t do is add it after a crash. So don’t be shy about adding as much information as possible to your error messages.
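Applied to the running example, that could look something like the following, with the already-validated timestamp riding along as extra context (last_timestamp is the variable from the earlier snippets):

if (data->value <= 100) {
    // validation passed, handle the data
    ...
} else {
    // include the current and previous timestamps as extra context
    crash("data->value is %d, expected <= 100 (timestamp %lld, last was %lld)\n",
          data->value,
          (long long) data->timestamp,
          (long long) last_timestamp);
}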

Choose Your Battles

Of course, not every unexpected situation or invalid data scenario necessarily calls for a crash. You probably wouldn’t want to abort the whole program when validating random user input, or if the remote server you request data from is unreachable, or in pretty much any case that deals with data outside your direct control. But on the other hand, those situations aren’t fully unexpected either, so having a default fallback in place, or outputting an error without the crash, is a valid way to deal with them.
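For those recoverable cases, a non-fatal sibling of crash() that reports the same level of detail but lets the program carry on can do the job. A minimal sketch, with the name complain() made up for this example:

#include <stdio.h>
#include <stdarg.h>

// like crash(), but only reports the problem and returns to the caller
void complain(char *format, ...) {
    va_list args;

    va_start(args, format);
    vfprintf(stderr, format, args);
    va_end(args);
}

The caller can then log the details and fall back to a default value, retry, or skip the bad input, instead of taking the whole program down.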

Nevertheless, making it a habit to provide meaningful information with as much detail as possible can help everyone involved to understand the problem better. To give a few examples:

  • Parsing input failed vs
    Invalid parameter abc for command foo
  • Error loading data vs
    Connection timeout while requesting data from server xyz
  • Assertion `data->value <= 100' failed vs
    [1547420380] data->value (0x564d379681fc) was 234

As programmers, we grow up being indoctrinated on the importance of error handling, but in our early years, we rarely learn how to properly utilize it, and we might fail to see any actual benefit or even use for it at all. As time goes by, error handling (among other things like code documentation and testing) often becomes this annoyance that we just have to deal with in order to keep others happy: teachers, supervisors, or that one extra-pedantic teammate. We essentially end up doing it “for them”, and we easily overlook that we ourselves are actually the ones who can benefit the most from it.

Unfortunately, no storytelling can substitute for learning that the hard way, but hopefully this still offers some food for thought and a new perspective on the subject.

In that sense: Happy Crashing!

(Banner image from the long-lost Crash Bansai gallery of deformed toy automobiles for morbid miniature gardening.)

58 thoughts on “Crash Your Code – Lessons Learned From Debugging Things That Should Never Happen™”

  1. I remember back in the day…the original IBM PC 5150 had parity-checked RAM, and if a parity error occurred it cleared the screen, printed “PARITY ERROR” and hung. This used to happen occasionally and drove me mad because of the thoughts that a) instead of losing one single byte I lost the whole bloody lot, and b) one in 9 times the problem will have been caused by the parity check RAM itself!

    Well, OK, overly simplistic, but the lesson is: once an error has been detected, try and allow graceful recovery, and try and allow user data recovery.

    1. Parity is for professionals who would rather have a crash than an error in their data. Imagine a bit flipping in your Lotus 1-2-3 bookkeeping spreadsheet. Sometimes a crash is better than users being unaware that their data might be corrupted when they have no means to find where that bit has fallen over.

      1. Therefore, there is no point to any of this nonsense because there is really no point in testing anything but a release build. Any and all attempts to add debugging code or assertions are doomed to failure. QED

    1. Leave the assertions in! Then you never hit the race condition. Problem solved. :)

      But seriously, I know what you mean: I’m one of the guiltiest printf debuggers in the world. Removing the serial overhead has made a difference more than once in my life.

      As has the difference in speed between code running out of RAM and flash. “Alright, this code’s debugged and tested. Time to flash it in permanently.” Sigh.

  2. Good article! I’m a bit terrified that people actually need to be told to put asserts in their code, but it feels like these days, what’s old is new again.

    Here in game development, at least at the 5 or so studios I’ve worked at, peppering your code with asserts is a way of life. Code should fail fast and as close to the source of the error condition as possible in order to make debugging easier. Trying to make your code limp along while bad data is thrown at it is a good way to make it all but impossible to sort out the actual origin of the problem you’re trying to debug.

  3. A software engineer walks into a bar. He orders a beer. Orders 0 beers. Orders 99999999999 beers. Orders a lizard. Orders -1 beers. Orders a ueicbksjdhd.

    First real customer walks in and asks where the bathroom is. The bar bursts into flames, killing everyone.

    1. Definitely needs a keyboard alert/warning! All too real.

      Anyone else here notice that the suggested solution is more or less implementing perl’s “die”? (or newer languages copy of same) Good to see it back ported.

      Printf debugging has saved my bacon far too many times to count, and anything that makes it easier and thus more likely is a good thing IMO.

      But yes, about the race conditions – you really do have to build a good model in your head of what can go on when there’s more than one thread, no one promised that’d ever be easy – and if you believe that all software is too complex to understand, you might do better to take up another occupation. It’s only true if you code at the screen and futz around till it seems to work. If you actually _design_ your code, it seems you’d understand it. Coder != systems architect.

      There are things faster than printf…toggle a pin if you’re not on some major opsys that keeps you off the hardware.
      Scopes are still useful in this day and age. While you can’t avoid all race conditions – it’s not even possible in hardware (see metastability), you can cut the likelihood way down.

  4. I’ve mentioned it on HaD before, but software companies could have hired my 3 year old (at the time) daughter to test their products. Her lightning fast random key and mouse clicks often had my computer screaming “Uncle!”

  5. Many years ago, I worked for a small computer firm [anyone recall “Q1”], that had a software tester that I nicknamed “Professional Idiot” – if the input field asked for a number in range 1-10, he would enter “AbCd” or similar just to verify the error checking routines –

    1. Remember the OSX bug that allowed a user to create a new root password? Calling out the personal faults of your testers is a sure sign of a poor developer, you should cherish their abilities instead of denigrating them, I bet he saved your butt more than once.

      1. I meant the term as a compliment to him, and was often amazed by the strange results of his non-standard input keystrokes – he found many a bug, especially when the input field could accept numbers with exponents. The programmers alternated between cursing the rewrites his testing necessitated, and laughing at the strange effects of his testing inputs.

        1. I got where the nickname came from, and I was sure it was meant to be a compliment too. It reminds me of the saying, “Build it idiot proof, and somebody will build a better idiot.” And that guy seemed to have been really good at his job at creative QA. Anybody could bang a random combination of letters into a number field, but entering 10E100 to see if a googol breaks the program takes a bit more creativity.

  6. I use a void crash(char *format, …) function, but I also preface every output line with a timestamp, sometimes in microseconds. Very useful extra information, at least for the sort of real-world interfacing stuff I do.

  7. This is why *every bug* users find in software needs to be taken care of. Don’t brush it off as ‘unimportant’. A bug is a bug, if you’re struggling with something you consider a serious bug – take a break and fix the reported ones you’ve deemed ‘unimportant’. Never know when one of those may be the cause of the problem you haven’t been able to fix.

    1. ^^ This. Unfortunately.

      That last (or the one before last) 50-million user compromise at Facebook? Put together by combining three entirely innocuous bugs.

      There has to be some kind of birthday paradox with the number of trivial bugs combining into a serious exploit — the combinations increase exponentially with the number of bugs.

  8. Undefined behavior propagated through bool is one “should never happen” case. For example, a library function like:

    int return_something(bool first_item)
    {
        return array_of_stuff[first_item ? 0 : 1];
    }

    can crash if anyone calls it with an uninitialized bool variable. You might try to do something like:

    int return_something(bool first_item)
    {
        int index = first_item ? 0 : 1;
        assert(index == 0 || index == 1);
        return array_of_stuff[index];
    }

    and the compiler will happily optimize your check away.

      1. Yeah, can’t figure it out. I can’t see how it would crash. It’s just undefined behaviour. The data at the address of the bool might be anything. The compiler might do anything to it, and interpret it in any way it wants. Could be one bit per bool, one byte per bool; it’s determined by platform, CPU, endianness, compiler and optimisation flags.

        And yes, the compiler likely should optimise the assert check away. The integer state is defined as an absolute 0 or 1 by the ternary. It has no reason to be there. If you need a check like that I would be deeply concerned.

        What’s more likely to happen here is that it will work just fine and consistent on your dev machine in debug mode, which typically 0’s or 0xdeadbeef’s the regs, but in production it will break every Wednesday and when you wear red socks. Sometimes a register stuffed with an unexpected value > 0 just isn’t “true”.

        I prefer the examples in the article’s posed corner cases. I’ve encountered these exact issues in production before. So much truth.
        (i’ve also had fun with uninitialised booleans too. So much pain)

  9. “Worst case, we never get there, and instead, we keep on implementing workaround after workaround, spinning in circles, with the next bug just waiting to happen.”

    You just described a group of programmers at my last job. To be fair, management set us up in a sort of battle royal so they were gaming the bug metrics to look more productive.

    Anyone debugging a problem should follow the mantra “follow the data”. A well placed assert can give you a choice where you can break in with a debugger and start following that data. Maybe a library is performing out of spec, maybe someone is fuzzing an interface, or maybe someone started the inevitable descent into spaghetti code by linking to your function from one that was written to a different set of specs or is just plain sloppy.

    1. Things get far more complex than that when you are debugging the worst case response in a web application with tens of thousands of users, or when you are writing a driver for a piece of hardware that violates its specifications once in every 10 million operations.

      You can be the world’s most careful programmer, and your code can pass every imaginable test for quality and standards compliance, and yet the code still just falls over and dies in production. Dynamic behavior is far more difficult to debug because most debugging tools fail to work, and puny human intelligence is far too primitive.

      1. ^This. Human beings can’t cope with complexity. We are effectively monkeys at typewriters and non-trivial software only gets close to bug-free by endless testing over time, so when software has a short life span, it’s almost guaranteed to be of poor quality. How long has Android been a work-in-progress?

  10. There is really no substitute for fancy real-time hardware tracing. High-end ARM dev boards have a connector for such a beast. These suckers cost upward of ten grand but there is literally no other way to figure out what is really going on in there.

    1. I used to work in x86 in an environment with similar capabilities and those are great so long as you can reproduce the issue on the bench. Debugging an issue from the field can be a totally different (and difficult) animal.

        1. “might” is the operative term — had so little luck with RH support services that we decided not to re-up this year. Scrubbing an SoS report of security information is a pain in the butt, also.

  11. I can recall that, when I was writing DOS commercial (and personal) software in the ’80s & early ’90s, I wrote a library that allowed me to watch the values of my variables on a second (monochrome) monitor. I’m pretty sure that, at least at the beginning, I was *literally* the only person in my entire city with dual monitors.

  12. When I worked QA/test for a company that built high-end tape libraries, I was assigned to “corner cases” — things that rarely/shouldn’t/can’t happen in real life. Things like loading tapes in all the slots, the robots, and the drives, just to see how the system would react. The robot spent the next 15 minutes pounding its head into random slots and trying to figure out what was going on.

    Another time, I was tasked to flood the command buffer to see which subsystem would crash first. That time, I broke the pulley off the drive and unspooled about 50 feet of steel cable into the bottom of the robot cabinet – sure didn’t expect that one to happen.

    Unfortunately, while I was good at finding bugs, my bosses thought I should be a coder as well and fix what I found. The company didn’t know that there was a difference between a coder and a tester. Just because I broke something and could report accurately what was happening didn’t mean I had a clue what to change to prevent it from happening.

    1. Me, opposite: having worked 5-ish years in the team designing and coding part of an ATC-VCS (air traffic control voice communication system), I shifted to QA/Testing.
      Why?
      Because as a dev I wasn’t allowed to implement some of safety oriented designs/precautions. So I knew where shit was going to hit the fan.
      I was a successful system integration tester, and flooded the bug report ticketing system so badly that the employer had to buy another one able to handle a higher volume of bug reports ;-)
      Having knowledge of the guts (at source code level, and not only of the code I wrote myself) allowed me to write bug reports with hints down to line numbers, if not with source code fixes too >:-)

  13. Let’s not forget the more insidious errors like Turbo Pascal 7’s infamous ‘Runtime Error 200‘, which is ultimately caused by trying to fit a 32-bit number into a 16-bit variable, but only if the number was too large to fit within 16 bits, which was only partially hardware dependent and so wouldn’t always happen. To make it even more ‘interesting’ (in the Chinese proverb sense), while the CPU properly reports it as a ‘divide overflow’, TP reports it as ‘divide by zero’, as the TP devs (mistakenly) thought that the only time that it could happen was if the user was trying to divide by zero. Imagine the fun of trying to debug THAT little intermittent, misreported gem, especially since TP7 was released in the years before the internet was widely available!

  14. I passed on attempting to learn C; I can count all I ever wrote on one hand. Machine languages such as the 8085’s have 8 software interrupts. Having not filled the 32k of EPROM space, or in my case 128k of bank-switched Flash, all the unused space is filled with 0xFF, which is the last interrupt.
    An errant program might land there, calling a stack dump routine, a DI, HLT.
    At least I have an idea where it came from.

  15. Asserts are great. Until some idiot starts to use asserts for all possible problems that they think shouldn’t happen, or that they don’t want to handle.

    I’m the only one left of a team that once created a roughly half-million lines-of-code Windows software project, and I’m still dealing with some large libraries that were created years ago in Write-Only mode (i.e. nobody knows exactly how they work and they’re too complicated and not important enough to rewrite), and whenever any kind of error happens, the libraries assert (even in release mode) and the program loses all data. And errors that cause this to happen include UDP fragmentation or TCP connection resets caused by the system going to sleep and waking up, and many other situations that shouldn’t even require user intervention.

    ===Jac.

  16. If you have a variable that will never go negative, make sure to set it as unsigned. Then you get 2x the maximum value stored in the word. Same deal if the number in it will always be negative. Set it as unsigned then write the code that reads it to always interpret it as a negative value.

    If you have an absolute upper limit for a variable, make sure the code either caps the maximum value that gets written to the variable, or ignores values higher than the maximum. If your code is allowed to put too large of a value in a variable, then other code reading it should not just read it as the maximum, it should reduce the value to the maximum.

    Want to see some examples of mishandling maximum values in variables? Download the official Mattel Skip-Bo game for Android. The player is supposed to have a maximum of five cards in hand, but sometimes it deals six. The goal involves building up four piles from 1 to 12 repeatedly in order to be the first to deplete their deck, which can be five to thirty cards. But sometimes the game fails to remove a pile once it reaches 12 cards. It leaves the pile sitting there until someone plays a 1 on it – sometimes. I’ve seen it stack up to 3 atop 12 before the game catches it and removes the pile. Unfortunately it clears the *whole pile* including cards played on top of the 12.

    I see obvious bad use of variables and race conditions that are allowing uncapped variables to exceed their maximums of 5 and 12. *Somewhere* in the game code are the 5 and 12 limits, but *somewhere else* in the code is at least one operation that sometimes blocks the code that checks the hand and piles variables.

    Since Magmic (the company that produces the game for Mattel) also makes other Android card games, I suspect they’ve re-used some code from games that allow more cards in the players hand and discard piles, and didn’t make adjustments to make damn certain the counts of 5 and 12 could never be exceeded. I sent them an e-mail with my ideas about what might be causing the game’s bugs. They thanked me, said I was the first to offer any suggestions. They know the game has those bugs. Three updates later they’re still not fixed. Was playing it today and had a pile stack up to 12 and allowed to sit instead of being cleared.

    Skip-Bo is based on a game called Spite and Malice, which can be played with 2 or more decks of regular cards, using Kings and Jokers as the wild cards. If you want to play with decks of 20 or 30 cards per player with 4 or more players, would be best to use 3 or 4 decks.

    Skip-Bo just changed the Ace through Queen to be 1 through 12 and made the others “Skip-Bo” cards.

    1. Must disagree with the part about “using an unsigned for negative numbers” … yeah, it sounds nice but it is not intuitive, and it will involve mixed signed + unsigned arithmetic, and signed/unsigned comparisons… go figure it out when something goes wrong! Think of the other programmers on the team using it properly … maybe 5 or 10 years after the original design? No thanks.

  17. Wow . I am amazed. Can’t thank you enough for this. I completely am happy shock. Maybe like you said. Computerize isn’t for me. I sold my lap top to a friend not knowing he would to something like this to me. He brough it back saying it didn’t work, thank u very much and I’m glad he brought it back or else I wouldn’t be able to fix my life again. Thank you again brother. May GOD bless you today and every day.

  18. I had a similar structure crashing today, the funny thing was that half of the code accessing the time_t thought it was 32 bits and the other half thought it was 64 bits… the values after the time_t were almost never correct…

  19. After an update to Warcraft 3 that was pushed out on Jan 22 2019, if I play a particular map (Enfo’s FFB), it will sometimes set the default name for computer players to FFB’s minions in everything (even other maps, both in single player and online in battlenet!) after that game ends. I had never heard of such a thing happening in any other game in my life and I’m not an experienced enough programmer to be sure of why that’s happening. I guess a variable that isn’t supposed to be writeable is somehow being written to. Quitting and reopening Warcraft 3 fixes it until the next time it happens and it doesn’t harm anything other than looking out of place.
