Creating Black Holes: Division By Zero In Practice

Dividing by zero — the fundamental no-can-do of arithmetic. It's surrounded by a certain air of mystery, and it's a constant source of internet humor, whether that involves exploding microcontrollers, the collapse of the universe, or crashing your own world by having Siri tell you that you have no friends.

It’s also one of the few things gcc will warn about by default, which sparked a rather lively discussion with interesting insights when I recently wrote about compiler warnings. And if you’re running a modern operating system, it might even send your program a signal that something’s gone wrong and let you handle it in your code. Dividing by zero is more than theoretical, and it serves as a great introduction to signals, so let’s have a closer look at it.
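To give an idea of what that looks like in practice, here's a minimal sketch of my own (not code from the article) that installs a handler for SIGFPE, the signal Linux delivers on an integer division by zero on x86. A literal `1 / 0` would instead trip gcc's default -Wdiv-by-zero warning at compile time, which is why the divisor is hidden behind a volatile variable here.

```c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Handler for SIGFPE. Returning from it would restart the faulting
 * instruction and fault again, so we print a message and exit instead.
 * Only async-signal-safe functions (like write()) are used here. */
static void fpe_handler(int sig) {
    (void)sig;
    static const char msg[] = "caught SIGFPE: division by zero\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);
    _exit(EXIT_FAILURE);
}

int main(void) {
    signal(SIGFPE, fpe_handler);

    volatile int divisor = 0;   /* volatile keeps gcc from folding the division away */
    int result = 42 / divisor;  /* raises SIGFPE at run time on x86 Linux */

    printf("never reached: %d\n", result);
    return 0;
}
```

Whether the division actually traps is architecture-dependent — x86 raises a hardware exception that the kernel turns into SIGFPE, while some other architectures quietly return a result — so treat this as an illustration rather than portable behavior.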

Chances are, the first time you heard about division back in elementary school, you were taught that dividing by zero is strictly forbidden — and since you obviously didn’t want your teacher to call the cops on you, you obeyed and refrained from it. But as with many other rules in life, the older you get, the less strict they become, and dividing by zero eventually turned from forbidden into simply being impossible and yielding an undefined result.

And indeed, if a = b/0, it would mean in reverse that a×0 = b. If b itself were zero, the equation would hold for every single number there is, making it impossible to define a concrete value for a. And if b were any other value, no number multiplied by zero could ever yield anything non-zero. Once we move into the realm of calculus, we learn that infinity appears to be the answer, but in the end that’s just replacing one abstract, mind-boggling concept with another. And it won’t answer one question: how does all this play out in a processor?
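Floating-point hardware, incidentally, encodes exactly these two cases: IEEE 754 defines a non-zero value divided by zero as infinity, and zero divided by zero as NaN ("not a number"). A quick sketch of my own (not from the article) to see it, before we get to what integer division does:

```c
#include <stdio.h>

int main(void) {
    volatile double zero = 0.0;   /* volatile prevents compile-time evaluation */

    printf("1.0 / 0.0 = %f\n", 1.0 / zero);  /* infinity: prints "inf" */
    printf("0.0 / 0.0 = %f\n", 0.0 / zero);  /* undefined: prints "nan" (or "-nan") */
    return 0;
}
```

Integer division has no such escape hatch, which is where the signal handling shown earlier comes in.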