Ask Hackaday: Is There A Legit Use For Operator Precedence?

Computing is really all about order. If you can take data, apply an operation to it, and get the same result every single time, then you have a stable and reliable computing system.

So it makes total sense that there is Operator Precedence. This is also called Order of Operations, and it dictates which computations will be performed first, and which will be performed last. To get the same results every time, you must perform addition, multiplication, power functions, bitwise math, and all other calculations in a codified order.

The question I’ve had on my mind lately is, does this matter to us or just the compiler?

Which Would You Do?

As I was banging out some microcontroller code last weekend, I started looking at the number of parentheses I was using. See, I like total control over what this computer etched into a shard of glass is doing. So I don’t depend on precedence, but this made me wonder if I’m doing it wrong. So I asked on Twitter which of the following lines of code people would use:

a |= 1 << 1 + c;
a |= 1 << (1 + c);

It is not surprising that everyone grabbed a torch and pitchfork in support of choosing parens at every opportunity. The consensus was that the next person reading the code will have a much easier time and will understand your intent. And that next person is more often than not you — how embarrassing if you can’t work out your own intent. Do yourself a favor and use parentheses!

He’s Unhappy with Precedence, and He Wrote the Language!

In C it’s easy to understand how this is all built into the syntax. The equals sign assigns a value to a variable, so equals needs to have really low precedence otherwise that assignment will happen before the operations are performed. And function calls must happen before any other operations so there is actually data available to operate upon. It works. I rely on precedence in these two cases and don’t (necessarily) place everything to the right of an equals sign in parentheses. But not every operator is this easy to rely upon.

The funny thing is that these rules didn’t spring into existence at the start of computer languages, but were developed alongside them. The footnotes of Wikipedia’s order of operations article yield an interesting tidbit from Dennis Ritchie, creator of the C language, in his book The Development of the C Language:

Today, it seems that it would have been preferable to move the relative precedences of & and ==, and thereby simplify a common C idiom: to test a masked value against another value, one must write if ( (a & mask) == b ) ... where the inner parentheses are required but easily forgotten.

In addition to a good chuckle, this article also taught me a new term: infelicity.

Is There a Legit Use for Operator Precedence?

So the big question remains. Why do we teach operator precedence in computer science if popular opinion is almost universally against relying upon it? Is there more value than just base understanding?

The most legitimate use for trusting the compiler to follow the same invisible rules you have in your head is the International Obfuscated C Code Contest which is currently open for entries. That contest’s goal is to produce the hardest to read code and “To show the importance of programming style, in an ironic way.”

But I wonder if there are other interesting uses like writing polyglot code, or compiler specific code. Let us know in the comments below.

111 thoughts on “Ask Hackaday: Is There A Legit Use For Operator Precedence?”

  1. Language grammars have to define some kind of precedence. It also makes sense to explicitly specify that precedence in documentation, so that other compilers of the same language will generate the same output. A student on a CS track (as opposed to a software engineering track) should certainly know how to define a grammar and build a compiler.

    Those of us from more of a software engineering background at least need to know about it, even if we haven’t memorized all the precedence tables for our chosen language. If nothing else, we’ll need to debug code that wasn’t as explicit in its use of parens.

    Learning about precedence is here to stay, unless an RPN-based language takes over the world. I would personally like to see that, but I’m weird that way.

    1. Couldn’t you mutate lisp to and from an rpn-based format? Or would that require all lisp functions to have a fixed number of arguments?

      I am not a lisp fan, but I believe its operators are prefix, like ‘+ a b’ rather than infix, but prefix to postfix aka rpn should be feasible…

    2. Decades ago I wrote a compiler for the Amiga. It had no precedence even in assignment.

      A + B -> C * 8 -> D for example

      I found it very easy to write for, but it required a little more thought to compose the statements.

      P.S. On the Amiga there was a single “->” character, so I was not using compound symbols.

    3. The old 1980’s symbolic math package MuMath, which was just a self-hosted layer called MuSimp on top of the underlying MuLisp, actually exposed and let you set the precedence for all of its operators. Each operator had a “property list” number for Left Binding Power (“LBP”) and Right Binding Power (“RBP”), as well as special INFIX and PREFIX settings. Higher numbers bound first. You could examine the MuMath source, change them at will, or create new operators.

      (In practice, the common operators of + – / * were already assigned, and changing them would turn the current system into goulash, so you would have to write an entire extra language layer on top of the current one. Not much room in 64K CP/M.)

      See section 13-25 and 13-29 of the following:

      1. Agreed–and I’m curious as to why no language that I know of works like that. No time to pursue it right now, but since even primitive interpreters and compilers supported precedence, I wonder if requiring parens would actually make the grammar more complex.

          1. I meant a language that had no precedence and forced you to use parens to specify the order of operations, rather than an implied left-to-right order.

            I know of Smalltalk (also MUMPS and APL which others have mentioned).

            C-family: a * x * x + b * x + c // multiplication has precedence over addition
            Smalltalk: a * x * x + (b * x) + c “needs parens to make the multiplication happen first”

            In the hypothetical language: ((a * (x * x)) + (b * x)) + c
            As [Lee] suggested, not having enough parens would cause an error.

      1. No, you are confusing “fast to type into a keyboard” with “speed to program”.

        Einstein said make it as simple as possible, but no simpler. It is getting into “too simple” when you cut out the brackets and parentheses to sacrifice elementary readability for typing speed (not even coding speed). This is the coding equivalent of being nastily passive-aggressive towards future debuggers.

        Just because you typed it quickly into the computer, doesn’t mean the computer is programmed, and it certainly doesn’t mean the code is easily debuggable.

        If I sprinkle catnip onto my keyboard, then point a laser pointer at it, then my cats will be the world’s fastest typists, but I wouldn’t want to debug their work.

        1. I think you misunderstood Shannon’s point. Shannon is saying that debugging IS part of programming, and that making debugging easier (by including explicit grouping) actually reduces the total time, even if it takes you a few extra milliseconds to type the grouping symbols. That’s the reason for his ‘non-trivial’ caveat. Leaving out grouping symbols only saves total time in very trivial cases. In other words, he’s saying the same thing you did.

      2. That’s the point. Code relying on precedence does not scale. It does not scale across the number of lines, the number of programmers, the number of years to support, and the number of languages good code is ported to. All it provides is faster code line entry.

  2. @frezik is correct about the necessity of precedence. There must be rules.

    That being said, after programming C for some 25 years, I put EVERYTHING in parentheses. Operation precedence may be unambiguous, but programmers’ understanding of it is not. It’s necessary to remember that you’re not just communicating with a compiler, but also human programmers (including your future self).

    1. There must be rules, yes. It isn’t necessary that the rules take the form of operator precedence. As TZ pointed out, APL evaluates everything from right to left. (That has implications, such as 2 + 3 * 5 != 3 * 5 + 2.) Another possibility is that grouping is mandatory and any line of code with more than one operator which doesn’t include grouping is a syntax error. But I fully agree with you that just because precedence rules exist doesn’t mean it’s a good idea to depend on them rather than on explicit grouping.

      1. Well but most people also learn basic arithmetic in school, and languages that don’t respect those rules are a royal pain in the butt to use unless they make the evaluation order explicit by design (e.g. Lisp). The operator precedence rules in most programming languages are derived from there and are not an arbitrary choice of the language designer. The compiler doesn’t really care which way the evaluation order is (as long as it is defined unambiguously) but the human behind the keyboard certainly does.

        So I would caution against getting too creative with this, because fundamentally any computer code is mathematics – and people work with certain assumptions about how things work there (e.g. that multiplication goes before addition). Anything deviating too far from the commonly accepted rules only adds extra mental effort for the programmer.

        And re explicit grouping – yes and no. As long as only standard operators (i.e. no bitwise ops) are used, there is no reason to really do that. It only clutters the code with visual noise that makes the expression that much harder to understand. With bitwise operators, comparisons and similar, it depends – usually it is better to parenthesize to make the intentions explicit (or even better – don’t write long complex expressions but split them up).

        1. Exactly. In doing math without a computer, one must know the order of operations and how parentheses are used to make exceptions to it for things like forcing adding of some numbers to be done before multiplication.

          They’re the rules everyone is taught, and many learn, early in school. Creating a new way with a computer programming language just because you can is only going to make the language harder to learn and will confuse anyone attempting to read the code if they’re not familiar with the language’s different ways.

          Good old BASIC is an ideal tool for teaching the concepts of algebra because it shows there’s real and practical uses for algebra – and actively shows how the math works.

  3. APL is a programming language that doesn’t have operator precedence. Simply because that language has so many operators that no one would be able to remember their precedence. Evaluation is always right to left.

  4. I think it goes back to mathematical notation.
    y = m * x + b. a*x*x + b*x + c.
    So if you see 2+3*5, it will mean do the multiply first.
    TI had algebraic calculators with precedence and parenthesis, HP used RPN. You can do everything with RPN – and there’s the Forth language. 5 3 * 2 +
    Since mathematical notation has the idea of precedence, it naturally carried over into FORTRAN and C.

    1. That’s what I came to comment. Order of precedence precedes computers. Every polynomial equation implicitly sets the multiplications before the additions/subtractions. So obviously it has meaning to people. What made things difficult was when the notation of mathematics was extended beyond combinations of numbers to combinations of numerical expressions with logical ones. I think C got this wrong. I should be able to say “if a < b and b < c” without parentheses, with the “and” being done last, but that’s not how C sees things. I can’t count the number of times I’ve gotten burned by this.

      Hm. I just said "precedence precedes" and "C sees" in the same paragraph.

      1. It is true that algebraic notation forced precedence rules (after some long debate). The same rules have much more recently been adopted in some places when teaching arithmetic.

        I was wondering about order precedence versus operator precedence and wonder if either one is sufficient. If you included algebraic expressions, then you need to address cases where A*B does not equal B*A, both mathematically, and numerically as in finite precision of computer calculations.

        On the other hand, this topic has been flogged to death by compiler writing teams for half a century.

        1. It has definitely been flogged to death. Still, I’ll give it one more lash: when I’m writing an “if” clause, it always bothers me that I have to write it as “if (((a > 0) && (b > a)) || (a < -1))” rather than “if ((a > 0 && b > a) || a < -1)”. This is because I believe the numeric operators should ALWAYS have precedence over the logical ones. Why? Because you never ever need to do “a && b” when a and b are numerical. You CAN, but there’s never a good reason to. The only things that should have a lower precedence than logical operations are parentheses and assignment. Please continue the flagellation if you have a good contradictory example.

      2. I’m not sure why this wouldn’t be the case, since the relational operators <, >, <=, >= have higher precedence than &&. So the && will be done last in C, provided that the first test is true. C doesn’t process further than it needs to to get the result, so if you assume that both sides of the && will be evaluated, then that will cause problems.

        So, “if (a < b && b < c)” would be evaluated the same as “if ((a < b) && (b < c))”. If a < b is false, then b < c will never be evaluated.

    2. Usually when I see precedence understanding issues it’s because of a ternary jammed in somewhere.
      var result = 3 + x == 0 ? 1 : 2 * 8;
      Which is it:
      A) 3 + ((x == 0) ? 1 : 2) * 8
      B) (((3 + x) == 0) ? 1 : 2) * 8
      C) 3 + ((x == 0) ? 1 : (2 * 8))
      D) ((3 + x) == 0) ? 1 : (2 * 8)
      IMO, always include parentheses

    1. Most people would say ‘No’ because they learned the accepted order in math class. There’s nothing sacred about that order. If you (and everyone else) had been taught from childhood that addition/subtraction had precedence over multiplication/division, math would still work just fine and you’d find our current system to be counterintuitive and puzzling.

      1. Just had this discussion today. Once you know algebraic notation, it seems natural and simple. But most people forget that they spent between six and eight years learning it, prior to actually learning algebra, which is another couple years before getting to calculus, statistics, and other `higher’ maths.

        Hindsight being 20-20, there are other notations that are more consistent, and just as easy to use– the notation we use grew to be what it is because it is useful, eliminating the need for lots of extra parens and conscious analysis of expression structure, despite the inconsistencies (left vs right associativity, multiple forms for some operations, etc). Compare ((a*b-c*d)^(1/2))/((a*c+d*d)^(1/2)) to \frac{\sqrt{ab-cd}}{\sqrt{ac+bd}}. Which is easier to grok, knowing the system?

        There are many other equivalent systems, some of which are more consistent and easier to learn, but the one we have works quite well.

        The question becomes: should the languages we use for programming reflect, at least to a reasonable approximation, our prior structures?

        1. “The question becomes: should the languages we use for programming reflect, at least to a reasonable approximation, our prior structures?”

          The answer is simply yes, you gave a precise example in your first paragraph. Imagine if calculus and statistics followed completely different notations, it would slow down the learning process and create problems for people trying to remember which notation to use for which specialty. The thing about the system that is currently taught is that it is the system that is easiest to convey over time to developing minds. The vast majority of those other systems are actually based on the original system and solve problems that our adult minds can understand and comprehend but if you would teach those to developing minds you would most likely lose a lot of the kids along the way.

          Also your example equation structures don’t work for your example as most children are taught the square root symbol before they are taught that the square root is a fractional exponent. They are also taught that if the variables are declared then you do not need the multiplication symbols, but those variables should be explicitly declared, as in your second formula AB could be one variable instead of A*B. Finally, and I know I am being pedantic, but those equations are definitely not the same… in the first one the denominator has the term a*c+d*d under the square root sign and in the second it is ac+bd.

      2. If you are young enough. In school I never had arithmetic order “precedence”. In fact I did not know it was a thing until I started teaching math and physics at 50. I was surprised that conventional computer language ideas have percolated back down to grade school and changed the way arithmetic is taught. It was also textbook dependent in the 20th century.

        1. “In school I never had arithmetic order “precedence”.” Are you sure about that? What did ax² + bx + c = 0 mean to you, then? For several centuries, this has meant ((a * (x²) + (b * x) + c)) = 0, very distinctly different from ((((a * x)²) + b) * x) + c = 0. You just learned that so long ago, you didn’t remember learning it. I’m not saying you’re old, I’m just saying it becomes reflexive after a while. I learned it a number of decades ago, long enough that exposure to computer programming wasn’t a pervasive thing, but I do remember learning it.

          1. Okay, well, it never comes up until you’re writing equations, which is introduced in algebra, so yeah, it just doesn’t come up in basic arithmetic. How could it be? Each operation is presented as a separate problem. Did you manage to get all the way through high school without taking algebra?

          2. Okay, Comedicles, that’s a good example. So if you learned how to solve that, someone must have defined the order of precedence for you, even though of course they didn’t use that term.

  5. The day you mistype
    if ( data & mask == 1)
    as
    if ( data && mask = 1)
    you will pray for the language definition to reject unparenthesized code.
    Of course that would not prevent errors such as
    if ( (data && mask) == 1)
    unless the language is strongly typed.
    I trust a safe language should never do anything you haven’t specifically written, including strong typing and no hidden type casts.
    Programming speed is _IMHO_ far less important than reliability and maintainability.

  6. Shame on you!!! Your poor algebra teacher is surely turning over in their grave (if they are deceased) <– condition is precedent upon action!!! LOL Precedence is natural to humans – you crawl before you walk etc. So if you want to change the natural order of operations you must have a way of indicating your new order of operations. Using parentheses to denote your new order of operations is extremely helpful during code debugging, since it is easier to work from innermost operations defined by parentheses to outermost results.
    just my $0.02

  7. Remember that C was developed when screens only had a precious few characters of width. 80 x 25. Every character counted. The norm was to eliminate all superfluous characters, so it was normal to use precedence instead of overusing parens. We have the luxury of multiple monitors and small fonts today, so we have the luxury to be more verbose with our code.

    1. Your timeline about the development of C is off a bit. C was developed on systems which used 45 baud teletypes, not screens. Elimination of superfluous characters was an even bigger deal.

        1. Yes, which they unconsciously do anyway. You have to mentally stack operators in infix. If I say please calculate 27 * and stop to yawn, what are you going to do? Just like infix calculators. They do nothing when you enter the operator and wait till you hit “ENTER”, a key you don’t need with RPN.

          1. “hit “ENTER”, a key you don’t need with RPN”. So, what key do you hit after the first number? RPN uses “enter”, infix uses “=” at the end. Both serve the same function.

          2. Dang. I meant =. [4 enter 3 times] versus [4 x 3 =]. In the Forth case ‘enter’ is an action. It pushes 4 to TOS.

            [4 x (3 + 2) =] can be done a couple of ways in Forth, like [4 enter 3 enter 2 + x]. Or [3 enter 2 + 4 x], which is only 6 keys versus 8 for infix (and the way a good compiler does it). And the “advantages” grow with complexity, but it takes a lot of practice.

            There is a remarkable book from the peak times for HP calculators, and just before personal computers, by John A. Ball called “Algorithms for RPN Calculators” that has everything in it, with analysis to minimize keystrokes and even a section on the ideal calculator. I wish there had been room for derivation and description of the algorithms mathematically. I don’t think a lot of people bought it because of the timing. It is a heck of a resource.

  8. Personally, I use parentheses where the order is not immediately clear. e.g. With pretty much all bitwise or bit shift operations that are more complex than two variables / constants, because I don’t expect that one has a higher precedence than another; or when combining boolean And and Or conditions (e.g. if ((a && b) || c) { doSomething(); }). For basic arithmetic they’re not needed, nor with comparisons, even when there are multiple comparisons and/or multiple operations on one or both sides of the comparison. When things get long I’ll start adding them, or even better, adding line breaks or temporary variables.

  9. C was developed at Bell Labs who could afford whatever equipment they wanted. In this case, it was on VT-100 CRTs which displayed more than 80 columns. As for precedence, I hope that all of you live long enough to use life support equipment where the programmer disdained precedence and made a mistake!

  10. The confusion in coding comes from its single-line use. Maths has subscripts and superscripts.

    Your example, 1 << ( 1 + c ), is actually

    1 subscript [base] * base superscript [1 + c] where base is 2

    Obviously 1 + c has precedence as it's all superscript.

    1. “Parenthesize defensively”
      If only that were always possible. The other week I was trying to do something ever-so-slightly complicated in OpenSCAD, and discovered that it doesn’t DO parentheses in expressions! I had to use intermediate variables for things that really shouldn’t need them.

      1. Really? I just tried and

        ((1+2)*3+1)*2; result 20
        (1+2)*3+1*2; result 11
        (1+2*3+1)*2; result 16
        1+2*3+1*2; result 9

        in assigning a variable, and as a dimension of a cube.

        with version 2015.03-3

  11. Regarding ()’s and readability, is there a way (say in Linux World and unicode) to use a dot for multiplication instead of a frickin asterisk? Maybe some auto-substitution? Or a setting in an IDE? Think of the children, and the generations to come.

        1. You could either still type ‘*’ and have the IDE display it as the proper character (U+22C5 DOT OPERATOR), or maybe type a combo like Alt-.

          Code stored in text files is probably not going away any time soon, but it doesn’t need to be presented to humans that way. That is also my answer to the great tabs versus spaces argument–the IDE should be able to show the user the code in their preferred manner, including things like showing ‘*’ as a dot when it represents the multiplication operator, while saving it in the manner understood by the compiler and other tools. If the user wants the language’s exponentiation operator hidden and the right-hand side displayed in superscript, why not? If someone is unhappy with Python’s lack of braces, the IDE could display braces where they would be in a language that had them.

  12. Operator precedence is only half the problem. The other half is operator associativity.

    1+2+3+4 is associative. It evaluates to 10 no matter which ‘+’ operation you evaluate first.

    1-2-3-4 is not associative. It can evaluate to -8 as (((1-2)-3)-4) (left-associative), or -2 as (1-(2-(3-4))) (right-associative). It can also equal 0 as (1-2)-(3-4).

    A compiler or interpreter has to choose an order of operations when it evaluates expressions, and having 1-2-3-4 return different values in different contexts is embarrassing. Operator associativity and precedence are side effects of the effort to make the same sequence of characters evaluate the same way every time.

    In practical terms, the whole subject can be reduced to saying, “let’s spend decades evolving software that’s so hard to design, understand, modify, or debug that we have to use other software tools just to write the code. Then let’s elaborate that notation so much that the spec to generate a compiler becomes an impenetrable mudball of its own. As payoff for all that effort, programmers will be able to use a form of notation they’ll screw up on a regular basis and waste countless hours trying to fix. That’s *much* easier than getting programmers to use RPN — whose interpreter can be assigned as homework for first year CompSci students — because they consider that ‘unfamiliar and hard’.”

    1. Associativity only comes up when your compiler is trying to optimize what you told it to do. There are plenty of cases where compilers optimize things based on the assumption that mathematical operations return exact results, which of course they can’t, as soon as you start using floating point arithmetic.

    2. Every computer language springs to life when somebody decides that existing languages don’t really express his thought process, or what he imagines that process ought to be. There are two major approaches to translating human thought into computer procedures: 1) trying to force computers to work more like humans think, and 2) trying to force humans to think more like computers work. Forth is what you get with the latter, BASIC with the former, and C when the developer just can’t make up their mind. Does this cause holy wars? Yes, mike stone, yes it does.

  13. Smalltalk prioritizes unary message patterns over binary patterns, and keyword messages last. Parens are used to specify order of ops otherwise. There are no “operators” as such in Smalltalk; any method will do. Instead of asking whether or not we “need” precedence (which is dependent on grammar, and we do need grammar) you might ask: what is an operator, and do we need a special noun to describe something that’s just a function/method?

    In Smalltalk, the following evaluates to -3.
    1 - 2 * 3

    As long as we know that what we really want to write here is “1 - (2 * 3)” we’re fine. This will trip up newcomers, but ultimately that’s a matter of learning the language. My understanding is that Smalltalk’s designers were influenced by APL’s lack of complex precedence rules, and thought that it was better for a programmer to explicitly state intent than to rely on clever grammar rules to do the thinking for them.

    My own two cents is that I’d prefer to be w/o complex precedence rules, because it reduces the complexity of the grammar to be rid of them. Less complicated grammar means less cognitive load on the learner. The only challenge is ingrained assumptions about the ordering of terms in algebraic notation.

    1. “you might ask: what is an operator,”
      In historical C, that’s simple: an operator is a primitive — it represents one or a few opcodes that do the corresponding operation. It’s completely different from a function call, which does just what it sounds like — pushes arguments on the stack and jumps to the function. In a systems language which is basically just a portable shorthand for assembler, that distinction is vital. There’s places in systems programming where you don’t want to be making function calls at all, and places where you can, but need to be mindful of the performance hit. (Of course, things like inline functions, whether user-specified or generated by compiler optimization, blurred this initially clear distinction.)

      “and do we need a special noun to describe something that’s just a function/method?”
      But if an operator really is just a function/method, that means you’re using a newer language where C’s distinction doesn’t make sense.
      It’s incredibly unfortunate that so many language designers unthinkingly copied C’s syntax without understanding why it was made that way. If you’re going to write a high-level language where everything is burdened with function calls or equivalent overhead, your syntax really should make sense with that, instead of keeping a now-arbitrary division of functions into “functions” with explicit argument lists and “operators” with implicit left- and right- arguments.

  14. A pretty stupid and poorly thought out article. Yes we need this and there are many ways to do it, it is all about design choices by whoever writes the language. A ripe field for argument and disagreement. I guess the idea of the article was to toss the ball in the air and watch the fun in the comments.

  15. Watch out for which language, and if optimization is on or off.
    I had issues way back with a C compiler and an IEEE engineering package.

    floating point representation

    A = B*C – B*C
    A = (B*C) – (B*C)

    One would expect the same result.
    The first case leaves one of the B*C results in the CPU and operates against the other that was thrown out into a register. If the CPU has extra precision that the register doesn’t, you may not get zero, but a tiny amount, but only for some values of B and C.
    In the second case, using the () can force each B*C result out into a register before performing the subtraction. Always a zero. Unless optimization is on, then it doesn’t force the last B*C result out into a register, and you may get a tiny result instead of zero.

    Isn’t variable behaviour fun!

    Of course a truly smart compiler does A = 0.0

    Several studies have shown that over the life of most code, significantly more will be spent maintaining it than on the original writing. So $ wise for total cost, it’s worth the cost of some extra time for readability, both for original debugging and for the usually inevitable maintenance down the line. The problem is where to draw the line.

    1. This isn’t a problem with languages or compilers; it’s a problem with numerical calculation in general. I use floating point arithmetic only when really necessary, and then only when I can guarantee that rounding errors aren’t going to cause trouble. Don’t ask me why.

      1. Yes it’s a general problem, but it’s a compiler too when the result can be different for code that looks like it should produce identical results.

        Might you have been checking for a zero instead of zero +/- a tolerance?

  16. This makes me wonder what Microsoft’s programmers did (or didn’t do) for many years with Calc.exe, from its first appearance in Windows all the way through early versions of Windows 10. You could get the square root of an even number whose square root is also an even number, then poke + then = *and get an incorrect result*, because while the *display* was correct, the number calculated internally was slightly wrong. For example enter 4 then square root to get “2”, then + then =. The result will NOT be 4 in a version of calc.exe with the bug.

    A version of Calc.exe with that bug fixed was included a while ago in some update to Windows 10. What was done to fix it? Are there any other long standing calc.exe bugs that have recently been fixed?

    Dunno if it was also updated in 8.x. Most likely has not been fixed for Windows 7 and I’m pretty certain they haven’t bothered to produce an updated version for Vista. Absolutely certain Microsoft will never be releasing a bugfixed calc for anything older – but it would be cool if MS would drop a “Bug Reduced Calc.exe Pack for Windows 1.0 through 7”.

    1. Your answer is in these two blog posts and the handful of things linked in them.
      TL;DR The calculator used to use IEEE floating point, but people complained, so they rewrote the engine to do arbitrary-precision arithmetic for basic operations. No one really noticed, and instead people started complaining that it generated small errors in sqrt. The team finally extended the engine to handle perfect roots, so that error is gone as of early 2018.

  17. The Forth fans have already spoken, but I’ll add in my heresy here: parentheses are a byproduct of the language / algebraic system used. They are not a necessary evil, they’re just evil. :)

    Here’s how it works in Forth. Get all the things you need to operate on together, then do the operation. And if you follow this rule, you will never need parentheses or operator precedence. Things simply go in the order they’re written in.

    3 * (1 + 4) only needs parentheses b/c we’ve defined “*” as being a look-ahead operator: X * Y. Because of the look ahead, you need to make sure that all the “aheads” happen at the right times and you need to argue about whether the “+” happens before or after the “*”.

    Contrast with everything else you do in life. For instance cooking. Get your ingredients together, then cook. Or carpentry. Gather up your tools and raw materials, plan, and then start sawing.

    Or as I like to say: 1 4 + 3 * . If you want to add two numbers together, throw them both on the stack first, then add. Multiply? Get your multiplicands (?) together first, then multiply.

    This is also why LISP is parenthesis hell: everything is delayed in LISP. It’s like the mirror image of Forth in that respect.

    Final nail in the parenthesis / op-precedence coffin? The ALU/CPU doesn’t have any of that nonsense going on. Get your values set up in registers, and then call the operation. Forth and RPN just mimic this (logical? necessary?) temporal ordering, and remove a lot of hoop-jumping by doing so.

    Yes, I know you were taught math the counterintuitive way, where “+” could somehow look into the future. Sorry about that. It’s really pathological — there’s this system of writing down numerical operations, but it doesn’t go in the order (left-right) that you write things down. Instead, there’s this set of arbitrary conventions about the ordering, coupled with some extra symbols whose sole reason for existence is overriding that ordering when it’s not what you meant.

    Wake up, sheeple!!

    (I’m only half joking. It’s really worth the exercise to wrap your head seriously around a different worldview, if only just to recognize that the one you grew up with as “normal” is in fact truly bizarre.)

  18. MS Excel has an interesting bug with precedence. Try =-2^2 and it will give you 4. It should give -4. I came across the bug when valuing options using the Black Scholes Merton formula. I can imagine that many students and bankers have lost marks and money due to this bug.

  19. i usually just spam parentheses everywhere when im not sure what the precedence is. i know the order of the standard math operations but get to bitwise logic and i don’t have a clue. so that part of the code is a parentheses salad. and different programming languages have different rules so parentheses is good practice if you use a lot of languages.

  20. As others have pointed out, operator precedence is language dependent. The examples given in the article, and in the pull-quote by Ritchie, are mainly about C-like languages (C, C++, Objective C, Java, C#, C–, etc), which inherited the operator structure of C.

    C has an excessive number of operators (48) and a large number of precedence levels (18). The levels of precedence are supposed to make it easier to write readable, compact code. And they do, to a degree. But it’s hard to remember all those operators and their relative levels. Ritchie even admits they got some levels wrong (it is possible that there is no ordering that makes sense all the time; the “natural” binds-tighter relationship could be non-transitive). As such, it’s sensible to defensively add parentheses just to be safe.

    Most of the other languages listed, the ones people say don’t have these problems, generally have far fewer operators and precedence levels. Without that complexity, the result doesn’t require defensive parenthesization to remain readable.

    Order of operations is useful; operators are useful, but C goes overboard in quantity, causing problems.

  21. I’m voting for strict left-to-right precedence.
    That would mean, of course, that in a = b + c;
    the assignment would be done before the addition, and then you might get an unused-value warning.

    How about random precedence?
    Every now and then you would get incomprehensible errors in calculations because you forgot to add one of the 20 sets of parentheses in a simple calculation.

    Believe me, operator precedence is a good thing.
    Part of the reason for the complex precedence rules in C is not for users to exploit them, but for compiler writers, to ensure code portability.

  22. I’m surprised nobody’s mentioned PostScript yet. It is a general purpose, stack oriented programming language using postfix notation. It doesn’t use parentheses. The order of operations will be obvious to those familiar with FORTH or RPN calculators.

    PostScript has some features for describing layout of text on a page, so it has become popular for this purpose. It is rarely used outside of the context of typesetting these days, but it is capable of much more.

    If a language is worth learning when it makes you think a bit differently after you’ve learned it, then I believe PostScript is worth learning.

    It avoids the parentheses issue. Parentheses and order of operations issues only happen with infix notation.

  23. Interesting post and an interesting question posed. The only reason that I can think of to rely on operator precedence is to avoid the “tyranny of the parentheses”.

    We want concise code that is easy to read. That usually means that certain operations should be written as a single line of code:

    y = m*x + b;

    This is nice and easy to read, and looks almost exactly like the textbook equation.
    If we have no precedence beyond left-to-right evaluation, we would have to write it at least as:

    y = (m*x + b);

    But if there is no precedence at all, it needs to become:

    y = ((m*x) + b);

    Gets a little more opaque, but you get the idea. Those parentheses start to stack up, and unlike written math, most software languages have just one form of parentheses. You can’t use a mixture of round, square, and curly braces to improve readability!

    In a world of no precedence, the only way to avoid parenthetical explosion in anything beyond a trivial operation is to do it over multiple lines:

    y = (m*x);
    y = (y + b);

    I don’t know about you, but that is a lot harder for me to read than the very first line.

    1. It’s always possible to get away from the “tyranny of parentheses” by using intermediate variables, but then you have the tyranny of intermediate variables.
      Two things:
      1) with strict left-to-right, there would never be any “how is the compiler going to do this” question, but it would be very easy to accidentally use the way we write equations by hand and get it wrong. Which I think is why the languages that define an order of precedence were designed that way.
      2) most text editors these days have parentheses matching highlighting of some sort, so even with dozens-deep sets of parentheses, you can verify you did what you intended.

    2. I think you are making things overly complicated.

      Adding redundant parentheses doesn’t change anything unless you have some unstated language definition in mind; redundant parentheses are simply stripped, so:
      y =(m*x + b) is equivalent to y =m*x + b
      y =((m*x) + b) is equivalent to y =(m*x) + b
      y =(m*x); y =(y+b) is equivalent to y =m*x; y =y+b

      If parentheses are supported, operator precedence and evaluation order don’t matter for y = (m*x) + b; it will be treated as y’ = m*x; y = y’ + b internally.

      TL;DR y = (m*x) + b is enough.

      1. In the post he specifically talks about assignment having the lowest precedence. If you don’t assume any operator precedence, you have to assume that you don’t know if assignment will trump some other operator. In that case, you would need parentheses to force the order of operations you intend. I realize that if you don’t assume that assignment is the lowest precedence, some statements start to have no effect, but I was trying to keep with the theme of the post.

  24. Mathematically, order of precedence/operations merely allows the use of shorthand marks for a series of additions; every operation breaks down to that. The use of these conventions helps ensure that the results will be consistent and accurate regardless of who uses them. Applying this thinking to computer programming, all commands break down to a series of elementary instructions. Also, given that each computer can have a completely different instruction set, consistency and accuracy can suffer if a standard is not in use. Seeing as how it would already add too much to the compiler/interpreter, parentheses are used to help reduce its size, since parentheses can easily be used to override precedence without taking up much memory.
