Code Wrong: Expand Your Mind

The really nice thing about doing something the “wrong” way is that there’s just so much variety! If you’re doing something the right way, the fastest way, or the optimal way, well, there’s just one way. But if you’re going to do it wrong, you’ve got a lot more design room.

Case in point: esoteric programming languages. The variety is stunning. There are languages intended to be unreadable, or to sound like Shakespearean sonnets, or cooking recipes, or hair-rock ballads. Some of the earliest esoteric languages were just jokes: compilations of all of the hassles of “real” programming languages of the time, yet still made to function. Some represent instructions as a grid of colored pixels. Some represent the code in a fashion that’s tantamount to encryption, and the only way to program them is by brute-forcing the code space. Others, including the notorious Brainf*ck, are not half as bad as their rap: Brainf*ck is a very direct implementation of a Turing machine.

So you have a set of languages that are designed to be maximally unlike each other, and unlike traditional programming languages, and yet still able to do the work of instructing a computer to do what you want. And if you squint your eyes just right, and look at as many of them together as you can, what emerges out of this blobby intersection of oddball languages is the essence of computing. Each language tries to be as wrong as possible, so what they have in common can only be the unavoidable core of coding.

While it might be interesting to compare and contrast Java, C++, or Python, nearly every serious programming language has so much in common that it’s just not as instructive. They are all doing it mostly right, and that means that they’re mostly about the human factors. Yawn. To really figure out what’s fundamental to computing, you have to get it wrong.

24 thoughts on “Code Wrong: Expand Your Mind”

  1. My short response to this is: Haskell.

    For years I have worked with standard imperative languages like C, Ruby, Perl, and Python, and they are really all the same, taking or leaving the object-oriented paradigm.

    But learning Haskell has been something entirely different. The way it has jolted my mind out of well-worn paths of thinking has been very refreshing, and has had benefits when I return to the good old imperative languages like Python. Mind you, both Python and Ruby have borrowed things from functional languages, but it is nothing like getting full immersion in something like Haskell.

    I think it was Gauss who said that in his old age he was going to learn a new language (human language) to keep his mind nimble. Or something of the sort.

    1. You may also be interested in building something in Verilog or VHDL. It’s also a different kind of programming, as they are not programming languages but hardware description languages, so you would need a simulator to run them on a CPU. It requires a bit of a different way of thinking, and I think it’s interesting.

      1. You have to be careful going to VHDL from a background in C or C++, because you can write ‘code’ that does exactly what you want in an incredibly inefficient way. It’s very easy to introduce a lengthy logic chain that can’t settle within a single clock cycle.

        1. I was going to mention that. I chose to learn VHDL as it doesn’t look like programming, and I didn’t want that distracting my learning curve. I never intended to use it professionally, so I had a choice. As far as I can make out, Verilog is US and VHDL is everywhere else. SystemVerilog is accepted in more places in the world but is still mostly US. Many old US companies have a lot of IP in VHDL, so they want both skills in their EEs.

          The other reason I chose VHDL is that I’m a very old person who started my career even before (IC) logic, so I am very comfortable visualizing the meaning of what I am writing in VHDL. Verilog is more abstracted away.

          One other thing: Verilog (or SystemVerilog) has a nicer syntax than VHDL. Some aspects of VHDL syntax are a pain in the ass. In a port or generic list, every entry must end in a “;” unless it’s the last entry before the closing parenthesis, which must not have one. So every time you’re adding entries to, or removing entries from, such a list you have to move that damn “;” around.

      2. Doing Verilog or VHDL is indeed “waiting in the wings”. I have the free Xilinx tools and several of the scrap bitcoin mining boards with the Xilinx FPGA on board and every intention of doing something with them.

        Someday. I hope.

        1. I made the biggest mistake with HDL.

          Around the end of ’88 (from memory) I bought a CPLD learning kit from a local electronics store franchise. It was marked “Vantis” and had a MACH II 32 macro and a MACH II 64 macro CPLD.

          I couldn’t get a license key for the included software from Vantis or the supplier, and in the end I just gave up. As a side note, getting a free license key from Xilinx or Altera was almost as much of a pain the last time I did it.

          I gave it another try about 6 years ago, so I lost 28 years and that’s kind of infuriating still today.

          Unfortunately there aren’t many “new entries” into the CPLD / FPGA arena so you have to put up with their stupid practices.

          If you’re stuck on a “one day” thing because of the Xilinx devices in your possession, then throw them out, or at least into the junk box.

          I tell people to either go with a small device that has an open-source toolchain, or go to eBay and buy a simple CPLD or an old small FPGA and start on your HDL (once you have the software key).

          Don’t “wait in the wings” or it will pass you by.

        2. Do it! I spent only two weeks playing with Verilog and it was a lot of fun. What I learned has even helped with some regular programming, in cases where it was useful to implement something the way it would be done in hardware.

  2. We hear the words coding, programming, and language.

    In the early days the “language” was written in favor of the computer and not the human, and the “language” really wasn’t extensible to better suit the human, except perhaps for some well-thought-out “labels” in asm. Comments were your only opportunity to use more human-like “language”. So this was really codifying: taking a problem and describing its solution in predefined symbols (“mnemonics”) rather than in what we would call “language”.

    Many of the next generations gave us a different and more complex set of symbols to use. Like BASIC, for example: the “S” in BASIC is for “Symbolic”. They didn’t, however, give us much more use of the “language” name space, except for some clever use of subroutine or function names in more recent versions of these old “languages”.

    Today, programming, or more importantly the use of “language”, is a compromise between immediately recognizable language (code primitives and predefined functions) and the freedom of using the language “name space” to extend or create the language we want to use.

    For example, some JavaScript. We can create new words to use in the available name space …
    currentArticle.Expand("FULLSCREEN")

    This is a creative way to use name space to generate “language” that has meaning to other programmers as well.

    Of course the same can be done with dotted languages like JavaScript, curly-bracketed languages like your favorite flavor of C, “::” class-structured languages, or object-oriented languages in general.

    How much of the name space you’re free to use varies quite a lot. Some languages have horrible primitives that steal away the name space you want to use differently, while other, first-class-citizen languages like Lua allow you to steal it back and use practically the whole name space.

    Then there are other approaches, like jQuery, that use a preceding character ($) so that the framework has all the available name space to use as it chooses.

    So for me something like HTML and other stringy languages are “code” and things that allow more flexible use of name space are programming languages.

    The one real drawback of liberal use of name space is that some “coder” will royally screw things up with very poor use of the name space with something like DB.connection(“INIT”) that wipes the contents of a database.

    1. Is there something I’m missing about the comment written by ROB? Some irony or structural joke only accessible to real, heavy-duty programmers?

      Please enlighten me – what does “Today, programming, or more importantly the use of ‘language’, is a compromise…” actually mean? What is a “first class citizen language”? What information does “dotted languages like JavaScript” actually provide, unambiguously? What’s a “stringy language”?

      Wouldn’t it be wonderful if we all took the trouble to run our comments through one of the many spelling and grammar inspection packages before posting – at least there might then be a sporting chance that we could understand each other with slightly less low level communication noise.

      1. OK. Before I go through your questions:

        function get_next_prime_number(current_prime_number)
        {
            new_prime_number = …… ;
            return new_prime_number;
        }

        In the above, only the words “function” and “return” are reserved by the language. The programmer has the choice to use the word sequences “get_next_prime_number”, “current_prime_number”, “new_prime_number” to explain what this function does by using human-like language. All of these things, the reserved words and the words chosen by the programmer, fall into what is called the name space. Think of it like website domain names: there can’t be two the same, and there is an authority over which ones you can and can’t have.

        In programming the reserved keywords (and symbols) are the name space that the programmer doesn’t have access to. You can’t write a function called “print()” if this function already exists in the specific computer language and is therefore a reserved keyword, except in some first class citizen languages – more on that later perhaps.

        So when you have a program –

        10 a$ = "Hello"
        20 Print a$
        30 goto 20

        It’s clear what it will do but the only user selected name space is “a$” and this name doesn’t give us any clue what it’s for. If it were “the_string_that_is_to_be_printed$” then it would make more sense by better use of the name space. In a simple program like the above it really doesn’t matter that much but in a program that is thousands of lines long you don’t want to be calling things “a$”. The example above is BASIC and older BASIC languages limited variable names to only two letters or a letter followed by a one digit number.

        So as soon as you’re writing programs that are thousands of lines long, you need to use the name space well, not just so that you can read and understand the code in the future but also so that others can too.

        So in essence you write an abstraction as a way to use more human like language to write the program.

        In the abstraction, the names of the functions, variables, and attributes give a very good idea of what the function will do, and the function itself will handle all the nitty-gritty of actually doing it on the specific platform it’s running on.

        So you might write a function named “DatabaseConnect()” for many different platforms; even though the contents of the function may differ between hardware and software platforms, it is still used in exactly the same way, as an abstraction.
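
        As a small sketch of that kind of platform-independent wrapper, here is one way it could look in Perl (the function name and messages are made up for illustration):

        sub database_connect {                      # same name and usage on every platform ...
            my ($name) = @_;
            if ($^O eq 'MSWin32') {                 # ... but the contents can differ per platform
                return "connected to $name via ODBC";
            }
            return "connected to $name via a socket";
        }
        print database_connect('inventory'), "\n";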

        So you’re using the primitives and reserved functions of the computer language to create a human-readable language as an abstraction that is independent of the underlying computer language.

        Or in other words, you are using the available name space to create a human-language-driven abstraction.

        OK I’ll move on to your questions.

        First-class citizen language – Wikipedia can probably help you there (look for “first-class citizen”). Basically, everything in the language can come down to one form or element of programming: a function can be held in a variable, and another variable can be set to that same function just with an assignment … a = b.
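
        A rough sketch of that idea in Perl (any language with first-class functions would do; the names here are made up for illustration):

        my $square = sub { my ($n) = @_; return $n * $n };  # a function stored in a variable
        my $also   = $square;                               # plain assignment: a = b
        print $also->(4), "\n";                             # prints 16
        sub apply { my ($f, $x) = @_; return $f->($x); }    # functions passed around like any other value
        print apply($square, 5), "\n";                      # prints 25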

        A stringy language is something that defines something else but isn’t a language in itself. Examples: HDLs, XML, HTML, JSON, CSS.

        Dotted languages are usually based on attributes, methods, and events, and use the dot “.” symbol to attach these to objects. They most commonly work within what is called a document object model (DOM). These can be used quite loosely, so you can choose to use attributes that do not exist in the DOM to extend functionality.

        I hope that answers your questions. I think the term “name space” throws people off a bit. If you have worked with quite a number of differently structured languages, then these things become more significant.

    2. I’ve long felt that developers (and recruiters) describing themselves or others as ‘coders’ is a strong shibboleth for not being very good at it. The only coders left are the people doing medical coding, e.g. billing codes for procedures; people writing software programs are software developers, and should have the good sense to use words more carefully.

      I attribute at least some of the explosion of the ‘coder’ usage to the ‘Everyone should learn to code’ trend, and bootcamps: https://twitter.com/codinghorror/status/900932774638403584

      1. I agree, in that context the word “coding” is cringy. As a hiring manager I assume it means you can kludge something together but don’t have an understanding of the underlying architecture and have zero knowledge of the software dev lifecycle or devops. That said, I always talk about “writing code” with my colleagues and friends.

  3. > and that means that they’re mostly about the human factors

    All the commonly used languages are about the underlying architecture, not about human factors. When C first came out it had pre/post increment and decrement, which was a direct reflection of the underlying architecture.

    For most languages, the syntax reflects the architecture and everything else is left for people to implement as libraries. As a result, it’s nearly impossible to port a program from one architecture to another because the libraries are different. Consider as an example writing a program (in C, say) that lists the files in a directory, on Windows or Linux or iOS. The underlying OS function calls are all differently named, with different arguments and aspects. You can get *some* traction using a standard library, but even these have OS-specific quirks.

    Perl was designed by a linguist, not an engineer. Larry Wall started with the notion of “what does the programmer commonly want to do?” and designed the language around that. File I/O, including directory listings, is built into the language syntax. As a result, it is the same across all architectures and perl is now the most portable language there is.

    Perl is also more expressive. As an example of above, Larry noted that human languages rely on context; for example, the word “second” has at least 3 different meanings with no synonyms, and the intended meaning has to be derived from the context. Larry implemented several contexts for the language such as scalar and array, so that the same operation does different things depending on the context. For example, using an array in scalar context implicitly uses the length of the array, not each element of the array.
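
    A minimal sketch of that scalar/list behavior (the file names here are made up for illustration):

    my @files = ('a.c', 'b.c', 'c.c');
    my $count = @files;          # scalar context: the array yields its length, 3
    my @copy  = @files;          # list context: the array yields its elements
    if (@files > 2) {            # the numeric comparison forces scalar context,
        print "several files\n"; # so this tests the array's length, not its contents
    }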

    Saying that computer languages are about human factors is simply not supported by the evidence.

    Computer languages are reflections of the underlying architecture, and the human factors – what the programmer really wants to do – are left as an exercise for the writers of libraries.

    An example: consider the following simple Perl line:

    while (reverse sort )

    “Reverse” is a Perl operator that reverses the elements of a list, “sort” sorts a list, and the brackets (a file glob) return an array of files that match the given pattern. This is transparent without comments if you’re familiar with Perl syntax.

    As an exercise for the reader, consider the lines needed to implement this in C++ without using a library.

    1. while( reverse sort <*.c> )

      Hackaday ate the brackets in the previous example, and using HTML escapes (with ampersands) results in a security token error (?), so here’s the full line with the angle brackets spelled out.

      1. Probably. I’ve encountered that message and it’s easy to fix, but unicode in general is a PITA to work with.

        https://perldoc.perl.org/perlunicode

        You can mark a string as being Unicode, and when it prints it won’t show that message. There are lots of conversion options, such as I/O layers for file handles that will automatically set that flag on returned strings, that sort of thing.
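
        For instance, a minimal sketch of those conversion options (the file name is made up):

        use utf8;                                    # this source file itself is UTF-8
        binmode(STDOUT, ':encoding(UTF-8)');         # encode output so wide characters print cleanly
        open(my $fh, '<:encoding(UTF-8)', 'notes.txt') or die $!;
        while (my $line = <$fh>) {                   # lines come back already decoded into characters
            print length($line), ": $line";          # length() counts characters here, not bytes
        }
        close($fh);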

        The problem with unicode is when doing something like length() and other functions, where it has to step through the entire string to count the number of characters, since characters could be multi-byte.

        My take is that it’s easier to simply turn off Unicode and deal with multi-byte chars separately in the coding algorithm. For example, if you know that your text is in English, you can do some simple substitutions (5 of them, IIRC) that will bring everything down to ASCII first, then process as normal.

  4. National Instruments LabVIEW.

    Going from text-based to purely visual (the dataflow paradigm) required several lobotomies and an entirely new dictionary of profanities.

    I mean, seriously: no ability to zoom in/out, my ability to program limited by my dexterity with a mouse, and if you have any form of OCD you can spend more time “arranging the wires” than developing the code.

    …and don’t get me started on the (lack of a) help system, consisting of multiple textual descriptions for a graphical programming language, with (only a few) examples apparently randomly scattered throughout.

  5. There used to be an esolang listserv with most of those late 90s, early 00s designers joining in. Hope someone kept an archive somewhere. There was some really good stuff. I remember nobody ever could come up with a decent queue-based language that was fun and interesting. This was before I learned about mercury delay lines, so maybe it’s time for another try.
