In Search Of The First Comment

Are you writing your code for humans or computers? I wasn’t there, but my guess is that at the dawn of computing, people thought that they were writing for the machines. After all, they were writing in machine language, and whatever bits they flipped into the electronic brain stayed in the electronic brain, unless punched out on paper tape. And the commands made the machine do things, not other people. Code was written strictly for computers.

Modern programming practice, on the other hand, is aimed firmly at people. Variable and function names are chosen to be long and to describe what they contain or do. “Readability” of code is a prized attribute. Indeed, sometimes the fact that it does the right thing at all almost seems to be an afterthought. (I kid!)

Somewhere along this path, there was an important evolutionary step, like the first fish using its flippers to walk on land. Comments were integrated into programming languages, formalizing the notes that coders of old surely wrote by hand in the margins of their paper first drafts before keying the code in. So I went looking for the missing link: the first computer language, and ideally the first program, with comments. I came up empty-handed.

Or rather, full-handed. Every computer language that I could find had comments from the beginning. FORTRAN had comments, marked by a “C” as the first character in a line. APL had comments, marked by the bizarro rune ⍝. Even the custom language written for the Apollo 11 guidance computers had comments — the now-commonplace “#”. I couldn’t find an early programming language without comments.

My guess is that the first language with a comment must have been an assembly language, because I don’t know of any machines with a native comment instruction. (How cool and frivolous would that be?)

Assemblers simply translate mnemonic names into their machine-instruction counterparts, but that simplicity gives them an important freedom: anything after the comment character, traditionally a semicolon, is ignored. Even though you’re just transferring the contents of register X to the memory location pointed to by register Y, you can write that you’re “storing the height above ground (meters)” in the comments.
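
That comment-stripping step takes almost no machinery at all. Here’s a toy sketch of the idea in Python, with invented mnemonics, opcodes, and register encoding rather than any real instruction set: everything after the semicolon is thrown away before the line is translated.

```python
# Toy sketch of an assembler's comment handling; the mnemonics, opcodes,
# and operand encoding are invented for illustration only.
OPCODES = {"LDA": 0x20, "STA": 0x10, "ADD": 0x30}

def assemble_line(line):
    """Drop the comment, then translate the mnemonic and operand."""
    code = line.split(";", 1)[0].strip()    # everything after ';' is ignored
    if not code:
        return None                         # blank or comment-only line
    mnemonic, operand = code.split()
    return [OPCODES[mnemonic], int(operand.lstrip("R"))]

source = [
    "LDA R2   ; load height above ground (meters)",
    "STA R7   ; hand it to the display routine",
    "         ; comment-only lines assemble to nothing",
]

for line in source:
    print(assemble_line(line), "<-", line)
```

The comment survives only in the source file; the assembled output contains no trace of it, which is exactly the distinction the question below turns on.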

The crucial evolutionary step, though, is saving the comments along with the code. Simply ignoring everything that comes after the semicolon and throwing it away doesn’t count. Does anyone know? What was the first code to include comments as part of the code itself, and not simply as marginalia?

Drops Of Jupyter Notebooks: How To Keep Notes In The Information Age

Our digital world is so much more interactive than the paper one it has been replacing. That becomes very obvious in the features of Jupyter Notebooks. The point is to make your data beautiful, organized, interactive, and shareable. And you can do all of this with just a bit of simple coding.

We already leveraged computer power by moving from paper spreadsheets to digital spreadsheets, but they are limited. One thing I’ve seen over and over again — and occasionally been guilty of myself — is spreadsheet abuse. That is, using a spreadsheet program to do something I probably ought to write a program to do. For those times when you want something quick but need more than a spreadsheet, you should check out Jupyter Notebooks. The system is most commonly associated with Python, but it isn’t Python-specific. There are over 100 languages supported — many community-developed. You can even install a C++ interpreter backend for it. Because of the client/server architecture, it is very simple to share notebooks with other users.

You can — in theory — use Jupyter for anything you could use Python for. In practice, it seems to get most of its workout from people analyzing large data sets, doing machine learning, and tackling similar tasks.

The Good: Simple, Powerful, Extensible

The idea is simple. Think of a Markdown-enabled web page that can connect to a backend (a kernel, in Jupyter-speak). The backend can run on your machine or remotely and will support some kind of language — often Python. The document has cells that line up vertically (like a single wide spreadsheet column). For example, here’s a simple notebook I created to explain how a bunch of sine waves add up to a square wave:
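
The notebook itself is behind the link below, but the working part of a demonstration like that is a single code cell that sums the odd harmonics of a sine wave. Here is a minimal sketch of such a cell; the number of terms and the plotting details are my own choices, not necessarily what the original notebook uses.

```python
# One notebook cell: partial sums of the square wave's Fourier series.
# A square wave is (4/pi) * sum over k of sin((2k+1)x) / (2k+1).
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 4 * np.pi, 2000)

plt.figure(figsize=(8, 4))
for n in (1, 3, 10, 50):                 # add more sine waves each time
    y = sum(np.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n))
    plt.plot(x, 4 / np.pi * y, label=f"n = {n}")

plt.legend()
plt.title("Sine waves adding up to a square wave")
plt.show()
```

Run inside a notebook, the plot renders directly below the cell, and a Markdown cell above it can carry the explanation — which is the whole appeal.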

Continue reading “Drops Of Jupyter Notebooks: How To Keep Notes In The Information Age”

Lisp In 200 Lines

Contrary to popular belief, LISP does not stand for “lots of irritating spurious parentheses.” However, it is true that people tend to love or hate this venerable programming language. Whichever side of the fence you’re on, many of the ideas it launched decades ago have become staples of other, newer languages. How much C code do you think it takes to make a functional LISP system? If you guessed more than 200 lines, you’ll want to go look at this GitHub repository.

Admittedly, the code isn’t as good a read as the (sort of) literate programming white paper about the program, but the paper gives a good overview of how 200 lines of C can produce a working LISP-like language good enough to create its own eval loop. It lacks memory management and error detection, though, so if you really wanted to use it, you’d probably need to spruce it up a bit.
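
To get a feel for why the core can be that small, here is a rough sketch of the same shape in Python rather than the repository’s C: a reader that turns text into nested lists, plus a recursive eval. Like the original, it has no memory management or error checking to speak of.

```python
# Not the repository's 200 lines of C -- just a Python sketch of the core
# idea: once you have an environment and a recursive eval, the rest of a
# LISP is mostly plumbing.
import operator as op

def parse(src):
    """Turn '(+ 1 (* 2 3))' into nested Python lists."""
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()
    def read(tokens):
        tok = tokens.pop(0)
        if tok == "(":
            lst = []
            while tokens[0] != ")":
                lst.append(read(tokens))
            tokens.pop(0)                  # drop the closing ")"
            return lst
        try:
            return int(tok)
        except ValueError:
            return tok                     # anything else is a symbol
    return read(tokens)

GLOBAL_ENV = {"+": op.add, "-": op.sub, "*": op.mul, "<": op.lt}

def eval_(x, env=GLOBAL_ENV):
    if isinstance(x, str):                         # symbol: look it up
        return env[x]
    if not isinstance(x, list):                    # number: evaluates to itself
        return x
    if x[0] == "if":                               # (if test then else)
        _, test, then, alt = x
        return eval_(then if eval_(test, env) else alt, env)
    if x[0] == "lambda":                           # (lambda (params) body)
        _, params, body = x
        return lambda *args: eval_(body, {**env, **dict(zip(params, args))})
    proc, *args = [eval_(part, env) for part in x] # application
    return proc(*args)

print(eval_(parse("((lambda (x) (* x x)) (+ 3 4))")))   # prints 49
```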

Continue reading “Lisp In 200 Lines”

All About Eve

Most programming languages today look fairly similar. There are small differences, of course (Python uses significant indentation; Ruby and Perl have some odd-looking constructs). In the 1960s and 1970s, though, a lot of programming languages were pretty cryptic. Algol, APL, and LISP are great examples of unusual-looking programming languages. Even FORTRAN and PL/1 were hard to read. RPG and COBOL were attempts to make programming more accessible, although you could argue that neither of them took over the world. Most programming languages today have more similarity to FORTRAN than to either of those two languages.

A new programming language, Eve, claims to be based on years of research in programming from a human perspective instead of from the computer’s. The result is a language that works by pattern matching instead of the usual flow of control. It is also made to live inside of Markdown documents that can serve as documentation. You can see a video about Eve, below.

Neither of these ideas is totally new. SNOBOL, AWK, and Prolog all have some pattern matching involved. [Donald Knuth] was promoting literate programming back in the 1980s. However, Eve understands modern constructs like web browsers.

Continue reading “All About Eve”

Learn To Program With Literate Programming

My heyday in programming was about five years ago, and I’ve really let my skills fade. I started finding myself making excuses for my lack of ability. I’d tackle harder ways to work around problems just so I wouldn’t have to code. Worst of all, I’d find myself shelving projects because I no longer enjoyed coding enough to do that portion. So I decided to put in the time and get back up to speed.

Normally, I’d get back into programming out of necessity. I’d go on a coding binge, read a lot of documentation, and cut and paste a lot of code. It works, but I’d end up with a really mixed understanding of what I did to get the working code. This time I wanted to structure my learning so I’d end up with a more, well, structured understanding.

However, there’s a problem. Programming books are universally boring. I own a really big pile of them, and that’s after I gave a bunch away. It’s not really the fault of the writers; it’s an awkward subject to teach. They usually start off by torturing the reader with a chapter or two of painfully basic concepts with just enough arcana sprinkled in to massage a migraine into existence. Typically they also like to mention that the arcana will be demystified in another chapter. The next step is to make you play typist and transcribe a big block of code with new and interesting bits into an editor and run it. Presumably, the act of typing along leaves the reader with such a burning curiosity that the next seventeen pages of dry monologue about the thirteen lines of code are transformed into riveting prose within the reader’s mind. Maybe a structured understanding just isn’t worth it.

I wanted to find a new way to study programming. One where I could interact with the example code as I typed it. I wanted to end up with a full understanding before I pressed that run button for the first time, not after.

When I first read about literate programming, my very first instinct said: “nope, not doing that.” Donald Knuth, who is no small name in computing, proposes a new way of doing things in his Literate Programming. Rather than writing the code in the order the compiler likes to see it, write the code in the order you’d like to think about it along with a constant narrative about your thoughts while you’re developing it. The method by which he’d like people to achieve this feat is with the extensive use of macros. So, for example, a literate program would start with a section like this:
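
The actual example appears in the full post, but to give the flavor, here is a loose noweb-style sketch of my own, with made-up chunk names and plain Python inside the chunks rather than Knuth’s original WEB notation.

```
@ At the top level, the program is just: read the samples, then report
their average. The details come later, in whatever order I want to
explain them.

<<average.py>>=
<<read the samples>>
<<report the average>>

@ The report is the part I actually care about, so it gets written first,
even though the assembled program needs the samples before it can run.

<<report the average>>=
print(sum(samples) / len(samples))

@ Reading the samples is plain file handling, saved for last.

<<read the samples>>=
with open("samples.txt") as f:
    samples = [float(line) for line in f]
```

A literate programming tool then “tangles” those named chunks into source code in the order the interpreter or compiler wants, and “weaves” the same file into formatted documentation.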

Continue reading “Learn To Program With Literate Programming”