Contrary to popular belief, LISP does not stand for “lots of irritating spurious parentheses.” However, it is true that people tend to either love or hate this venerable programming language. Whichever side of the fence you’re on, many of the ideas it launched decades ago have become staples of other, newer languages. How much C code do you think it takes to make a functional LISP system? If you guessed more than 200 lines, you’ll want to go look at this GitHub repository.
Actually, the code isn’t as good as the (sort of) literate-programming white paper that accompanies it, but it gives a good overview of how 200 lines of C can produce a working LISP-like language, complete with its own eval loop. It does lack memory management and error detection, so if you really wanted to use it, you’d probably need to spruce it up a bit.
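To give a flavor of what fits in that kind of budget, here is a minimal sketch of the tagged cell and eval loop a tiny LISP like this is typically built around. This is our own illustration, not the code from the repository, and it shares the same allocate-and-never-free attitude noted above:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct Obj Obj;
typedef Obj *(*PrimFn)(Obj *args, Obj *env);

struct Obj {
    enum { T_SYMBOL, T_PAIR, T_PRIMITIVE } type;
    union {
        const char *symbol;               /* T_SYMBOL */
        struct { Obj *car, *cdr; } pair;  /* T_PAIR */
        PrimFn prim;                      /* T_PRIMITIVE */
    };
};

/* malloc with no free and no error checks, in the spirit of the original */
static Obj *new_obj(void) { return malloc(sizeof(Obj)); }

static Obj *symbol(const char *s) {
    Obj *o = new_obj();
    o->type = T_SYMBOL;
    o->symbol = s;
    return o;
}

static Obj *cons(Obj *car, Obj *cdr) {
    Obj *o = new_obj();
    o->type = T_PAIR;
    o->pair.car = car;
    o->pair.cdr = cdr;
    return o;
}

/* The eval loop in miniature: symbols are looked up in an association-list
 * environment, pairs are applications of a primitive, everything else
 * evaluates to itself. */
static Obj *lookup(Obj *sym, Obj *env) {
    for (; env; env = env->pair.cdr)
        if (strcmp(env->pair.car->pair.car->symbol, sym->symbol) == 0)
            return env->pair.car->pair.cdr;
    return NULL;
}

static Obj *eval(Obj *expr, Obj *env) {
    switch (expr->type) {
    case T_SYMBOL: return lookup(expr, env);
    case T_PAIR:   return eval(expr->pair.car, env)->prim(expr->pair.cdr, env);
    default:       return expr;
    }
}

/* One primitive is enough for a demo: (quote x) returns x unevaluated. */
static Obj *prim_quote(Obj *args, Obj *env) { (void)env; return args->pair.car; }

int main(void) {
    Obj *q = new_obj();
    q->type = T_PRIMITIVE;
    q->prim = prim_quote;

    Obj *env  = cons(cons(symbol("quote"), q), NULL);               /* ((quote . <prim>)) */
    Obj *expr = cons(symbol("quote"), cons(symbol("hello"), NULL)); /* (quote hello) */

    printf("%s\n", eval(expr, env)->symbol);  /* prints "hello" */
    return 0;
}
```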
Is this practical? We don’t doubt the educational value, but we were really interested in whether it could squeeze into a microcontroller, since it is so sparse. While we haven’t tried it, we could see using a PC-side development tool (coded in proper LISP) to catch errors before the code ever hits the chip. Many embedded systems have a pretty static memory footprint, so the lack of memory management might not be a major problem. Then, too, it would be pretty easy to add your own functions for explicit memory management, which is probably better for a microcontroller anyway.
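For instance, explicit memory management could be as simple as a statically allocated pool of cells with a free list. The sketch below just illustrates the idea; the names (Cell, cell_alloc, cell_free, POOL_SIZE) are ours, not anything from the repository:

```c
#include <stddef.h>

#define POOL_SIZE 256              /* static footprint, known at build time */

typedef struct Cell {
    struct Cell *car, *cdr;        /* cdr doubles as the free-list link */
} Cell;

static Cell pool[POOL_SIZE];
static Cell *free_list;

void cell_pool_init(void) {
    for (size_t i = 0; i + 1 < POOL_SIZE; i++)
        pool[i].cdr = &pool[i + 1];
    pool[POOL_SIZE - 1].cdr = NULL;
    free_list = &pool[0];
}

Cell *cell_alloc(void) {
    Cell *c = free_list;
    if (c)
        free_list = c->cdr;
    return c;                      /* NULL means the pool is exhausted */
}

void cell_free(Cell *c) {
    c->cdr = free_list;
    free_list = c;
}
```

No malloc(), no fragmentation, and the worst-case memory use is known before you flash the part, which is usually what you want on a microcontroller.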
On the other hand, we’ve already seen small LISP interpreters running on micro-sized hardware. Naturally, if you can do that on an AVR, the ESP8266 can easily match it. What do you do with a pint-sized, battery-operated LISP machine? Beats us. We doubt you’ll be moving your Emacs macros to one anytime soon.
Unfortunately the white paper can’t seem to distinguish between “8 bits” and “8 bytes”, which is worrying.
One wrong word in an otherwise good example of literate programming is hardly “worrying”. And it is all on GitHub, so just send him a bug report, or fork the repository, fix it, and send a pull request:
https://github.com/carld/carld.github.io/blob/master/_posts/2017-06-20-lisp-in-less-than-200-lines-of-c.md
If you really think so: do I have bad news for you…
If not: if it irritates you enough you can contact the author.
LISP and FORTH for the conciseness award.
Someone, Alan Kay maybe, wrote a page of Smalltalk, but it wasn’t a functioning interpreter. Of course FORTH has its legendary one-screeners! :)
Interesting! Are you aware of the book “Build Your Own LISP” (https://www.amazon.com/dp/B00ONV8CNO/)? The author says you can do it “in under 1000 lines of [C] code”, so maybe he includes some of the other features you mention.
Interesting paper, but these 200 lines only define the basic primitives, the virtual-machine core. One must add at least all the conditional and arithmetic primitives; in the end it will be a lot more than 200 lines.
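To put some shape on that: in a typical interpreter of this style, each extra primitive is a short C function plus a table entry, and that is exactly where the additional lines pile up. The following is a generic illustration with made-up names, not the repository’s actual interface:

```c
/* Hypothetical numeric value type and primitive table -- illustration only. */
typedef struct Value { long num; } Value;

static Value prim_add(Value a, Value b) { return (Value){ a.num + b.num }; }
static Value prim_lt (Value a, Value b) { return (Value){ a.num <  b.num }; }

struct primitive {
    const char *name;
    Value (*fn)(Value, Value);
};

static const struct primitive primitives[] = {
    { "+", prim_add },
    { "<", prim_lt  },
    /* ...and so on for -, *, =, cons, car, cdr, if, ... */
};
```

Each entry is only a handful of lines, but a usable set of them adds up quickly.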
I really hate the whole “lines of code” metric. I have a set of source files, all with exactly the same code in them, but a very wide range of line counts. Just look at C code written in Linux-KNF versus K&R style: you get a difference of 10% in lines between the two. Then there are all the conventions for how many statements are placed on each line, or things like printf and whether a multi-line message should be done in one printf or one printf per line. Then there are the debates on whether #ifdef blocks count as lines of code.
I prefer to measure C code by the number of semis (semicolons); it’s much more accurate, since it measures what the compiler parses rather than what a human would parse. Or I’ll measure how many lines of assembly the compiler outputs.
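For anyone who wants to play with that metric, a naive counter is only a few lines of C. This one is just for illustration and doesn’t skip comments or string literals, so take its output as a rough gauge:

```c
#include <stdio.h>

/* Count semicolons in a file given on the command line, or on stdin. */
int main(int argc, char **argv) {
    FILE *f = (argc > 1) ? fopen(argv[1], "r") : stdin;
    if (!f) {
        perror("fopen");
        return 1;
    }

    long semis = 0;
    int c;
    while ((c = fgetc(f)) != EOF)
        if (c == ';')
            semis++;

    printf("%ld\n", semis);
    if (f != stdin)
        fclose(f);
    return 0;
}
```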
I tend to measure code by whether it produces usable programs. What good is a program with thousands of lines or semicolons if the result is merely Microsoft quality?
Well, you are talking about code-quality, not code size. The two are not related in any way.
I’ve seen 50 line and 500,000 line programs that I’d call near-perfect and some equal-sized ones that I’d label as “looks like the author just piped /dev/random directly into the compiler until an executable popped out”.
And even if you mean execution speed, I’ve seen programs that were originally 30-40 lines long and ran slowly, where tacking on thousands more lines made the thing 100x faster and more reliable. That happened about as often as I saw large programs get faster after reducing the number of lines.
Case in point: I had built a configuration file parser. The first version was a lean 25-line program using a massive regex to parse inputs, and it ran slow as hell. I then re-wrote it as a ~2000-line block of case statements, and it completed in about 3% of the time of the first version.
But overall, that is why I also measure programs by the number of lines of assembly the compiler pukes out before it gets turned into executable code. The two versions of that configuration file parser produced very different amounts of assembly, with the ~2000-line version having about 1/20th of the lines once you expanded the loops of the first one.
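For the curious, the case-statement approach looks roughly like this; a toy sketch with made-up keys (“host”, “port”), not my actual parser:

```c
#include <stdio.h>
#include <string.h>

/* Dispatch on the first character of a "key=value" line instead of
 * running a big regex over every line. */
static void handle_line(const char *line) {
    switch (line[0]) {
    case 'h':
        if (strncmp(line, "host=", 5) == 0)
            printf("host -> %s", line + 5);
        break;
    case 'p':
        if (strncmp(line, "port=", 5) == 0)
            printf("port -> %s", line + 5);
        break;
    case '#':
    case '\n':
        break;                              /* comment or blank line */
    default:
        fprintf(stderr, "unknown key: %s", line);
    }
}

int main(void) {
    char line[256];
    while (fgets(line, sizeof line, stdin))
        handle_line(line);
    return 0;
}
```

Scale that up to every key and every value format and you land in the thousands of lines, but each line gets handled with a couple of comparisons instead of a full regex pass.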
Sorry, madame, you missed the point.
“Measuring code” is an arbitrary exercise if that measurement tells you nothing about whether the code creates usable programs, good programs, readable code, serviceable code, well-documented code, performant results – or anything. It is just counting characters.
Using semicolons to count the size of a script makes Python the most efficient language ever :)
Well, obviously, the metric would change depending on that language’s statement terminator.
CAR CDR
My other car is a cdr
If you want a real second car, get a cadr