It sounds like the start of a joke, but what’s the difference between taking Cornell’s CS6120 online and in-person? The instructor, [Adrian Sampson], notes that the real class has deadlines, an end-of-semester project, and a discussion board that is only open to real-life students. He also notes that you only earn “imagination credits.”
Still, this is a great opportunity to essentially audit a PhD-level computer science class on a fascinating topic. The course consists of videos, papers, and open source projects using LLVM and a custom JSON-based intermediate representation made for the class. It is all open source, too. You do, however, need access to the papers, some of which are behind paywalls. Your local library can help if you can’t otherwise find copies.
At some point or another, many of us have tried to see how much of our digital lives could be accessed from the comfort of a terminal. We’ve tried Alpine for email, W3M for web browsing, and even watched Star Wars via telnet. But in the increasingly socially-distant world we find ourselves in today, one question remains: what about video calling?
As you may have guessed, [Andy]’s solution replaces the conventional video stream we’re all used to with realtime animated ASCII art. The system works by capturing a video stream from a webcam, “compressing” each pixel by converting it into an ASCII character, and stuffing the entire frame into a TCP packet. Each client is connected to a server (meeting room?) which coordinates the packets, sending them back and forth appropriately.
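For a sense of how simple that core trick can be, here is a minimal Python sketch (not [Andy]’s actual code, which has its own client and server): grab a webcam frame with OpenCV, shrink it, and map each pixel’s brightness onto a character ramp. The resulting string is what would get stuffed into a packet.

```python
# Minimal sketch of the frame-to-ASCII step: downscale a webcam frame and map
# brightness to characters. The character ramp is an illustrative choice.
import cv2  # assumes opencv-python is installed

RAMP = " .:-=+*#%@"  # dark-to-bright character ramp

def frame_to_ascii(frame, cols=80, rows=24):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (cols, rows))
    lines = []
    for row in small:
        # scale 0-255 brightness onto the ramp index
        lines.append("".join(RAMP[int(p) * (len(RAMP) - 1) // 255] for p in row))
    return "\n".join(lines)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    payload = frame_to_ascii(frame).encode()  # this is what would go over TCP
    print(payload.decode())
cap.release()
```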
As impressive as it is impractical, the only area in which the project is lacking is audio. [Andy] suggests using Discord to solve that, but here’s hoping we see subtitles in version 2! Will AsciiZOOM be replacing our favorite videoconferencing suite any time soon? No. Are we glad it exists? You betcha.
We’ve become used to finding models on websites such as Thingiverse and downloading them to print. After all, whose hackerspace doesn’t have a pile of novelty prints? But how about printing them on paper? For the plotter enthusiast, turning a 3D model into something a pen can draw can be particularly annoying. Never fear, [Trammell Hudson] is here with an online 3D-to-2D converter just for plotters. [Trammell’s] creation makes a vector image suitable for a plotter while eliminating spurious behind-the-scenes lines.
Plotter drawings are the pen-and-paper equivalent of a vector CRT display, in which the graphics are printed as continuous strokes. Rendering a 3D model as a wireframe for a plotter requires the removal of any pen strokes that come from the 3D space behind the surface in view. Loading various models into the web page seemed to do a pretty good job of this, though the ubiquitous Benchy 3D printer test model lived up to its billing as a torture test, taking several minutes to render.
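That hidden-line culling is the hard part, but the overall idea can be sketched quickly. Below is a rough Python illustration (not [Trammell]’s code) that projects a triangle mesh to 2D and keeps only the edges of triangles facing the viewer, i.e. simple back-face culling. That is enough for convex shapes; a real tool also has to clip edges hidden behind other geometry.

```python
# Rough sketch: keep only edges of viewer-facing triangles, then drop Z to get
# 2D pen strokes. Assumes counter-clockwise triangle winding.
import numpy as np

def visible_edges(vertices, triangles, view_dir=np.array([0.0, 0.0, -1.0])):
    """vertices: (N, 3) array; triangles: list of (i, j, k) index tuples."""
    edges = set()
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        normal = np.cross(b - a, c - a)
        if np.dot(normal, view_dir) < 0:          # triangle faces the viewer
            for e in ((i, j), (j, k), (k, i)):
                edges.add(tuple(sorted(e)))
    # orthographic projection: simply drop the Z coordinate for each stroke
    return [(vertices[p][:2], vertices[q][:2]) for p, q in edges]
```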
As anyone who has followed the #PlotterTwitter social media hashtag will know, there is a considerable community of pen plotter enthusiasts who are pushing the boundaries of what their machines can do. [Trammell] has posted his plotter producing some of the work created with this tool, and we can see that it’s likely to work better with lower-poly models.
We’ve featured a lot of plotters over the years, as they seem to be a popular project. If you’d like one, they can be made from readily available parts, including those scavenged from scrap DVD drives or printers.
Certain old computers — most frequently those using the RCA 1802 — were fond of using an early form of byte-code interpreter for programs, especially games. The interpreter, CHIP-8, was very simple to create but offered high-level features that were tedious to recreate in the native assembly language. Because there are a fair number of simple games written in CHIP-8, there are, of course, emulators for it, and [River Gillis] decided to look inside the CHIP-8 byte code interpreter.
Part of the power of CHIP-8 was that it only had 35 virtual instructions. That was important when you were trying to shoehorn a game and the interpreter into a very small memory. Remember, in those days 1K of memory wasn’t an unusual amount, although the prototypical CHIP-8 host would have 4K.
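To get a feel for how small such an interpreter can be, here is a sketch in Python of the fetch-and-dispatch heart of a CHIP-8 emulator. It implements only a handful of the 35 instructions and leaves out the display, timers, and input entirely.

```python
# Tiny CHIP-8 core sketch: fetch a 16-bit big-endian word and dispatch on its
# top nibble. Only a few instructions are shown; a real emulator fills in the rest.
class Chip8:
    def __init__(self, program: bytes):
        self.mem = bytearray(4096)                # the prototypical 4K host
        self.mem[0x200:0x200 + len(program)] = program
        self.v = [0] * 16                         # V0..VF registers
        self.i = 0                                # index register
        self.pc = 0x200                           # programs load at 0x200

    def step(self):
        op = (self.mem[self.pc] << 8) | self.mem[self.pc + 1]
        self.pc += 2
        x, nn, nnn = (op >> 8) & 0xF, op & 0xFF, op & 0xFFF
        if op >> 12 == 0x1:      # 1NNN: jump to address NNN
            self.pc = nnn
        elif op >> 12 == 0x6:    # 6XNN: VX = NN
            self.v[x] = nn
        elif op >> 12 == 0x7:    # 7XNN: VX = (VX + NN) & 0xFF
            self.v[x] = (self.v[x] + nn) & 0xFF
        elif op >> 12 == 0xA:    # ANNN: I = NNN
            self.i = nnn
        else:
            raise NotImplementedError(f"opcode {op:04X}")
```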
Lacking a DVD drive, [jg] was watching a TV series in the form of a bunch of .avi video files. Of course, when every episode contains a full intro, it is only a matter of time before that gets too annoying to sit through.
Chapter breaks are reliably inserted around the intro, even though it doesn’t always occur in the same place.
The usual method of skipping the intro on a plain video file is a simple one:
Manually drag the playback forward past the intro.
Oops, that’s too far, bring it back.
Ugh, reversed it too much, nudge it forward.
Okay, that’s good.
[jg] was certain there was a better way, and the solution was using audio fingerprinting to insert chapter breaks. The plain video files now have chapter breaks around the intro, allowing for easy skipping straight to the content. The reason behind selecting this method is simple: the show intro is always 52 seconds long, but it isn’t always in the same place. The intro plays somewhere within the first two to five minutes of an episode, so simply jumping to a fixed timestamp won’t do the trick.
The first job is to extract the audio of an intro sequence so that it can be used for fingerprinting. Exporting the first 15 minutes of audio with ffmpeg easily creates a WAV file that can be trimmed down with the audio editor of choice. That clip gets fed into the open-source SoundFingerprinting library as a signature, then each video has its audio track exported and the signature identified within it. SoundFingerprinting therefore detects where (down to the second) the intro exists within each video file.
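SoundFingerprinting is a .NET library, so as a rough stand-in, here is the same idea expressed in Python with plain cross-correlation: find where a known intro clip best lines up inside an episode’s audio track. This is far cruder than real acoustic fingerprinting, and the file names are only placeholders, but it shows the “locate this clip inside that track” step.

```python
# Cross-correlation stand-in for the fingerprinting step. Assumes both WAV files
# were already extracted with ffmpeg and share the same sample rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def to_mono(samples):
    return samples.mean(axis=1) if samples.ndim > 1 else samples

def find_clip_offset(episode_wav: str, intro_wav: str) -> float:
    rate_e, episode = wavfile.read(episode_wav)
    rate_i, intro = wavfile.read(intro_wav)
    assert rate_e == rate_i, "resample so both files share one sample rate"
    episode = to_mono(episode).astype(np.float32)
    intro = to_mono(intro).astype(np.float32)
    # cross-correlate: the peak marks where the intro best matches the episode
    corr = fftconvolve(episode, intro[::-1], mode="valid")
    return float(np.argmax(corr)) / rate_e    # offset of the intro, in seconds

# hypothetical file names, just for illustration
print(find_clip_offset("episode01_audio.wav", "intro.wav"))
```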
Marking out chapter breaks using that information is conceptually simple, but ends up being a bit roundabout because it seems .avi files don’t have a simple way to encode chapters. However, .mkv files are another matter. To get around this, [jg] first converts each .avi to .mkv using ffmpeg, then splices in the chapter breaks with mkvmerge. One important element is that the reformatting between .avi and .mkv is done without completely re-encoding the video itself, so it’s a quick process. The result is a bunch of .mkv files with chapter breaks around the intro, wherever it may be!
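[jg]’s actual script is linked below; the sketch that follows is just an illustration of the same pipeline in Python, with hypothetical file names: remux the .avi into an .mkv without re-encoding, write a simple OGM-style chapter file marking the start and end of the 52-second intro, and let mkvmerge splice it in.

```python
# Sketch of the remux-and-chapter step. intro_start would come from the
# fingerprinting step; the 52-second intro length comes from the write-up.
import subprocess

def add_intro_chapters(avi_path: str, intro_start: float, intro_len: float = 52.0):
    mkv_path = avi_path.rsplit(".", 1)[0] + ".mkv"
    final_path = avi_path.rsplit(".", 1)[0] + ".chaptered.mkv"

    def ts(seconds: float) -> str:
        m, s = divmod(seconds, 60)
        h, m = divmod(int(m), 60)
        return f"{h:02d}:{m:02d}:{s:06.3f}"

    # 1. remux only: copy the streams with no re-encode, so it is quick
    subprocess.run(["ffmpeg", "-i", avi_path, "-c", "copy", mkv_path], check=True)

    # 2. simple OGM-style chapter file understood by mkvmerge
    chapters = (
        f"CHAPTER01=00:00:00.000\nCHAPTER01NAME=Part 1\n"
        f"CHAPTER02={ts(intro_start)}\nCHAPTER02NAME=Intro\n"
        f"CHAPTER03={ts(intro_start + intro_len)}\nCHAPTER03NAME=Part 2\n"
    )
    with open("chapters.txt", "w") as f:
        f.write(chapters)

    # 3. splice the chapter marks into a new file
    subprocess.run(
        ["mkvmerge", "-o", final_path, "--chapters", "chapters.txt", mkv_path],
        check=True,
    )
```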
The script is available here for anyone to play with, and the project page is a good learning reference because [jg] kindly provides all the command-line options used for each tool. Interested in using audio fingerprinting in your own projects? Remember to also check out Olaf, the Overly Lightweight Acoustic Fingerprinting method that can be implemented in embedded systems and web browsers.
[Niklas Roy] rolled his own photo diary because he found the core functionality of something like Instagram attractive, but didn’t want the social network baggage that came with it. His simple system is called my own insta ;) and it consists of some JavaScript and PHP to create a nice progressive web app photo diary and backend that can be accessed just fine from a mobile device. It is available on GitHub for anyone interested in having their own.
This project came up because [Niklas] sometimes found himself working on small projects or experiments that aren’t destined for proper documentation, but nevertheless could benefit from being shared as a photo with a short description. This dovetails with what many social networks offer, except that those platforms also come with other aspects [Niklas] doesn’t particularly want. His online photo diary solves this by having a simple back end with which he can easily upload, sort, and caption photos, even from a mobile device.
We often hear the term “Turing-complete” without giving much thought as to what the implications might be. Technically, Microsoft PowerPoint, Portal 2, and Magic: The Gathering are all Turing-complete, but what of it? Yet each time someone embarks on an incredible quest of perseverance and creates a computer in one of these mediums, we stand back in awe.
[Nicolas Loizeau] is one such individual who has created a computer in Conway’s Game of Life. Instead of electricity, the Game of Life uses gliders as signals. Because two orthogonal gliders can cancel each other out, or form a glider eater if they intersect with a suitable phase shift, the basic logic gates can be formed from these interactions. This means the spacing between gates is crucial, as signals need to stay in phase alignment. The basic building blocks are a period-60 gun, a 90-degree glider reflector, a glider duplicator, and a glider eater.
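To make the idea of gliders-as-signals a bit more concrete, here is a tiny, purely illustrative Game of Life step in Python (nothing to do with [Nicolas]’s actual build): every cell lives or dies by counting its eight neighbours, yet a five-cell glider seeded into the grid marches diagonally across it, which is exactly the sort of moving pattern the computer treats as a signal.

```python
# Minimal Game of Life step plus a glider seed, on a small wrapping grid.
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    # count the eight neighbours of every cell by summing shifted copies of the grid
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    # birth on exactly 3 neighbours, survival on 2 or 3
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

grid = np.zeros((16, 16), dtype=np.uint8)
glider = [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]   # the classic 5-cell glider
for y, x in glider:
    grid[y, x] = 1
for _ in range(4):   # after 4 steps the glider has moved one cell diagonally
    grid = life_step(grid)
```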
The actual architecture has an 8-bit data bus, a 64-byte memory with two read ports, a ROM with 21 bits per line, and a one-hot encoded ALU supporting 8 different operations. Instructions have a 4-bit opcode, which is decoded into a few different instructions. The clock is four loops, formed by the glider reflectors, around which the glider beams rotate. This gives the computer four stages: execution, writing, incrementing the PC, and writing the PC to memory.
The Game of Life is an excellent example of a cellular automaton (CA). There are several other types of CAs, and the history behind them is fascinating. We’ve covered this field before and delved into this beautiful fringe of computer science. Check out the video below to truly get a sense of the scale of the machine that [Nicolas] has devised.