[Felipe Tavares] wasn’t satisfied with the boring default fonts on an HD44780-based display. And while you can play some clever tricks with user-defined characters, if you want to treat the display as an array of pixels, you’ve got to get out your scalpel and cut up a data line.
This done, it looks like [Felipe] has it working! If you can read Rust for the ESP32, he has even provided a working demo along with the code behind it.
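We haven't picked through [Felipe]'s Rust, but for contrast, the stock user-defined-character trick is worth a look: the HD44780 has eight CGRAM slots, each holding a 5×8 glyph you can redefine at runtime, and that's the entire pixel budget you get without surgery. A minimal sketch, assuming a Raspberry Pi with the RPLCD Python library and a PCF8574 I2C backpack (your address and wiring will vary):

```python
# A minimal sketch of the stock trick: the HD44780 holds eight
# user-defined 5x8 glyphs in CGRAM, and that's all the "pixel" freedom
# you get without surgery. Assumes the RPLCD library on a Raspberry Pi
# with a PCF8574 I2C backpack at 0x27 -- adjust for your hardware.
from RPLCD.i2c import CharLCD

lcd = CharLCD(i2c_expander='PCF8574', address=0x27, cols=16, rows=2)

# One row per line, five bits per row, MSB on the left.
zigzag = (
    0b10000,
    0b01000,
    0b00100,
    0b00010,
    0b00001,
    0b00010,
    0b00100,
    0b01000,
)

lcd.create_char(0, zigzag)  # upload the glyph into CGRAM slot 0 of 8
lcd.write_string('\x00')    # print the custom character at the cursor
```

Animating anything bigger than eight glyphs means constantly re-uploading CGRAM, which is exactly the limitation the data-line surgery gets around.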
We can’t help but wonder if it’s not possible to go even lower-level and omit the HD44780 entirely. Has anyone tried driving one of these little LCD displays directly from a microcontroller, essentially implementing the HD44780 yourself?
Well, we guess it had to happen eventually — Ford is putting plans in place to make its vehicles capable of self-repossession. At least it seems so from a patent application that was published last week, which reads like something written by someone who fancies themselves an evil genius but is just really, really annoying. Like most patent applications, it covers a lot of ground; aside from the obvious capability of a self-driving car to drive itself back to the dealership, Ford lists a number of steps that its proposed system could take before or instead of driving the car away from someone who’s behind on payments.
Examples include selectively disabling conveniences in the vehicle, like the HVAC or infotainment systems, or even locking the doors and effectively bricking the vehicle. Ford graciously allows for using the repossessed vehicle in an emergency, and mentions using cameras in the vehicle and a “neural network” to verify that the locked-out user is indeed having, say, a medical emergency. What could possibly go wrong?
It’s true what they say — you never know what you can do until you try. Russell Kirsch, who developed the first digital image scanner and subsequently invented the pixel, was a firm believer in this axiom. And if Russell had never tried to get a picture of his three-month-old son into a computer back in 1957, you might be reading Hackaday in print right now. Russell’s work laid the foundation for the algorithms and storage methods that make digital imaging what it is today.
Russell A. Kirsch was born June 20, 1929 in New York City, the son of Russian and Hungarian immigrants. He got quite an education, beginning at the Bronx High School of Science. He went on to earn a bachelor's degree in Electrical Engineering from NYU and a Master of Science from Harvard, and attended American University and MIT.
In 1951, Russell went to work for the National Bureau of Standards, now known as the National Institute of Standards and Technology (NIST). He spent nearly 50 years at NIST, starting out with one of the first programmable computers in America, the SEAC (Standards Eastern Automatic Computer). This room-sized computer, built in 1950, was developed as an interim solution for the Census Bureau to do research (PDF).
Like the other computers of its time, SEAC spoke the language of punch cards, mercury memory, and wire storage. Russell Kirsch and his team were tasked with finding a way to feed pictorial data into the machine without any prior processing. Since the computer was supposed to be temporary, its use wasn't as tightly controlled as that of other machines; although it ran 24/7 and got plenty of use, SEAC remained accessible enough to leave time for bleeding-edge experimentation. NIST ended up keeping SEAC around for the next thirteen years, until 1963.
The Original Pixel Pusher
The term ‘pixel’ is a shortened portmanteau of picture element. Technically speaking, the pixel is the smallest addressable element of a digital image. Pixels are the building blocks for anything that can be displayed on a computer screen, so they’re kind of the first addressable blinkenlights.
As the scanner's drum slowly rotated, a photomultiplier tube moved back and forth, scanning the image through a square viewing hole in the wall of a box. The tube digitized the picture by transmitting ones and zeros to SEAC that described what it saw through the square viewing hole — 1 for white, and 0 for black. The digital image of Walden is 76 x 76 pixels, which was the maximum allowed by SEAC.
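For a feel of what that process produced, here's a rough modern analogue using Pillow; the filename and the threshold of 128 are our stand-ins, not Kirsch's actual values:

```python
# A rough modern analogue of SEAC's one-bit, 76 x 76 scan: downsample a
# photo, then call every pixel above a threshold white (1) and the rest
# black (0). 'son.jpg' and the threshold are stand-ins, not Kirsch's values.
from PIL import Image

img = Image.open('son.jpg').convert('L').resize((76, 76))
bits = [[1 if img.getpixel((x, y)) > 128 else 0 for x in range(76)]
        for y in range(76)]

# Crude terminal rendering: '#' for white, '.' for black
for row in bits:
    print(''.join('#' if b else '.' for b in row))
```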
In the video below, Russell discusses his later idea of variably-shaped pixels and shows that they make a better image, carrying more information than square pixels do with significantly fewer pixels overall. It takes some finagling, as pixel pairs of triangles and rectangles must be carefully chosen, rotated, and mixed together to best represent the image, but the image quality is definitely worth the effort. Following that is a video of Russell discussing SEAC’s hardware.
Russell retired from NIST in 2001 and moved to Portland, Oregon. As of 2012, he could be found in the occasional coffeehouse, discussing technology with anyone he could engage. Unfortunately, Russell developed Alzheimer’s and died from complications on August 11, 2020. He was 91 years old.
We hackers just can’t get enough of sorters for confections like Skittles and M&Ms, the latter clearly being the superior candy in terms of both sorting and snackability. Sorting isn’t just about taking a hopper of every color and making neat monochromatic piles, though. [JohnO3] noticed that all those colorful candies would make dandy pixel art, so he built a bot to build up images a Skittle at a time.
Dubbed the “Pixel8R” after the eight colors in a regulation bag of Skittles, the machine is a largish affair with hoppers for each color up top and a “canvas” below with Skittle-sized channels and a clear acrylic cover. The hoppers each have a rotating disc with a hole to meter a single Skittle at a time into a funnel which is connected to a tube that moves along the top of the canvas one column at a time. [JohnO3] has developed a software toolchain to go from image files to Skittles using GIMP and a Python script, and the image builds up a row at a time until 2,760 Skittle-pixels have been placed.
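We haven't seen [JohnO3]'s script, but the heart of such a toolchain is straightforward: resize the source image to the canvas grid, then snap each pixel to the nearest hopper color. A sketch of that idea in Python with Pillow, where every RGB value is a placeholder guess rather than a calibrated Skittle color:

```python
# Hypothetical core of an image-to-Skittles pipeline: resize the source
# image to the canvas grid, then snap each pixel to the nearest of the
# eight hopper colors. Every RGB value below is a placeholder guess.
from PIL import Image

HOPPER_COLORS = [
    (200, 30, 40), (230, 120, 20), (240, 210, 40), (40, 160, 60),
    (90, 40, 110), (120, 200, 220), (240, 240, 240), (60, 40, 30),
]

def nearest(rgb):
    """Index of the hopper color closest to rgb by squared distance."""
    return min(range(len(HOPPER_COLORS)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(rgb, HOPPER_COLORS[i])))

def image_to_skittles(path, cols, rows):
    """Return a rows x cols grid of hopper indices for the given image."""
    img = Image.open(path).convert('RGB').resize((cols, rows))
    return [[nearest(img.getpixel((x, y))) for x in range(cols)]
            for y in range(rows)]

# A 60 x 46 canvas happens to be exactly 2,760 Skittle-pixels; we don't
# know the Pixel8R's real proportions.
grid = image_to_skittles('portrait.png', 60, 46)
```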
The downside: sorting the Skittles into the hoppers. [JohnO3] does this manually now, but we’d love to see a sorter like this one sitting up above the hoppers. Or, he could switch to M&Ms and order single color bags. But where’s the fun in that?
As with many other classics, it’s easy to come up with ways to ruin Tetris, but hard to think of anything that will make it better. Adding more clickiness is definitely one way to improve the game, and playing Tetris on a flip-dot display certainly manages to achieve that.
The surplus flip-dot display [sinowin] used for this version of Tetris is a bit of an odd bird that needed some reverse engineering to be put to work. The display is a 7 x 30 matrix with small dots, plus a tiny green LED for each dot. Those LEDs turned out to be quite useful for replicating the flashing effect used in the original game when a row of blocks is completed, and the sound of the dots being flipped provides audio feedback. The game runs on a Teensy through a custom driver board and uses a PlayStation joystick for control. The video below, in perfectly acceptable vertical format, shows the game in action and really makes us want to build our own, perhaps with a larger and even clickier flip-dot display.
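We obviously don't have [sinowin]'s firmware, but the row-clear flash is simple enough to sketch in Python. Here, set_led() is a hypothetical stand-in for the custom driver board's LED calls:

```python
# Sketch of the row-clear flash on a 7 x 30 flip-dot playfield. set_led()
# is a hypothetical stand-in for [sinowin]'s driver board, which we
# haven't seen.
import time

WIDTH, HEIGHT = 7, 30

def set_led(x, y, on):
    """Stub: drive the green LED behind dot (x, y)."""
    pass

def clear_full_rows(field):
    """Flash completed rows on the LEDs, then collapse them."""
    full = [y for y, row in enumerate(field) if all(row)]
    for _ in range(3):                       # three quick flashes
        for on in (True, False):
            for y in full:
                for x in range(WIDTH):
                    set_led(x, y, on)
            time.sleep(0.1)
    # drop the rows above, refilling with empty rows at the top
    remaining = [row for y, row in enumerate(field) if y not in full]
    return [[False] * WIDTH for _ in full] + remaining
```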
The best thing about Tetris is its simplicity: simple graphics, simple controls, and simple gameplay. It’s so simple it can be played anywhere, from a smartwatch to a business card and even on a transistor tester.
If you need to add a small display to your project, you’re not going to do much better than a tiny OLED display. These tiny displays are black and white, usually found in resolutions of 128×64 or some other divisible-by-two value; they’re driven over I2C, the libraries are readily available, and they’re cheap. You can’t do much better than an I2C OLED for displaying a few numbers and some text. There’s a problem, though: OLEDs burn out, or burn in, depending on how you define it. What’s the lifetime of these OLEDs? That’s exactly what [Electronics In Focus] set out to test (YouTube, in Russian, so click the closed captioning button).
The experimental setup is eleven 128×64 OLED displays with SSD1306 controllers, all driven by an STM32 over I2C. Everything’s on a breadboard, and the test pattern is sixteen blocks, each lit in turn for one second. The point is to subject different pixels to different amounts of on-time, and from a surface-level analysis, this is a pretty good way to see if and when OLED pixels burn out.
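We haven't seen the STM32 firmware, but the pattern is easy to approximate. Here's a sketch using the luma.oled Python library on a Raspberry Pi in place of the original rig, with a cumulative fill as our guess at how different blocks end up with different amounts of on-time:

```python
# Approximation of the endurance pattern: carve the 128 x 64 panel into
# sixteen 32 x 16 blocks and, on each one-second step, light blocks 0..n
# cumulatively so block 0 racks up far more on-time than block 15.
# Uses luma.oled on a Raspberry Pi, not the original STM32 firmware.
import time
from luma.core.interface.serial import i2c
from luma.core.render import canvas
from luma.oled.device import ssd1306

device = ssd1306(i2c(port=1, address=0x3C))  # 0x3C is a common default

while True:
    for n in range(16):
        with canvas(device) as draw:         # clears, then draws one frame
            for b in range(n + 1):
                x, y = (b % 4) * 32, (b // 4) * 16
                draw.rectangle((x, y, x + 31, y + 15), fill='white')
        time.sleep(1)
```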
After 378 days, the test was stopped with no failed displays. That comes with a caveat: after a year of endurance testing there were a few burnt-out pixels, correlating with how often those pixels were on. The solution to this problem would be to occasionally ‘jiggle’ the displayed text around the screen, turn the display off when no one is looking at it, or alternatively write a screen saver for OLEDs. That last bit has already been done, and here are the flying toasters to prove it. This is an interesting experiment, and although that weird project you’re working on probably won’t ping an OLED for a year of continuous operation, it’s still something to think about. Video below.
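The ‘jiggle’ fix is simple enough to sketch, too. Here's one way to do it, again with luma.oled standing in for whatever driver your project actually uses; the offsets and the one-minute period are arbitrary choices:

```python
# One way to implement the anti-burn-in 'jiggle': nudge the text by a
# pixel or two every minute so no pixel stays lit indefinitely. Again
# luma.oled is a stand-in; the offsets and period are arbitrary.
import time
from luma.core.interface.serial import i2c
from luma.core.render import canvas
from luma.oled.device import ssd1306

device = ssd1306(i2c(port=1, address=0x3C))
OFFSETS = [(0, 0), (1, 0), (1, 1), (0, 1)]   # small cycle of positions

while True:
    dx, dy = OFFSETS[int(time.time() // 60) % len(OFFSETS)]
    with canvas(device) as draw:
        draw.text((10 + dx, 20 + dy), 'Uptime: OK', fill='white')
    time.sleep(5)
```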