Flip Book Animations On The Inside Of 3D Prints

A partially finished print, with the embedded animation

We’ve all seen 3D printed zoetropes, and drawn flip book animations in the corner of notebooks. The shifting, fluid shape of the layers forming on a 3D printer is satisfying. And we all know the joy of hidden, nested objects.

Hackaday alumnus [Caleb Kraft] has a few art pieces that reflect all of these. He’s been making animations by recording a 3D printer at work. The interesting bit is that his print is made of two objects: an outer one with normal infill that gives a solid form, and a layer-cake-like inner one with solid infill. It’s documented in this video on YouTube.

CAD model of the stack of frames

There are lots of things to get right. The outer object needs to print without supports, and the thickness of the “layer cake” layers determines the frame rate. We had to wonder how he triggered the shutter at just the moments when the head wasn’t in the way.
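As a back-of-the-envelope illustration of that layer-thickness trade-off, here’s the arithmetic in a few lines of Python. The numbers are our own assumptions, not [Caleb]’s:

```python
# Illustrative numbers only -- not from [Caleb]'s build.
layer_height_mm = 0.2      # printed layer height (assumed)
frame_thickness_mm = 1.0   # thickness of each "layer cake" slab (assumed)
playback_fps = 30          # playback speed of the finished timelapse (assumed)

# One timelapse photo per printed layer, so each animation frame
# is held on screen for this many timelapse shots:
layers_per_frame = frame_thickness_mm / layer_height_mm   # 5 layers

seconds_per_frame = layers_per_frame / playback_fps
print(f"{layers_per_frame:.0f} layers/frame -> "
      f"{1 / seconds_per_frame:.0f} animation frames per second on screen")
```

Thicker slabs mean fewer frames per centimetre of print, but each frame lingers longer in the final timelapse.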

His first, experimental piece is the classic ‘bouncing ball’ animation inside a ball, and his mature piece is Eadweard Muybridge’s “The Horse in Motion” inside a movie camera.

We’ve covered [Caleb Kraft] before, of course. His Moon On A Budget piece is wonderful. And we’ve covered a number of 3D printer animations and 3D zoetropes. We were particularly drawn to this one.

Thanks [jmc] for the tip!

Continue reading “Flip Book Animations On The Inside Of 3D Prints”

Designing For The Small Grey Screen

With the huge popularity of retrocomputing and of cyberdecks, we have seen a variety of projects that use a modern computer such as a Raspberry Pi bathed in the glorious glow of a CRT being used as a monitor. The right aesthetic is easily achieved this way, but there’s more to using a CRT display than simply thinking about its resolution. A black-and-white CRT or a vintage TV in particular has some limitations, inherent to how it operates, that call for attention to the design of what is displayed upon it. [Jordan “Ploogle” Carroll] has taken a look at this subject, using a 1975 Zenith portable TV as an example.

The first difference between a flat panel and a CRT is that the CRT, except in a few cases, has a curved surface and rounded corners, and the edges of the scanned area protrude beyond the edges of the screen. Thus the usable display area is less than the total display area, meaning that the action has to be concentrated away from the edges. Then there is the effect of a monochrome display on colour choice: the luminance contrast between adjacent colours must be considered alongside the colour contrast. And finally there’s the restricted bandwidth of a CRT display, particularly when it is fed via an RF antenna socket, which limits how much detail it can reasonably convey. The examples used are games, and it’s noticeable how well Nintendo’s design language works with this display. We can’t imagine Nintendo games being tested on black-and-white TV sets in 2022, so perhaps this is indicative of attention paid to designing for accessibility.
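To put the luminance point in concrete terms, here’s a quick sketch using the standard ITU-R BT.601 luma weights; the two colour values are our own invented example:

```python
def luma(r, g, b):
    """Approximate brightness on a monochrome set (ITU-R BT.601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Two colours that contrast strongly on a colour display...
red = (220, 40, 60)
green = (40, 160, 90)

# ...land close together in luminance, so on a black-and-white CRT
# they render as nearly the same shade of grey.
print(luma(*red))    # ~96
print(luma(*green))  # ~116
```

Any palette aimed at a monochrome screen wants adjacent elements separated in luma, not just in hue.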

While CRTs require a bit of respect due to the presence of dangerous voltages, there’s a lot of fun to be had bringing one into 2022. Get one while you still can, and maybe you could have a go at a retro cyberdeck.

Twitch And Blink Your Way Through Typing With This Facial Keyboard

For those that haven’t experienced it, the early days of parenthood are challenging, to say the least. Trying to get anything accomplished with a raging case of sleep deprivation is hard enough, but the little bundle of joy who always seems to need to be in physical contact with you makes doing things with your hands nigh impossible. What’s the new parent to do when it comes time to be gainfully employed?

Finding himself in such a boat, [Fletcher]’s solution was to build a face-activated keyboard to work around his offspring’s needs. Before you ask: no, voice recognition software wouldn’t work, at least according to the sleepy little boss who protests noisy awakenings. The solution instead was to first try OpenCV and the dlib facial recognition library to watch [Fletcher] blinking out Morse code. While that sorta-kinda worked, one’s blinkers can’t long endure such a workout, so he moved on to an easier set of gestures. Mouthing Morse code covers most of the keyboard, while a combination of eye, eyebrow, and other facial twitches and tics covers the rest, with MediaPipe’s Face Mesh doing the heavy lifting in terms of landmark detection.
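We haven’t seen [Fletcher]’s source, but a minimal blink detector along these lines, using MediaPipe Face Mesh and the usual eye-aspect-ratio trick, might look something like this (the landmark indices and threshold are common choices, not anything from CheekyKeys):

```python
# Minimal blink-detection sketch -- not [Fletcher]'s actual code.
import cv2
import mediapipe as mp

LEFT_EYE = [33, 160, 158, 133, 153, 144]  # commonly-used Face Mesh eye landmarks
EAR_THRESHOLD = 0.21                      # below this, treat the eye as closed (assumed)

def eye_aspect_ratio(lm, idx, w, h):
    """Ratio of eye height to width; drops sharply when the eye closes."""
    p = [(lm[i].x * w, lm[i].y * h) for i in idx]
    vertical = abs(p[1][1] - p[5][1]) + abs(p[2][1] - p[4][1])
    horizontal = abs(p[0][0] - p[3][0])
    return vertical / (2.0 * horizontal)

cap = cv2.VideoCapture(0)
with mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True) as mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        results = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            lm = results.multi_face_landmarks[0].landmark
            if eye_aspect_ratio(lm, LEFT_EYE, w, h) < EAR_THRESHOLD:
                print("blink")  # timestamp these to distinguish dits from dahs
cap.release()
```

From there, timing the gaps between closures is all it takes to turn dits and dahs into keystrokes.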

The resulting facial keyboard, aptly dubbed “CheekyKeys,” performed well enough for [Fletcher] to use for a skills test during an interview with a Big Tech Company. Imagining the interviewer on the other end watching him convulse his way through the interview was worth the price of admission, and we don’t even care if it was a put-on. Video after the break.

CheekyKeys is pretty cool, doing something with a webcam and Python that we thought would have needed a dedicated AI depth camera to accomplish. But perhaps the real hack here was how [Fletcher] taught himself Morse in fifteen minutes.

Continue reading “Twitch And Blink Your Way Through Typing With This Facial Keyboard”

A 3D Printed 35mm Movie Camera

Making a camera can be as easy as taking a cardboard box with a bit of film and a pinhole, but making a more accomplished camera requires rather more work. A movie camera has all the engineering challenges of a regular camera, with the added complication of a continuous film transport mechanism and shutter. Too much work? Not if you are [Yuta Ikeya], whose 3D printed movie camera uses commonly-available 35 mm film stock rather than the 8 mm or 16 mm film you might expect.

3D printing might not seem to lend itself to the complex mechanism of a movie camera; however, with the tech of the 2020s in hand, he’s eschewed a complex mechanism in favour of an Arduino and a pair of motors. The camera is hardly petite, but it’s still small enough to carry comfortably on a shoulder. The film must be loaded into a pair of cassettes, which are pleasingly designed to be reversible, with each able to serve as either take-up or dispensing spool.
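The design files aren’t public, so the sketch below is purely our guess at the control flow, written in Python for readability rather than the Arduino C++ the real camera presumably runs. The step counts and timings are invented:

```python
# Guesswork film-transport loop; not [Yuta Ikeya]'s firmware.
import time

STEPS_PER_FRAME = 200   # stepper steps to pull one frame past the gate (assumed)
FPS = 16                # target frame rate (assumed)

def advance_film(steps):
    """Placeholder: pulse the transport stepper motor."""

def fire_shutter(exposure_s):
    """Placeholder: open the shutter, wait, close it."""
    time.sleep(exposure_s)

frame_period = 1.0 / FPS
while True:
    t0 = time.monotonic()
    advance_film(STEPS_PER_FRAME)      # pull-down: move film while the shutter is closed
    fire_shutter(frame_period * 0.5)   # expose for roughly half the frame period
    # pad out the cycle so the frame rate stays steady
    time.sleep(max(0.0, frame_period - (time.monotonic() - t0)))
```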

The resulting images have an extreme wide-screen format and a pleasing artistic feel. Looking at them we’re guessing there may be a light leak or two, but it’s fair to say that they enhance the quality rather than detract from it. Those of us who dabble in movie cameras can be forgiven for feeling rather envious.

We’ve reached out to him to ask whether the files might one day be made available; meanwhile, you can see the camera in action in the video below the break.

Continue reading “A 3D Printed 35mm Movie Camera”

How Did Dolby Digital Sound Work On Film?

When we go to the cinema and see a film in 2022, it’s very unlikely that what we’re seeing will in fact be a film. Instead of large reels of transparent film fed through a projector, we’ll be watching the output of a high-quality digital projector. The advantages for the cinema industry in terms of easier distribution and consistent quality are obvious. There was a period in the 1990s though when theatres still had film projectors, but digital technology was starting to edge in for the sound. [Nava Whiteford] has found some 35mm trailer film from the 1990s, and analysed the Dolby Digital sound information from it.

The film is an interesting exercise in backward compatibility, with every part of it outside the picture used to encode information. There are the analogue sound track and two digital formats, but what we’re interested in are the Dolby Digital packets. These are encoded as patterns, superficially similar to a QR code, in the space between the sprocket holes.

Looking at the patent, he found that they were using Reed-Solomon error correction, making it relatively easy to decode. The patent makes for fascinating reading, as it details how the data was read using early-1990s technology, with each line being scanned by a linear CCD, before detailing the signal processing steps followed to retrieve the audio data. If you remember your first experience of Dolby cinema sound three decades ago, now you know how the system worked.
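The block parameters Dolby used are in the patent; for a taste of how forgiving Reed-Solomon coding is, here’s a toy round trip using the reedsolo Python library, with a parity count of our own choosing:

```python
# Toy Reed-Solomon round trip (pip install reedsolo).
# The symbol counts are illustrative, not Dolby's actual parameters.
from reedsolo import RSCodec

rsc = RSCodec(10)                  # 10 parity bytes: corrects up to 5 corrupted bytes
block = rsc.encode(b"dolby digital block")

damaged = bytearray(block)
damaged[0] ^= 0xFF                 # simulate dirt and scratches on the film
damaged[7] ^= 0xFF
damaged[12] ^= 0xFF

decoded = rsc.decode(bytes(damaged))[0]  # recent reedsolo returns (msg, msg+ecc, errata)
print(decoded)                           # bytearray(b'dolby digital block')
```

A scratched or dirty print corrupts a few symbols per block, and the decoder shrugs it off, which is exactly what you want from data squeezed between sprocket holes.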

The film featured also had an analogue soundtrack, and if you’d like to know how those worked, we’ve got you covered!

It’s Bad Apple, But On A 32K EPROM

The Bad Apple!! video, with its silhouette animation style, has long been a staple graphics demo for low-end hardware, a more stylish alternative to the question “Will it run DOOM?”. It’s normal for it to be rendered onto a screen by a small microcomputer or similar, but as [Ian Ward] demonstrates in an unusual project, it’s possible to display the video without any processor being involved. Instead, he’s used a clever arrangement involving a 32K byte EPROM driving an HD44780-compatible parallel alphanumeric LCD display.

While 32K bytes would have seemed enormous back in the days of 8-bit computing, it’s still something of a struggle to fit the required graphics characters, even when driving only a small section of an alphanumeric LCD. The feat is achieved by the use of a second EPROM, which carries a look-up table.
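We haven’t seen [Ian]’s ROM-building scripts, but the general shape of the trick, quantising each frame’s character cells onto the eight custom glyphs an HD44780 can hold and storing one glyph index per cell, might be sketched like this (all sizes here are assumptions; at, say, 16 character cells per frame, 32K bytes holds 2048 frames):

```python
# Our reconstruction of the general idea, not [Ian]'s actual scripts.
# An HD44780 offers just 8 custom 5x8 glyphs, so every cell of every
# frame must be mapped onto that tiny palette.

def cell_distance(a, b):
    """Count differing pixels between two 5x8 cells (8 row-bytes each)."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def quantise_cell(cell, palette):
    """Index of the custom glyph that best matches a 5x8 pixel cell."""
    return min(range(len(palette)), key=lambda i: cell_distance(cell, palette[i]))

def build_rom(frames, palette):
    """One byte (a glyph index) per character cell, frame after frame."""
    rom = bytearray()
    for frame in frames:       # frame: list of cells, each a tuple of 8 row-bytes
        for cell in frame:
            rom.append(quantise_cell(cell, palette))
    return rom
```

Presumably a counter then clocks through the ROM addresses in place of a processor, with the second EPROM translating the stored indices into the bytes the display expects.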

It’s fair to say that the result, which can be seen in the video below the break, isn’t the most accomplished rendition of Bad Apple!! that we’ve seen, but given the rudimentary hardware upon which it’s playing, we think that shouldn’t matter. Why didn’t we think of doing this in 1988?

Continue reading “It’s Bad Apple, But On A 32K EPROM”

Invisible 3D Printed Codes Make Objects Interactive

An interesting research project out of MIT shows that it’s possible to embed machine-readable labels into 3D printed objects using nothing more than an FDM printer and filament that is transparent to IR. The method is being called InfraredTags; by embedding something like a QR code or ArUco markers into an object’s structure, that label can be detected by a camera and interactive possibilities open up.

One simple proof of concept is a wireless router with its SSID embedded into the side of the device, and the password embedded into a different code on the bottom to ensure that physical access is required to obtain the password. Mundane objects can have metadata embedded into them, or provide markers for augmented reality functionality, like tracking the object in 3D.

How are the codes actually embedded? The process is straightforward with the right tools. The team used a specialty filament from vendor 3dk.berlin that looks nearly opaque in the visible spectrum, but transmits roughly 45% of IR light. The machine-readable label gets embedded within the walls of a printed object either by using a combination of IR PLA and air gaps to represent the geometry of the code, or by making a multi-material print using IR PLA and regular (non-IR-transmitting) PLA. Both provide enough contrast for an IR-sensitive camera to detect the label, although the multi-material version works a little better overall. Sadly, the average mobile phone camera by itself isn’t sufficiently IR-sensitive to passively read these embedded tags, so the researchers used easily-available cameras with no IR-blocking filter, like the Raspberry Pi NoIR.
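Once the tag sits in front of an IR-sensitive camera, reading it is standard marker detection. A minimal sketch with OpenCV’s aruco module (4.7+ API) follows; the dictionary choice is our assumption, and the paper’s pipeline likely does more preprocessing than this:

```python
# Sketch of the detection step only; the dictionary choice is assumed.
import cv2

frame = cv2.imread("noir_capture.png")   # a frame grabbed from a Pi NoIR camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)            # stretch the faint IR contrast

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, _ = detector.detectMarkers(gray)

if ids is not None:
    print("found markers:", ids.ravel())  # e.g. the router's embedded SSID tag
```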

The PDF has deeper details of the implementation for those of you who want to know more, and you can see a demonstration of a few different applications in the video, embedded below. Determining the provenance of 3D printed objects is a topic of some debate in the industry, and it’s not hard to see how technology like this could be used to covertly identify objects without compromising their appearance.

Continue reading “Invisible 3D Printed Codes Make Objects Interactive”