Archiving Data On Paper Using 2D Images

It seems like only yesterday we covered a project using QR codes to archive data on paper (OK, it was last Thursday), so here’s another way to do it, this time with a dedicated codec using the full page. Optar, or OPTical ARchiver, is a project capable of squeezing a whopping 200 kB of data onto a single A4 sheet of paper, written with a standard laser printer and read back with a standard scanner. Getting that much data onto a page is harder than you might think, given that even a 600 DPI printer can’t reliably place every dot every time. Additionally, paper is rarely uniform at the microscopic scale, so Optar uses a forward error-correcting coding scheme to cater for a little irregularity in both printing and scanning.
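A quick back-of-envelope check (our arithmetic, using the rate-1/2 code described below, not a figure from the Optar documentation) shows why that dot budget is tight:

```latex
\underbrace{(8.27 \times 600)(11.69 \times 600)}_{\text{dots on an A4 page at 600 DPI}} \approx 3.5 \times 10^{7}
\qquad
\underbrace{200 \times 10^{3} \times 8 \times 2}_{\text{channel bits for 200 kB at rate } 1/2} \approx 3.2 \times 10^{6}
```

That leaves only around ten printer dots per bit on the page, before the border and calibration marks described below take their share.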

The error-correcting scheme selected is an extended Golay code (24, 12, 8), which, interestingly, was also used for image transmission on NASA’s Voyager 1 and 2 missions. In information-theory terms, this scheme has a minimum Hamming distance of 8, allowing it to detect up to seven bit errors in a block. The implementation can correct up to three bit errors in each 24-bit block, with 12 bits available for payload. That’s what the numbers in those brackets mean.
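In general, a block code with minimum distance d can detect up to d - 1 errors and correct up to floor((d - 1) / 2) of them, so for the extended Golay code:

```latex
(n, k, d) = (24, 12, 8):
\qquad d - 1 = 7 \text{ errors detected},
\qquad \left\lfloor \tfrac{d - 1}{2} \right\rfloor = 3 \text{ errors corrected}
```

The code rate is k/n = 12/24, meaning exactly half of every block on the page is payload.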

Another interesting problem is paper distortion during printing. A laser printer works by feeding the paper around rollers, some of which are heated. As a printer wears or gets dirty, the friction along the rollers can vary, twisting and stretching the paper as it passes through; water absorbed by the paper can also distort it. To compensate for these effects, Optar inserts calibration targets at regular intervals throughout the bit image, which are used to locally resynchronize the decoding process as the image is processed. This is roughly similar to how the alignment patterns work within larger QR codes. Finally, much like the position detection targets (those square bits) in QR codes, Optar draws a two-pixel-wide border around the bit image, which lets the decoder find the corners accurately enough to locate the rows of bits to be decoded.
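Optar itself is written in C, and its decoder isn’t reproduced here, but the resynchronization idea is easy to illustrate: search a small window around the position where a calibration mark should be, and use the offset of the best match to correct nearby bit coordinates. A minimal sketch using generic OpenCV template matching (our illustration of the idea, with made-up inputs, not Optar’s actual method):

```python
import cv2

def refine_mark(scan, template, expected_xy, search=20):
    """Locate a calibration mark near its expected position.

    scan:        greyscale page scan (2D uint8 array)
    template:    small greyscale image of the calibration mark
    expected_xy: (x, y) of the mark if the paper were perfectly flat
    search:      how far, in pixels, stretch or twist may have moved it
    """
    ex, ey = expected_xy
    th, tw = template.shape
    # Cut a window around the expected position, padded by the search radius
    window = scan[ey - search:ey + th + search, ex - search:ex + tw + search]
    # Normalized cross-correlation tolerates local brightness variations
    scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (bx, by) = cv2.minMaxLoc(scores)
    # How far the mark drifted from where it should be; a decoder would
    # apply this offset to bit positions in the surrounding region
    return (bx - search, by - search)
```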

In the distant past of last week, we covered a similar project that uses QR codes. That got us thinking about how QR codes work, and whether their encoding capacity could be increased by using more colors than just black and white.

Thanks to [Petr] for the tip!

Create Custom Gridfinity Boxes Using Images Of Tools

Exhibit A: A standard-issue banana.

We love it when a community grabs hold of an idea and runs wild with it despite obvious practicality issues. Gridfinity, by YouTuber [Zack Freedman], is one of those concepts. For the unaware, this is a simple storage system standard, defining boxes to hold your things. These boxes can be stacked and held in place in anything from a desk drawer to hanging off the side of a 3D printer. [Georgs Lazdāns] is one such Gridfinity user who wanted to create tool-specific holders without leaving the sofa. To do so, they made a web application using node.js and OpenCV to extract outlines of tools (or anything else) photographed on a blank sheet of paper.

The OpenCV stack assumes that the object to be profiled will be placed on a uniformly colored paper with all parts of its outline visible. The first part of the stack uses a bilateral filter to denoise the image whilst keeping edge details.

Make a base, then add a banana. Easy!

Next, the image is converted to greyscale, blurred, and run through an adaptive threshold, which reduces it to monochrome while again preserving edge details. Finally, the Canny algorithm pulls out the paper contour. With the paper contour found and the paper size specified, the outline can be given an accurate real-world scale. The second part of the process works the same way to extract the object outline, which should follow the object pretty accurately; if it doesn’t, it can be tweaked manually in the editor. Once a contour is captured, it can be used to modify a blank Gridfinity base in the model editor.
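The app itself is built on node.js, but the pipeline described maps almost one-to-one onto OpenCV’s Python bindings. Here’s a rough sketch of the paper-detection half (the filter parameters and the A4 sheet are our assumptions, and this is not [Georgs]’s actual code):

```python
import cv2

img = cv2.imread("tool_on_paper.jpg")

# 1. Bilateral filter: denoise while keeping edges sharp
smooth = cv2.bilateralFilter(img, 9, 75, 75)

# 2. Greyscale, blur, adaptive threshold: a clean monochrome image
grey = cv2.cvtColor(smooth, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(grey, (5, 5), 0)
mono = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY, 11, 2)

# 3. Canny edges, then take the largest contour as the sheet of paper
edges = cv2.Canny(mono, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
paper = max(contours, key=cv2.contourArea)

# Knowing the real paper width (A4: 210 mm) fixes the scale, assuming
# the sheet sits roughly square to the camera
_, _, w_px, _ = cv2.boundingRect(paper)
print(f"Scale: {210.0 / w_px:.3f} mm per pixel")
```

Running the same steps on the area inside the paper contour then yields the tool outline at a known scale.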


OpenSCAD Library Creates QR Codes On The Fly

If you’ve been reading Hackaday for a while, you’ll know we’re big fans of OpenSCAD around these parts. There are a number of reasons it’s a tool we often reach for, but certainly one of the most important is its parametric nature. Since you’re already describing the object you want to generate with code and variables, it’s easy to do things like generate an arbitrary number of cloned objects using a for loop.

There are a number of fantastic OpenSCAD libraries that explore this blurred line between code and physical objects, and one that recently caught our eye is scadqr from [xypwn]. The description says it lets you “Effortlessly generate QR codes directly in OpenSCAD”, and after playing around with it for a bit, we have to agree.


Online Game Becomes Unexpected Pixelflut

Blink and you could have missed it, but for a few weeks this summer the viral sensation was One Million Checkboxes, a web page with, as you might expect, a million checkboxes. The cool thing about it was that it was interactive: if you checked a box in your browser, everyone else viewing that box saw it become checked too. You could make pixel art with it, and have some fun. While maintaining it, its author [eieio] noticed something weird: a URL appearing in the raw pixel data. Had he been hacked? Investigation revealed something rather more awesome.

The display of checkboxes was deliberately responsive rather than fixed-width, to stop people from leaving objectionable content; any pixel arrangement would only appear as its maker intended to someone viewing with exactly the same number of checkboxes per row. But the boxes still represented a binary bitfield, so of course people saw that and had fun hacking. The URLs appeared because they had been ASCII-encoded into that binary, left on purpose as a message to the developer inviting him to a forum.
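The principle is easy to demonstrate: read each run of eight checkboxes as a byte, and interpret the bytes as ASCII. A toy decoder (our illustration of the idea; the bit order and framing the hackers actually used are their own):

```python
def boxes_to_text(checked):
    """Decode a run of checkbox states (True = checked) as 8-bit ASCII."""
    chars = []
    for i in range(0, len(checked) - 7, 8):
        byte = 0
        for bit in checked[i:i + 8]:
            byte = (byte << 1) | int(bit)  # most significant bit first
        chars.append(chr(byte))
    return "".join(chars)

# Sixteen checkboxes spelling "Hi": 0b01001000, 0b01101001
states = [0, 1, 0, 0, 1, 0, 0, 0,  0, 1, 1, 0, 1, 0, 0, 1]
print(boxes_to_text([bool(s) for s in states]))  # prints "Hi"
```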

On the forum, he found a disparate group of teen hackers who’d formed a community having fun turning the game into their own version of a Pixelflut. If you’ve not seen the game previously, imagine a screen on which all pixels are individually addressable over the internet. Place it in a hackerspace or in the bar at a hacker camp, and of course the coders present indulge in a bit of competitive pixel-spamming to create a colorful and anarchic collaborative artwork. In this case, as well as artwork, they’d encoded the forum link in several ways, and had grown a thriving underground community of younger hackers honing their craft. Like [eieio], we think this is excellent, and if any of the checkbox pixelflutters are reading this, we salute you!
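The real Pixelflut protocol could hardly be simpler: clients open a TCP connection to the server and send plain-text PX commands, one pixel per line. A minimal Python client (the server address and port are assumptions for illustration):

```python
import socket

# Address of a Pixelflut server on the local network (assumed)
HOST, PORT = "192.168.0.42", 1234

with socket.create_connection((HOST, PORT)) as sock:
    # Paint a 10 x 10 red square at (100, 100), one "PX x y RRGGBB" per pixel
    for y in range(100, 110):
        for x in range(100, 110):
            sock.sendall(f"PX {x} {y} FF0000\n".encode("ascii"))
```

Part of the fun is that everyone else is doing the same thing at once, so keeping your artwork on screen means out-spamming the competition.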

Before he eventually took the site down, he removed the rate limit for a while to let them really go to town, and, predictably, they seized the opportunity and didn’t let him down.

Some would call the activity discussed here antisocial, but we particularly agree with the final point in the piece: young hackers like this don’t need admonishment, they need encouragement, and he’s done exactly the right thing. Meanwhile, if you want to read more about Pixelflut, we’ve been there before.

Simulating Air Flow For 3D Printing

You’ve probably heard that a 3D printer is capable of producing its own replacement parts, sometimes even upgraded or improved versions of the parts it was originally built with. But it can be hard to pin down what improved really means. Think about the air ducts that cool the part as it prints. In theory, it should be easy to design a new duct, but how does it actually perform? Empirical testing can be difficult, so [Mike] shows how you can simulate the airflow, letting you test design changes and validate assumptions before you print the actual part.

Of course, this doesn’t only apply to printer ducts. You might also pick up some tips if you want to model airflow for PC cooling, hot air soldering, or other air-related projects. The free version of the software has some limitations, but it proved surprisingly capable.

We also enjoyed how [Mike] used a fluid to visualize the actual flow patterns and compared them against the simulation. The trick is a compound from a kid’s science project kit, and it seems to work very well. Of course, you could also just grab your smartphone. This might be worth thinking about if you are building a laser cutter air assist, too.


Running Stock MS-DOS On A Modern ThinkPad

It might seem like the days of MS-DOS were a lifetime ago because…well, they basically were. Version 6.22 of the venerable operating system, the last standalone release, came out back in 1994, which makes even the most recent version officially 30 years old. A lot has changed in the computing world since then, so naturally, trying to run such an ancient OS on even a halfway modern machine would be a waste of time. Right?

As it turns out, getting MS-DOS 6.22 running on a modern computer isn’t nearly as hard as you’d think. In fact, it works pretty much perfectly. Assuming, that is, you pick the right machine. [Yeo Kheng Meng] recently wrote in to share his experiments with running the final DOS release on his Intel-powered ThinkPad X13 from 2020, and the results are surprising to say the least.

To be clear, we’re not talking about some patched version of DOS here, and there’s no emulator at work either. Granted, [Yeo] did embrace a few modern conveniences, such as using a USB floppy drive emulator to load the disk images instead of fiddling with actual floppies, and installing DOS onto an external drive so as not to clobber the actual OS on the internal NVMe drive. But other than that, the installation of DOS on the ThinkPad went along just as it would have in the 1990s.


Meta Doesn’t Allow Camera Access On VR Headsets, So Here’s A Workaround

The cameras at the front of Meta’s Quest VR headsets are off-limits to developers, but developer [Michael Gschwandtner] created a workaround (LinkedIn post) and shared the implementation details with a VR news site.

The view isn’t a pure camera feed (it includes virtual and UI elements) but it’s a clever workaround.

The demo shows object detection via MobileNet V2, which we’ve seen used for machine vision on embedded systems like the Raspberry Pi. In this case it is running locally on the VR headset, automatically identifying objects even though the app cannot directly access the front-facing cameras to see what’s in front of it.
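For a feel of what that class of model does, here’s single-image detection with an SSD MobileNet V2 network from TensorFlow Hub. To be clear, this is a desktop-side sketch under our own assumptions (model choice, file names); [Gschwandtner]’s app runs its model on the headset itself, and we haven’t seen his code:

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import cv2

# SSD detector with a MobileNet V2 backbone, pretrained on COCO
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

# Load one frame; the model wants a [1, height, width, 3] uint8 RGB tensor
frame = cv2.cvtColor(cv2.imread("passthrough_frame.jpg"), cv2.COLOR_BGR2RGB)
batch = tf.convert_to_tensor(frame[np.newaxis, ...], dtype=tf.uint8)

result = detector(batch)
scores = result["detection_scores"][0].numpy()
classes = result["detection_classes"][0].numpy().astype(int)

# Report anything the model is reasonably sure about
for cls, score in zip(classes, scores):
    if score > 0.5:
        print(f"COCO class {cls}: confidence {score:.2f}")
```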

The workaround is conceptually simple: it leverages the headset’s ability to cast its video feed over Wi-Fi to other devices, a feature normally used to share and spectate VR gameplay.

First, [Gschwandtner]’s app enables passthrough video, so the camera feed from the front of the headset becomes the background of the VR scene, creating a mixed-reality environment. The app then spawns a Chromium browser on the headset and casts its own video feed to that browser. It is this video that is used, in a roundabout way, to access what the cameras see.

The resulting view isn’t taken directly from the cameras; it’s more akin to snapshotting the through-the-headset view, which means it contains virtual elements like the UI. Still, with passthrough turned on, it’s a pretty clever workaround that runs entirely on-device.

Meta has been hesitant to give developers direct access to camera views on its VR headsets, and while John Carmack (former Meta consulting CTO) thinks doing so is worthwhile and can be done safely, it hasn’t happened yet.