Improved 3D Scanning Rig Adds Full-Sized Camera Support

There are plenty of reasons to pick up or build a 3D scanner. Modeling for animation or special effects, reverse engineering or designing devices and products, and working with fabrics and clothing all fall within the wide range of uses for these tools. [Vojislav] built one a few years ago that used an array of Pi camera modules to capture 3D information, but those camera modules limited the scanner’s capabilities in some ways. [Vojislav]’s latest 3D scanner takes a completely different approach, using a single high-quality camera instead.

The new 3D scanner is built to carry a full-size DSLR camera, its lens, and a light. Much like a 3D printer, the platform moves the camera around the object in programmable steps to capture the desired 3D scan. The object being scanned sits on a rotating plate as well, so the entire object can be captured without sweeping the camera through a full 180° in two axes. The scanner can also handle flatter, essentially two-dimensional objects such as textiles, while still capturing information about their texture.

For anyone looking to reproduce something like this, [Vojislav] has made all of the plans for the build available on the project’s GitHub page, including some sample gcode to demonstrate the scanner’s intended use. On the other hand, if you’re short on the considerable funds a DSLR camera demands, his older 3D scanner is still worth a look.

Continue reading “Improved 3D Scanning Rig Adds Full-Sized Camera Support”

This Week In Security: TunnelVision, Scarecrows, And Poutine

There’s a clever “new” attack against VPNs, called TunnelVision, from researchers at Leviathan Security. To explain why we put “new” in quotation marks, I’ll just share my note-to-self on this one, written before reading the write-up: “Doesn’t using a more specific DHCP route do this already?” And indeed, that’s the secret here: in routing, the more specific route wins. I could not have told you that DHCP option 121 is used to set extra static routes, so that part was new to me. So let’s break this down a bit, for those who haven’t spent the last 20 years thinking about DHCP, networking, and VPNs.

So up first, a route is a collection of values that instruct your computer how to reach a given IP address, and the set of routes on a computer is the routing table. On one of my machines, the (slightly simplified) routing table looks like:

# ip route
default via 10.0.1.1 dev eth0
10.0.1.0/24 dev eth0

The first line there is the default route, where “default” is shorthand for 0.0.0.0/0. That indicates a network using Classless Inter-Domain Routing (CIDR) notation. When the Internet was first developed, it was segmented into networks using network classes A, B, and C. The problem there was that the world was limited to just over 2.1 million networks on the Internet, which has since proven to be not nearly enough. CIDR came along, eliminated the classes, and gave us subnets instead.

In CIDR notation, the value after the slash is commonly called the netmask, and indicates the number of bits that are dedicated to the network identifier, and how many bits are dedicated to the address on the network. Put more simply, the bigger the number after the slash, the fewer usable IP addresses on the network. In the context of a route, the IP address here is going to refer to a network identifier, and the whole CIDR string identifies that network and its size.
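
If you want to poke at CIDR notation yourself, Python’s standard ipaddress module makes the prefix-length math concrete. This snippet is purely illustrative, and not anything from the write-up:

import ipaddress

# The number after the slash fixes how big a network is: 32 total bits,
# minus the prefix bits, are left over for addresses on that network.
for prefix in (0, 1, 2, 24):
    net = ipaddress.ip_network(f"10.0.1.0/{prefix}", strict=False)
    print(f"{net}  ->  {net.num_addresses} addresses")

# Membership check: is a given address inside a given network?
print(ipaddress.ip_address("10.0.1.42") in ipaddress.ip_network("10.0.1.0/24"))  # True
print(ipaddress.ip_address("8.8.8.8") in ipaddress.ip_network("10.0.1.0/24"))    # False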

Back to my routing table, the two routes are a bit different. The first one uses the “via” term to indicate we use a gateway to reach the indicated network. That doesn’t make any sense on its own, as the only route covering the 10.0.1.1 gateway address would be that same 0.0.0.0/0 network. The second route saves the day, indicating that the 10.0.1.0/24 network is directly reachable out the eth0 device. This works because the more specific route, the one with the bigger netmask value, takes precedence.

The next piece to understand is DHCP, the Dynamic Host Configuration Protocol. That’s the way most machines get an IP address from the local network. DHCP not only assigns IP addresses, but it also sets additional information via numeric options. Option 1 is the subnet mask, option 6 advertises DNS servers, and option 3 sets the local router IP. That router is then generally used to construct the default route on the connecting machine — 0.0.0.0/0 via router_IP.

Remember the problem with the gateway IP address belonging to the default network? There’s a similar issue with VPNs. If you want all traffic to flow over the VPN device, tun0, how does the VPN traffic get routed across the Internet to the VPN server? And how does the VPN deal with the existence of the default route set by DHCP? By leaving those routes in place, and adding more specific routes. That’s usually 0.0.0.0/1 and 128.0.0.0/1, neatly slicing the entire Internet into two networks, and routing both through the VPN. These routes are more specific than the default route, but leave the router-provided routes in place to keep the VPN itself online.
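
The “most specific route wins” rule, and the VPN’s /1 trick, can be sketched in a few lines of Python. This is a toy version of the lookup, not how a real kernel does it, and the interface names are just the hypothetical ones used above:

import ipaddress

# Toy routing table: the DHCP-provided routes plus the two /1 halves a VPN
# typically adds. Format: (destination network, where matching traffic goes).
routes = [
    (ipaddress.ip_network("0.0.0.0/0"),   "via 10.0.1.1 dev eth0"),  # default route
    (ipaddress.ip_network("10.0.1.0/24"), "dev eth0"),               # local LAN
    (ipaddress.ip_network("0.0.0.0/1"),   "dev tun0"),               # lower half of the Internet
    (ipaddress.ip_network("128.0.0.0/1"), "dev tun0"),               # upper half of the Internet
]

def lookup(dest):
    """Of every route that contains dest, the longest prefix (most specific) wins."""
    addr = ipaddress.ip_address(dest)
    net, hop = max(((n, h) for n, h in routes if addr in n),
                   key=lambda r: r[0].prefixlen)
    return f"{dest} -> {net} ({hop})"

print(lookup("8.8.8.8"))    # a /1 beats the /0 default, so Internet traffic rides the VPN
print(lookup("10.0.1.42"))  # the /24 is more specific still, so LAN traffic stays on eth0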

And now enter TunnelVision. The key here is DHCP option 121, which sets additional CIDR notation routes. The very same trick a VPN uses to override the network’s default route can be used against it. Yep, DHCP can simply inform a client that networks 0.0.0.0/2, 64.0.0.0/2, 128.0.0.0/2, and 192.0.0.0/2 are routed through malicious_IP. You’d see it if you actually checked your routing table, but how often does anybody do that, when not working a problem?
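
For the curious, option 121 packs its routes using the RFC 3442 encoding: one prefix-length byte, only as many destination octets as that prefix actually needs, then a four-byte router address, repeated for each route. Here’s a rough decoder sketch, fed a made-up malicious payload along the lines described above:

import ipaddress

def parse_option_121(data):
    """Decode DHCP option 121 (RFC 3442 classless static routes) into (network, router) pairs."""
    i, routes = 0, []
    while i < len(data):
        prefixlen = data[i]
        nbytes = (prefixlen + 7) // 8                        # significant destination octets
        dest = data[i + 1:i + 1 + nbytes].ljust(4, b"\x00")  # pad back out to a full address
        router = data[i + 1 + nbytes:i + 5 + nbytes]
        routes.append((ipaddress.ip_network(f"{ipaddress.IPv4Address(dest)}/{prefixlen}"),
                       ipaddress.IPv4Address(router)))
        i += 1 + nbytes + 4
    return routes

# Hypothetical attack payload: four /2 routes covering all of IPv4, every one
# pointing at an attacker-controlled gateway at 192.0.2.66.
payload = bytes([2, 0,   192, 0, 2, 66,
                 2, 64,  192, 0, 2, 66,
                 2, 128, 192, 0, 2, 66,
                 2, 192, 192, 0, 2, 66])
for net, gw in parse_option_121(payload):
    print(f"{net} via {gw}")

Decoded, that single option yields four routes that are each more specific than the VPN’s /1 pair, quietly pulling traffic away from the tunnel.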

There is a CVE assigned, CVE-2024-3661, but there’s an interesting question raised: Is this a vulnerability, and in which component? And what’s the right solution? To the first question, everything is basically working the way it is supposed to. The flaw is that some VPNs make the assumption that a /1 route is a bulletproof way to override the default route. The solution is a bit trickier. Continue reading “This Week In Security: TunnelVision, Scarecrows, And Poutine”


AI Camera Only Takes Nudes

One of the cringier aspects of AI as we know it today has been the proliferation of deepfake technology to make nude photos of anyone you want. What if you took away the abstraction and put the faker and subject in the same space? That’s the question the NUCA camera was designed to explore. [via 404 Media]

[Mathias Vef] and [Benedikt Groß] designed the NUCA camera “with the intention of critiquing the current trajectory of AI image generation.” The camera itself is a fairly unassuming device, a 3D-printed digital camera (19.5 × 6 × 1.5 cm) with a 37 mm lens. When the camera shutter button is pressed, a nude image is generated of the subject.

The final image is generated using a mixture of the picture taken of the subject, pose data, and facial landmarks. The photo is run through a classifier which identifies features such as age, gender, body type, etc. and then uses those to generate a text prompt for Stable Diffusion. The original face of the subject is then stitched onto the nude image and aligned with the estimated pose. Many of the sample images on the project’s website show the bias toward certain beauty ideals from AI datasets.

Looking for more ways to use AI with cameras? How about this one that uses GPS to imagine a scene instead? Prefer to keep AI out of your endeavors to invade personal space? How about building your own TSA body scanner?


Amazon’s ‘Just Walk Out’ Shopping Is Out, Moves To Dash Carts At Its Grocery Stores

After a few years of promoting a grocery shopping experience free of checkout lines and frustrating self-checkout, Amazon is now ditching its Just Walk Out technology. Conceptualized as a store where you can walk in, grab the items you need, and walk out with said items automatically charged to your registered payment method, it never really gained much traction. More recently it was revealed that the technology wasn’t even as automated as portrayed, with human workers handling much of the tedium behind the scenes, despite Amazon’s claims that it was all powered by deep machine learning and generative AI.

An Amazon Dash Cart’s user interface, with scanner and display. (Credit: Amazon)

Instead of plastering store ceilings with cameras, it seems that Amazon now wants to focus on smart shopping carts that keep track of what has been put inside them. These so-called Dash Carts are equipped with cameras and other sensors to scan barcodes on items and weigh unlabeled ones (like fruit), making them something of a cross between the scales found in the fruit and vegetable sections of stores today and the scanning tools some grocery stores offer to help with self-checkout.

As the main problem with the Just Walk Out technology was that it required constant human intervention (700 out of 1,000 sales in 2022), it will be interesting to see whether the return to a more traditional self-service and self-checkout model (albeit with special Dash Lanes) will speed things along. Even so, as Gizmodo notes, Amazon will still keep the Just Walk Out technology running at locations in the UK and elsewhere. Either the tech isn’t fully dead yet, or we will see a revival at some point.

The Long Strange Trip To US Color TV

We are always fascinated when someone can take something and extend it in a clever way without changing the original thing. In the computer world, that’s old hat. New computers improve, but can usually run old software. In the real world, the addition of stereo to phonograph records and color to photography come to mind.

But there are few stories as strange or wide-ranging as the path to color TV. It had to be done in a way that let a color set still show a black and white broadcast, and let black and white sets still display a color signal, just without the color. You’d think there would be a “big bang” moment where color TV burst onto the scene — no pun involving color burst intended. But there wasn’t. Instead, there was a long, twisted path, with many competing interests and ideas, from a world in black and white to one tinted with color phosphor.

Background

In 1928, Science and Invention magazine had plans for building a mechanical TV (although not color)

It is hard to imagine, but John Logie Baird transmitted color images as early as 1928 using a mechanical scanner. Bell Labs had a demonstration system, also mechanical, in 1929. Baird broadcast using his system in 1938. Even earlier, around 1900, there were attempts to create mechanical color image systems. Those systems were fickle or impractical, though.

Electronic scanning was the answer, but World War II froze most consumer electronics development. Baird showed an electronic color system in late 1944. However, it would be 1953 before NTSC (the National Television System Committee) adopted the standard color TV signal for the United States. SECAM and PAL wouldn’t be standardized in other parts of the world until almost 20 years later.

Of course, these are all analog standards. The world’s gone digital now, but for nearly 50 years, analog color TV was the way people consumed TV in their homes. By 1941, NTSC produced a standard in the United States, but not for color TV. TV adoption didn’t really take off until after the war. But by 1950, the US had some 6 million TV sets.

This was both a plus — a large market — and a negative. No one wanted to obsolete those 6 million sets. Well, at least, the government regulators and consumers didn’t. But most color systems would be incompatible with those existing black and white sets. Continue reading “The Long Strange Trip To US Color TV”

Celebrating Pi Day With A Ghostly Calculator

For the last few years, [Cristiano Monteiro] has marked March 14th by building a device to calculate Pi. This year, he’s combined an RP2040 development board and a beam-splitting prism to create an otherworldly numerical display inspired by the classic Pepper’s Ghost illusion.

The build is straightforward thanks to the Cookie board from Melopero Electronics, which pairs the RP2040 with a 5×5 matrix of addressable RGB LEDs. Since [Cristiano] only needed 4×5 LED “pixels” to display the digits 0 through 9, this left him with an unused vertical column on the right side of the array. Looking to add a visually interesting progress indicator for when the RP2040 is really wracking its silicon brain for the next digit of Pi, he used it to show a red Larson scanner in honor of Battlestar Galactica.
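
[Cristiano]’s actual MicroPython code handles the digit math and the display; purely as a sketch of what driving that spare column might look like, here’s a minimal Larson scanner, assuming the 5×5 matrix is a chain of WS2812-style addressable LEDs on a single GPIO (the pin number and the row-major LED ordering are guesses, not details from his code):

# MicroPython sketch: bounce a red "eye" up and down the matrix's spare
# right-hand column, leaving the 4x5 area free for digit glyphs.
import time
import neopixel
from machine import Pin

MATRIX_PIN = 19  # hypothetical GPIO driving the 5x5 matrix
np = neopixel.NeoPixel(Pin(MATRIX_PIN), 25)

def set_pixel(row, col, color):
    np[row * 5 + col] = color  # assumes row-major wiring of the matrix

def larson_step(step):
    positions = [0, 1, 2, 3, 4, 3, 2, 1]  # triangle wave over the five rows
    for row in range(5):
        set_pixel(row, 4, (0, 0, 0))      # clear the rightmost column
    set_pixel(positions[step % len(positions)], 4, (32, 0, 0))  # dim red eye
    np.write()

step = 0
while True:
    larson_step(step)
    step += 1
    time.sleep_ms(80)  # sweep speed; the real build ties progress to the Pi calculation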

With the MicroPython code written to calculate Pi and display each digit on the array, all it took to complete the illusion was the addition of a glass prism, held directly over the LED array thanks to a 3D-printed mounting plate. When the observer looks through the prism, they’ll see the reflection of the display seemingly floating in mid-air, superimposed over whatever’s behind the glass. It’s a bit like how the Heads Up Display (HUD) works on a fighter jet (or sufficiently fancy car).

Compared to his 2023 entry, which used common seven-segment LED displays to show off its fresh-baked digits of Pi, we think this new build definitely pulls ahead in terms of visual flair. However, if we had to pick just one of [Cristiano]’s devices to grace our desk, it would still have to be his portable GPS time server.

Continue reading “Celebrating Pi Day With A Ghostly Calculator”


Keebin’ With Kristina: The One With The Pocket Cyberdeck

When you find something you love doing, you want to do it everywhere, all the time. Such is the case with [jefmer] and programming. The trouble is, there is not a single laptop or tablet out there that really deals well with direct sunlight. So, what’s a hacker to do during the day? Stay indoors and suffer?

Image by [jefmer] via Hackaday.IO
The answer is a project like Pocket Pad. This purpose-built PDA uses a Nice! Nano and a pair of very low-power ST7302-driven monochrome displays. They have no backlight, but they update much faster than e-paper displays. According to [jefmer], the brighter the ambient light, the more readable the displays become. What more could you want? (Besides a backlight?)

The miniature PocketType 40% is a little small for touch typing, but works well for thumbs. [jefmer] added those nice vinyl transfer legends and sealed them with clear nail polish.

All of the software, including the keyboard scanner, is written in Espruino, which is an implementation of JavaScript that targets embedded devices. Since it’s an interpreted language, [jefmer] can both write and execute programs directly on the Pocket Pad, using the bottom screen for the REPL. I’d sure like to have one of these in my pocket!
Continue reading “Keebin’ With Kristina: The One With The Pocket Cyberdeck”