Power Supply With Benchtop Features Fits In Your Pocket

[CentyLab]’s PocketPD isn’t just adorably tiny — it also boasts some pretty useful features. It offers a lightweight way to get a precisely adjustable 0 to 20 V output at up to 5 A on banana jacks, integrating a rotary encoder and OLED display for ease of use.

PocketPD leverages USB-C Power Delivery (PD), a technology with capabilities our own [Arya Voronova] has summarized nicely. In particular, PocketPD makes use of the Programmable Power Supply (PPS) functionality to precisely set and control voltage and current. Doing this does require a compatible USB-C charger or power bank, but that’s not too big of an ask these days.

Even if an attached charger doesn’t support PPS, PocketPD can still be useful. The device interrogates the attached charger on every bootup, and displays available options. By default PocketPD selects the first available 5 V output mode with chargers that don’t support PPS.
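For a sense of what the numbers mean, PPS setpoints are expressed in fixed increments defined by the USB PD spec: requested voltage in 20 mV steps and current limit in 50 mA steps, within whatever range the charger’s PPS profile advertises. Here’s a minimal Python sketch of that bookkeeping; it’s our own illustration, not PocketPD’s firmware, and the example 3.3 to 21 V profile is just an assumption.

```python
# Minimal sketch (not PocketPD's firmware) of how a requested setpoint maps
# onto USB PD PPS units, with a fixed-5V fallback when no PPS profile fits.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PpsApdo:
    """A PPS profile as advertised by the charger (values in volts/amps)."""
    min_v: float
    max_v: float
    max_a: float

def pps_request_units(volts: float, amps: float, apdo: PpsApdo) -> Optional[tuple]:
    """Quantize a setpoint to PD PPS units: 20 mV voltage steps, 50 mA current steps.
    Returns None if the setpoint is outside what the charger advertises."""
    if not (apdo.min_v <= volts <= apdo.max_v) or amps > apdo.max_a:
        return None
    return round(volts / 0.020), round(amps / 0.050)

def choose_mode(apdos: list, volts: float, amps: float):
    """Prefer a PPS profile that covers the setpoint; otherwise fall back to 5 V fixed."""
    for apdo in apdos:
        units = pps_request_units(volts, amps, apdo)
        if units is not None:
            return ("PPS", units)
    return ("FIXED_5V", None)

# Example: a 3.3 V / 1.0 A request against an assumed 3.3-21 V, 5 A PPS profile.
print(choose_mode([PpsApdo(3.3, 21.0, 5.0)], 3.3, 1.0))  # ('PPS', (165, 20))
```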

The latest hardware version is still in development, and the GitHub repository has all the firmware, which is aimed at being easy to modify or customize. Interested in some hardware? There’s a pre-launch crowdfunding campaign you can watch.

AI Face Anonymizer Masks Human Identity In Images

We’re all pretty familiar with AI’s ability to create realistic-looking images of people that don’t exist, but here’s an unusual use of that technology for a different purpose: masking people’s identity without altering the substance of the image itself. The result is that the content and “purpose” (for lack of a better term) of the image remain unchanged, while it becomes impossible to identify the actual person in it. This invites some interesting privacy-related applications.

Originals on left, anonymized versions on the right. The substance of the images has not changed.

The paper for Face Anonymization Made Simple has all the details, but the method boils down to using diffusion models to take an input image, automatically pick out identity-related features, and alter them in a way that looks more or less natural. For this purpose, “identity-related features” essentially means the key parts of a human face. Other elements of the photo (background, expression, pose, clothing) are left unchanged. As a concept it’s been explored before, but the researchers show that this versatile method is both simpler and better-performing than others.
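The paper’s actual method is far more refined than this, but as a rough illustration of the general idea, the hedged sketch below masks a face region and lets an off-the-shelf diffusion inpainting model (via the diffusers library) redraw only that region, leaving everything outside the mask untouched. The checkpoint name and face bounding box are placeholder assumptions.

```python
# Rough sketch of the general idea using off-the-shelf inpainting; NOT the
# paper's method, which edits identity features much more surgically.
import torch
from PIL import Image, ImageDraw
from diffusers import StableDiffusionInpaintPipeline

image = Image.open("person.jpg").convert("RGB").resize((512, 512))

# Assume a face detector gave us this bounding box (placeholder values).
face_box = (180, 120, 340, 300)  # (left, top, right, bottom)

# Build a mask: white where the model may repaint (the face), black elsewhere.
mask = Image.new("L", image.size, 0)
ImageDraw.Draw(mask).rectangle(face_box, fill=255)

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # example checkpoint; any inpainting model works
    torch_dtype=torch.float16,
).to("cuda")                                 # assumes a GPU is available

# The prompt keeps the repainted region plausible; pose, background, and
# clothing outside the mask are left exactly as they were.
result = pipe(
    prompt="a photo of a person's face, natural lighting",
    image=image,
    mask_image=mask,
).images[0]
result.save("anonymized.jpg")
```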

Diffusion models are the essence of AI image generators like Stable Diffusion. The fact that they can be run locally on personal hardware has opened the doors to all kinds of interesting experimentation, like this haunted mirror and other interactive experiments. Forget tweaking dull sliders like “brightness” and “contrast” for an image. How about altering the level of “moss”, “fire”, or “cookie” instead?

The Constant Monitoring And Work That Goes Into JWST’s Optics

The James Webb Space Telescope’s array of eighteen hexagonal mirrors went through an intricate (and lengthy) alignment and calibration process before it could begin its mission — but that process is far from a one-and-done affair. Keeping the telescope aligned and performing optimally requires constant work from a team dedicated to that very purpose.

Alignment of JWST’s optical elements is so fine, and the instrument so sensitive, that even small temperature variations have an effect on results. For about twenty minutes every other day, the monitoring program uses a set of lenses that intentionally de-focus images of stars by a known amount. These distortions contain measurable features that the team uses to build a profile of changes over time. Each of the mirror segments is also checked by being imaged selfie-style every three months.
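Why defocus on purpose? With a star in sharp focus, a small wavefront error barely changes the compact spot of light, but spread that spot out by a known amount and the same error shows up as clearly measurable structure. The toy Fourier-optics sketch below (our own illustration, not the JWST team’s pipeline) shows the effect with a plain circular pupil and a small astigmatism-like error.

```python
# Toy illustration (not the JWST pipeline): adding a known defocus spreads a
# star's point-spread function so small wavefront errors become measurable.
import numpy as np

N = 512
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2] / (N / 4)
r2 = x**2 + y**2
pupil = (r2 <= 1.0).astype(float)             # simple circular aperture

def psf(wavefront_error, defocus_waves=0.0):
    """PSF = |FFT of the complex pupil field|^2, with optional defocus (in waves)."""
    defocus = defocus_waves * (2 * r2 - 1)    # Zernike-style defocus term
    phase = 2 * np.pi * (wavefront_error + defocus)
    field = pupil * np.exp(1j * phase)
    img = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return img / img.sum()

def rel_change(aberrated, perfect):
    """How different the aberrated image is from the perfect one, as a fraction."""
    return np.abs(aberrated - perfect).sum() / perfect.sum()

error = 0.05 * (x**2 - y**2) * pupil          # small astigmatism-like error (waves)
print("in focus: ", rel_change(psf(error), psf(error * 0)))
print("defocused:", rel_change(psf(error, 4.0), psf(error * 0, 4.0)))
# For a small, even aberration like this, the in-focus change is second-order
# in the error while the defocused change is first-order, so the second
# number comes out noticeably larger.
```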

This work and maintenance plan pays off. The team has made over 25 corrections since the mission began, and JWST’s optics continue to exceed specifications. The improved performance translates directly into better data from faint celestial objects.

JWST was fantastically ambitious and is extremely successful, and as a science instrument it is jam-packed with amazing bits, not least of which are the actuators responsible for adjusting the mirrors.

Here’s Code For That AI-Generated Minecraft Clone

A little while ago, Oasis was showcased on social media, billing itself as the world’s first playable “AI video game” that responds to complex user input in real time. Code is available on GitHub for a down-scaled local version if you’d like to take a look. There’s a bit more detail and background in the accompanying project write-up, which talks about both the potential and the numerous limitations.

We suspect the focus on supporting complex user input (such as mouse look and an item inventory) is what the creators feel distinguishes it meaningfully from AI-generated DOOM. The latter was a concept that demonstrated AI image generators could (kinda) function as real-time game engines.

Image generators are, in a sense, prediction machines. The idea is that by providing a trained model with a short history of what just happened plus the user’s input as context, it can generate a pretty usable prediction of what should happen next, and do it quickly enough to be interactive. Run that in a loop, and you get some impressive clips to put on social media.
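In loop form, the idea looks roughly like the sketch below; `WorldModel` is a placeholder standing in for the actual trained model, not anything from the Oasis codebase.

```python
# Hand-wavy sketch of the generate-a-frame-per-tick idea; WorldModel is a
# placeholder, not Oasis's actual model or API.
from collections import deque

CONTEXT_FRAMES = 16                  # how much recent history the model sees

class WorldModel:
    """Placeholder: in reality this is a large trained image-generation model."""
    def predict_next_frame(self, recent_frames, user_input):
        raise NotImplementedError

def game_loop(model, first_frame, read_input, display):
    history = deque([first_frame], maxlen=CONTEXT_FRAMES)
    while True:
        action = read_input()                         # keyboard/mouse state this tick
        frame = model.predict_next_frame(list(history), action)
        display(frame)                                # must be fast enough to feel interactive
        history.append(frame)                         # the model's own output becomes context,
                                                      # which is also why errors compound
```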

It is a neat idea, and we certainly applaud the creativity of bending an image generator to this kind of application, but we can’t help but notice the limitations. Sit and stare at something, or walk through dark or repetitive areas, and the system loses its grip and things rapidly go into a downward spiral we can only describe as “dreamily broken”.

It may be more a demonstration of a concept than a properly functioning game, but it’s still a very clever way to leverage image generation technology. Although, if you’d prefer AI to keep the game itself untouched, take a look at neural networks trained to use the DOOM level creator tools.

Nix + Automated Fuzz Testing Finds Bug In PDF Parser

[Michael Lynch]’s adventures in configuring Nix to automate fuzz testing are a lot of things rolled into one: a primer on fuzz testing (a method of finding bugs), a how-to on automating the setup with Nix (which is a lot of things, including a kind of package manager), and useful info on effectively automating software processes.

[Michael] not only walks through how he got it all up and running in a simplified and usefully portable way, but he actually found a buffer overflow in pdftotext in the process! (It turns out someone else had reported the same bug a few weeks before he found it, but the find demonstrates the whole approach regardless.)

[Michael] chose fuzz testing because, while using it to find security vulnerabilities is conceptually simple, actually doing it tends to require setting up a test environment with a complex workflow and a lot of dependencies. The result has a high degree of task specificity, and isn’t very portable or reusable. Nix allowed him to really simplify the process while also making it more adaptable. Be sure to check out part two, which goes into detail about how exactly one goes from discovering an input that crashes a program to tracking down (and patching) the reason it happened.
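To see why the idea itself is simple even when the tooling isn’t, here’s a toy mutational fuzzer in Python; it’s nothing like a real coverage-guided setup such as the one [Michael] builds, just an illustration of the concept: corrupt a seed PDF a little, run pdftotext on it, and save any input that makes the process die from a signal (the kind of crash a buffer overflow can cause).

```python
# Toy "dumb" fuzzer for illustration only; real fuzz testing setups like
# [Michael]'s are far more systematic and effective.
import random
import subprocess
from pathlib import Path

seed = Path("seed.pdf").read_bytes()        # any small valid PDF as a starting point

for i in range(10_000):
    data = bytearray(seed)
    for _ in range(random.randint(1, 8)):   # flip a handful of random bytes
        data[random.randrange(len(data))] = random.randrange(256)
    Path("mutated.pdf").write_bytes(data)

    try:
        proc = subprocess.run(
            ["pdftotext", "mutated.pdf", "/dev/null"],
            capture_output=True, timeout=5,
        )
    except subprocess.TimeoutExpired:
        continue                            # hangs are interesting too, but skip them here
    if proc.returncode < 0:                 # killed by a signal, e.g. SIGSEGV
        Path(f"crash_{i}.pdf").write_bytes(data)
        print(f"crash! signal {-proc.returncode}, input saved as crash_{i}.pdf")
```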

Making fuzz testing easier (and in a sense, cheaper) is something people have been interested in for a long time, even going so far as to see whether pressing a stack of single-board computers into service as dedicated fuzz testers made economic sense.

Fuzzy Skin Finish For 3D Prints, Now On Top Layers

[TenTech]’s Fuzzyficator brings fuzzy skin — a textured finish normally limited to sides of 3D prints — to the top layer with the help of some non-planar printing, no hardware modifications required. You can watch it in action in the video below, which also includes details on how to integrate this functionality into your favorite slicer software.

Little z-axis hops while laying down the top layer create a fuzzy skin texture.

Fuzzyficator essentially works by moving the print nozzle up and down while laying down a top layer, resulting in a textured finish that does a decent job of matching the fuzzy skin texture one can put on the sides of a print. Instead of making small lateral movements while printing outside perimeters, the nozzle does little z-axis hops while printing the top.

Handily, Fuzzyficator runs as a post-processing script called by the slicer (at this writing, PrusaSlicer, Orca Slicer, and Bambu Studio are tested), and it conveniently reads the slicer’s current fuzzy skin settings in order to match them.
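The general shape of such a post-processing script is simple enough, as in this simplified sketch; it is not the actual Fuzzyficator, and it assumes the `;TYPE:Top solid infill` comment markers that PrusaSlicer-family slicers use to label top-surface extrusions.

```python
#!/usr/bin/env python3
# Simplified sketch of a slicer post-processing script (not the real
# Fuzzyficator): add small random Z hops to extrusion moves on top surfaces.
import random
import re
import sys

FUZZ_MM = 0.12        # maximum upward Z offset, an assumed value
path = sys.argv[1]    # slicers pass the G-code file path as the first argument

current_z = 0.0
in_top = False
out = []
with open(path) as f:
    for line in f:
        if line.startswith(";TYPE:"):
            in_top = "Top solid infill" in line     # section markers from the slicer
        zmatch = re.search(r"\bZ([-\d.]+)", line)
        if zmatch and line.startswith(("G0", "G1")):
            current_z = float(zmatch.group(1))      # remember the current layer height
        if in_top and line.startswith("G1") and " E" in line and "Z" not in line:
            # Extrusion move on a top surface: hop the nozzle by a random amount.
            code, _, comment = line.partition(";")
            new_z = current_z + random.uniform(0.0, FUZZ_MM)
            line = code.rstrip() + f" Z{new_z:.3f}" + (" ;" + comment if comment else "\n")
        out.append(line)

with open(path, "w") as f:
    f.writelines(out)
```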

Non-planar 3D printing opens new doors, but we haven’t seen it used quite like this before. There are a variety of ways to experiment with non-planar printing for those who like to tinker with their printers. But there’s work to be done that doesn’t involve hardware, too: non-planar printing also requires new ways of thinking about slicing.


Hear A Vintage Sound Chip Mimic The Real World

Sound chips from back in the day were capable of much more than a few beeps and boops, and [InazumaDenki] proves it in a video recreating recognizable real-world sounds with the AY-3-8910, a chip that was in everything from arcade games to home computers. Results are a bit mixed, but it’s surprising what a vintage sound chip that first came out in the late 70s is capable of with the right configuration.

Recreating a sound begins with analyzing a spectrogram.

Chips like the AY-3-8910 work at a low level, and rely on being driven with the right inputs to generate something useful. The chip can generate up to three independent square-wave tones, but with the right approach and setup that’s enough to get outputs of varying recognizability for a pedestrian signal, bird call, jackhammer, and referee’s whistle.

To recreate a sound, [InazumaDenki] begins by analyzing a recording with a spectrogram, which is a visual representation of how its frequency content changes over time. Because real-world sounds consist of more than just one frequency (and the AY-3-8910 can only do three at once), the spectrogram is how [InazumaDenki] chooses which frequencies to play, and when. The limitations make it an imperfect reproduction, but as you can hear for yourself, it can certainly be enough to do the job.
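Once frequencies are picked off the spectrogram, mapping them onto the chip is simple arithmetic: the AY-3-8910 divides its master clock by 16 times a 12-bit tone period per channel, so the period value for a pitch is clock / (16 × frequency), split into a fine (low 8 bits) and coarse (high 4 bits) register. A quick sketch of that conversion, assuming a 2 MHz master clock:

```python
# Map target frequencies onto AY-3-8910 tone period register values.
# Assumes a 2 MHz master clock; the chip divides it by 16 * TP per channel.
CLOCK_HZ = 2_000_000

def tone_registers(freq_hz: float) -> tuple:
    """Return (fine, coarse) tone period register values for one channel."""
    period = round(CLOCK_HZ / (16 * freq_hz))
    period = max(1, min(period, 0x0FFF))      # tone period is a 12-bit value
    return period & 0xFF, (period >> 8) & 0x0F

# Example: ~440 Hz and ~880 Hz components picked off a spectrogram.
for f in (440.0, 880.0):
    fine, coarse = tone_registers(f)
    actual = CLOCK_HZ / (16 * ((coarse << 8) | fine))
    print(f"{f:6.1f} Hz -> fine={fine:3d} coarse={coarse} (actual {actual:.1f} Hz)")
```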

How does one go about actually programming the AY-3-8910? Happily, there’s a handy Arduino AY3891x library by [Andreas Taylor] that makes it about as simple as can be to explore this part’s capabilities for yourself.

If you think retro-styled sound synthesis might fit into your next project, keep in mind that just about any modern microcontroller has more than enough capability to do things like 80s-style speech synthesis entirely in software.
