A cuboctahedron (a kind of polyhedron) made out of LED filaments is being held above a man's hand in front of a computer screen.

The Graph Theory Of Circuit Sculptures

Like many of us, [Tim]’s seen online videos of circuit sculptures containing illuminated LED filaments. Unlike most of us, however, he went a step further by using graph theory to design glowing structures made entirely of filaments.

The problem isn’t as straightforward as it might first appear: all the segments need to be illuminated, there should be as few powered junctions as possible, and to allow a single power supply voltage, all paths between powered junctions should have the same length. Ideally, all filaments would carry the same amount of current, but even if they don’t, the difference in brightness isn’t always noticeable. [Tim] found three ways to power these structures: direct current between fixed points, current supplied between alternating points so as to take different paths through the structure, and alternating current supplied between two fixed points (essentially, a glowing full-bridge rectifier).
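The equal-path-length requirement is what makes a single supply voltage possible: every supply-to-supply path is the same number of filaments in series, so every path sees the same total forward drop. A rough sizing sketch (the per-filament voltage here is our assumption, not a figure from [Tim]'s write-up):

```python
# If every path between the two powered junctions passes through the
# same number of filaments in series, one supply voltage lights them
# all evenly.
FILAMENT_VF = 3.0   # assumed forward drop per LED filament, in volts
PATH_LENGTH = 4     # filaments in series on every supply-to-supply path

supply_voltage = PATH_LENGTH * FILAMENT_VF
print(f"Supply needed: {supply_voltage:.1f} V")  # Supply needed: 12.0 V
```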

To find workable structures, [Tim] represented circuits as directed graphs, with each junction being a vertex and each filament a directed edge, then developed filter criteria to find graphs corresponding to working circuits. In the case of power supplied from fixed points, the problem turned out to be equivalent to the edge-geodesic cover problem. Graphs that solve this problem are bipartite, which provided an effective filter criterion. The solutions this method found often had uneven brightness, so he also screened for circuits that could be decomposed into a set of paths that visit each edge exactly once – ensuring that each filament would receive the same current. He also found a set of conditions to identify circuits using rectifier-type alternating current driving, which you can see on the webpage he created to visualize the different possible structures.
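The bipartiteness filter mentioned above is cheap to apply in code: try to two-color the graph and reject it if any edge joins two vertices of the same color. A minimal sketch (graph representation and example shapes are our own, not from [Tim]'s tooling):

```python
from collections import deque

def is_bipartite(adj):
    """Two-color an undirected graph via BFS; return True if no
    edge connects two vertices of the same color."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # opposite color
                    queue.append(v)
                elif color[v] == color[u]:
                    return False              # odd cycle: not bipartite
    return True

# A square of filaments (4-cycle) is bipartite; a triangle is not.
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(is_bipartite(square))    # True
print(is_bipartite(triangle))  # False
```

Used as a pre-filter, this discards candidate structures before the more expensive path-decomposition checks run.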

We’ve seen some artistic illuminated circuit art before, some using LED filaments. This project doesn’t take exactly the same approach, but if you’re interested in more about graph theory and route planning, check out this article.

After 30 Years, Virtual Boy Gets Its Chance To Shine

When looking back on classic gaming, there’s plenty of room for debate. What was the best Atari game? Which was the superior 16-bit console, the Genesis or the Super NES? Would the N64 have been more commercially successful if it had used CDs over cartridges? It goes on and on. Many of these questions are subjective, and have no definitive answer.

But even with so many opinions swirling around, there’s at least one point that anyone with even a passing knowledge of gaming history will agree with — the Virtual Boy is unquestionably the worst gaming system Nintendo ever produced. Which is what makes its return in 2026 all the more unexpected.

Released in Japan and North America in 1995, the Virtual Boy was touted as a revolution in gaming. It was the first mainstream consumer device capable of showing stereoscopic 3D imagery, powered by a 20 MHz 32-bit RISC CPU and a custom graphics processor developed by Nintendo to meet the unique challenges of rendering gameplay from two different perspectives simultaneously.

In many ways it’s the forebear of modern virtual reality (VR) headsets, but its high cost, small library of games, and the technical limitations of its unique display technology ultimately led to it being pulled from shelves after less than a year on the market.

Now, 30 years after its disappointing debut, this groundbreaking system is getting a second chance. Later this month, Nintendo will be releasing a replica of the Virtual Boy into which players can insert their Switch or Switch 2 console. The device essentially works like Google Cardboard, and with the release of an official emulator, users will be able to play Virtual Boy games complete with the 3D effect the system was known for.

This is an exciting opportunity for those with an interest in classic gaming, as the relative rarity of the Virtual Boy has made it difficult to experience these games in the way they were meant to be played. It’s also reviving interest in this unique piece of hardware, and although we can’t turn back the clock on the financial failure of the Virtual Boy, perhaps a new generation can at least appreciate the engineering that made it possible.

Continue reading “After 30 Years, Virtual Boy Gets Its Chance To Shine”

LED Interior Lighting Could Compromise Human Visual Performance

LED lighting is now commonplace across homes, businesses, and industrial settings. It uses little energy and provides a great deal of light. However, new research suggests it may come with a trade-off: human vision may not perform at its peak under this particular form of illumination.

The study ran with a small number of subjects (n=22), aged 23 to 65 years, who were screened for normal visual function and good health beforehand. Participants worked exclusively under LED lighting, with a select group then later also given supplemental incandescent light (with all its attendant extra wavelengths) in their working area—which appears to have been a typical workshop environment.

Incandescent bulbs have a much broader spectrum of output than even the best LEDs. Credit: Research paper

Notably, once incandescent lighting was introduced, those experimental subjects showed significant increases in visual performance using ChromaTest color contrast testing. This was noted across both tritan (blue) and protan (red) axes of the test, which involves picking out characters against a noisy background. Interestingly, the positive effect of the incandescent lighting did not immediately diminish when those individuals returned to using purely LED lighting once again. At tests 4 and 6 weeks after the incandescent lighting was removed, the individuals continued to score higher on the color contrast tests. Similar long-lasting effects have been noted in other studies involving supplementing LED lights with infrared wavelengths, though in those cases the boost lasted only around 5 days.

The exact mechanism at play here is unknown. The study authors speculate as to a range of complex physical and biological mechanisms that could be at play, but more research will be needed to tease out exactly what’s going on. In any case, it suggests there may be a very real positive effect on vision from the wider range of wavelengths provided by good old incandescent bulbs. As an aside, if you’ve figured out how to get 40/40 vision with a few cheap WS2812Bs, don’t hesitate to notify the tip line.

Thanks to [Keith Olson] for the tip!

How Resident Evil 2 For The N64 Kept Its FMV Cutscenes

Originally released for the Sony PlayStation in 1998, Resident Evil 2 came on two CDs and used 1.2 GB in total. Of this, full-motion video (FMV) cutscenes took up most of the space, as was rather common for PlayStation games. This posed a bit of a challenge when ported to the Nintendo 64 with its paltry 64 MB of cartridge-based storage. Somehow the developers managed to do the impossible and retain the FMVs, as detailed in a recent video by [LorD of Nerds]. Toggle the English subtitles if German isn’t among your installed natural language parsers.

Instead of dropping the FMVs and replacing them with static screens, the developers opted for a technological solution. Thanks to the N64’s rather beefy hardware, it was possible to apply video compression that massively reduced the storage requirements, but this required repurposing the hardware for tasks it was never designed for.

The people behind this feat were developers at Angel Studios, who had 12 months to make it work. Ultimately they achieved a compression ratio of 165:1, with decompression handled in software and the Reality Signal Processor (RSP), normally part of the graphics pipeline, pressed into service for both audio tasks and things like upscaling.
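Back-of-envelope, the quoted figures make the feat plausible (the FMV share of the discs is our rough assumption, not a number from the video):

```python
# The PlayStation release used ~1.2 GB across two CDs, most of it FMV;
# the N64 cartridge held only 64 MB. At the stated 165:1 ratio, even
# a generous ~1000 MB of video shrinks to a few megabytes.
fmv_original_mb = 1000   # assumed rough FMV share of the 1.2 GB
ratio = 165              # compression ratio achieved by the port

compressed_mb = fmv_original_mb / ratio
print(f"FMV after compression: ~{compressed_mb:.1f} MB")  # ~6.1 MB
```

That leaves the bulk of the 64 MB cartridge free for the game itself, which is why keeping the cutscenes was feasible at all.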

Continue reading “How Resident Evil 2 For The N64 Kept Its FMV Cutscenes”


[Yang-Hui He] Presents To The Royal Institution About AI And Mathematics

Over on YouTube you can see [Yang-Hui He] present to The Royal Institution about Mathematics: The rise of the machines.

In this one hour presentation [Yang-Hui He] explains how AI is driving progress in pure mathematics. He says that right now AI is poised to change the very nature of how mathematics is done. He is part of a community of hundreds of mathematicians pursuing the use of AI for research purposes.

[Yang-Hui He] traces the genesis of the term “artificial intelligence” to a research proposal from J. McCarthy, M.L. Minsky, N. Rochester, and C.E. Shannon dated August 31, 1955. He says that his mantra has become: connectivism leads to emergence, and goes on to explain what he means by that, then follows with universal approximation theorems.
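The universal approximation theorems he invokes can be stated roughly as follows (this is the classic single-hidden-layer form, our paraphrase, not a quote from the talk):

```latex
% For any continuous f on a compact set K \subset \mathbb{R}^d and any
% tolerance \varepsilon, a single hidden layer with enough units suffices:
\forall f \in C(K),\ \forall \varepsilon > 0,\
\exists N,\ \alpha_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^d :
\quad
\sup_{x \in K} \Bigl|\, f(x) - \sum_{i=1}^{N} \alpha_i \,
\sigma\!\left(w_i^{\top} x + b_i\right) \Bigr| < \varepsilon
```

Here \(\sigma\) is a fixed non-polynomial activation (e.g. a sigmoid); the theorem guarantees expressiveness, but says nothing about how to find the weights, which is where the "emergence" part of his mantra comes in.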

He goes on to enumerate some of the key moments in AI: Descartes’s bête-machine, 1637; Lovelace’s speculation, 1842; Turing test, 1950; Dartmouth conference, 1956; Rosenblatt’s Perceptron, 1957; Hopfield’s network, 1982; Hinton’s Boltzmann machine, 1984; IBM’s Deep Blue, 1997; and DeepMind’s AlphaGo, 2016.

He continues with some navel-gazing about what is mathematics, and what is artificial intelligence. He considers how we do mathematics as bottom-up, top-down, or meta-mathematics. He mentions one of his earliest papers on the subject, Machine-learning the string landscape (PDF), and his books The Calabi–Yau Landscape: From Geometry, to Physics, to Machine Learning and Machine Learning in Pure Mathematics and Theoretical Physics.

He goes on to explain Mathlib and the Xena Project. He discusses Machine-Assisted Proof by Terence Tao (PDF) and goes on to talk more about the history of mathematics, particularly experimental mathematics. All in all a very interesting talk, if you can find a spare hour!

In conclusion: Has AI solved any major open conjecture? No. Is AI beginning to help to advance mathematical discovery? Yes. Has AI changed the speaker’s day-to-day research routine? Yes and no.

If you’re interested in more fun math articles be sure to check out Digital Paint Mixing Has Been Greatly Improved With 1930s Math and Painted Over But Not Forgotten: Restoring Lost Paintings With Radiation And Mathematics.

Continue reading “[Yang-Hui He] Presents To The Royal Institution About AI And Mathematics”

KDE Binds Itself Tightly To Systemd, Drops Support For Non-Systemd Systems

The KDE desktop’s new login manager (PLM) in the upcoming Plasma 6.6 will mark the first time that KDE requires the underlying OS to use systemd, if one wishes for the full KDE experience. This has particularly upset the FreeBSD community, but will also affect Linux distros that do not use systemd. The focus of the KDE team is clear, as stated in the referenced Reddit thread, where a KDE developer replies that the goal is to rely on systemd for more tasks in the future. This means that PLM is just the first step.

In the eyes of KDE it seems that OSes that do not use systemd are ‘niche’ and not worth supporting, with the Linux distros that would be cut out including everything from Gentoo to Alpine Linux and Slackware. Regardless of your stance on systemd’s merits or lack thereof, it would seem quite drastic for one of the major desktop environments across Linux and BSD to suddenly make this decision.

It also raises the question of to what extent this is related to the push towards a distroless and similarly more integrated, singular version of Linux as an operating system. Although there are still many other DEs that will happily run for the foreseeable future on your flavor of GNU/Linux or BSD – regardless of whether you’re more about a System V or OpenRC init-style environment – this might be one of the most controversial divides since systemd was first introduced.

Top image: KDE Plasma 6.4.5. (Credit: Michio.kawaii, Wikimedia)

Print-in-Place Gripper Does It With A Single Motor

[XYZAiden]’s concept for a flexible robotic gripper might be a few years old, but if anything it’s even more accessible now than when he first prototyped it. It uses only a single motor and requires no complex mechanical assembly, and nowadays 3D printing with flexible filament has only gotten easier and more reliable.

The four-armed gripper you see here prints as a single piece, and is cable-driven with a single metal-geared servo powering the assembly. Each arm has a nylon string threaded through it, so when the servo turns, it pulls each string, which in turn makes each arm curl inward, closing the grip. Because of the way the gripper is made, releasing only requires relaxing the cables; an arm’s natural state is to fall open.
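Since a single servo angle maps directly to how far the arms curl, driving the gripper mostly comes down to converting a desired grip fraction into a servo pulse. A minimal sketch, assuming a standard 50 Hz hobby servo with 1–2 ms pulses (names and values are ours, not from [XYZAiden]'s design):

```python
# Map a grip fraction (0 = fully open, 1 = fully closed) onto a
# standard hobby-servo pulse width: 1.0 ms to 2.0 ms within a
# 20 ms (50 Hz) PWM frame.
FRAME_MS = 20.0
MIN_PULSE_MS, MAX_PULSE_MS = 1.0, 2.0

def grip_to_duty(grip: float) -> float:
    """Return the PWM duty cycle (%) for a grip fraction in [0, 1]."""
    grip = min(max(grip, 0.0), 1.0)  # clamp out-of-range requests
    pulse = MIN_PULSE_MS + grip * (MAX_PULSE_MS - MIN_PULSE_MS)
    return 100.0 * pulse / FRAME_MS

print(grip_to_duty(0.0))  # 5.0  -> fully open
print(grip_to_duty(1.0))  # 10.0 -> fully closed
```

On a Raspberry Pi or microcontroller, the returned percentage would feed whatever PWM API is at hand; the interesting part of the design is mechanical, so the software really is this simple.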

The main downside is that the servo and cables are working at a mechanical disadvantage, so the grip won’t be particularly strong. But for lightweight, irregular objects, this could be a feature rather than a bug.

The biggest advantage is that it’s extremely low-cost, and simple to both build and use. If one has access to a 3D printer and can make a servo rotate, raiding a junk bin could probably yield everything else.

DIY robotic gripper designs come in all sorts of variations. For example, this “jamming” bean-bag style gripper does an amazing, high-strength job of latching onto irregular objects without squashing them in the process. And here’s one built around grippy measuring tape, capable of surprising dexterity.

Continue reading “Print-in-Place Gripper Does It With A Single Motor”