A man is looking at a volumetric display while using one finger to interact with it. Two roughly-spherical blue shapes are visible in the display, and he is moving his index finger toward one of them.

Elastic Bands Enable Touchable Volumetric Display

Amazing as volumetric displays are, they have one major drawback: interacting with them is complicated. A 3D mouse is nice, but unless you’ve done a lot of CAD work, it’s a bit unintuitive. Researchers from the Public University of Navarra, however, have developed a touchable volumetric display, bringing touchscreen-like interactions to the third dimension (preprint paper).

At the core, this is a swept-volume volumetric display: a light-diffusing screen oscillates along one axis, while from below a projector displays cross-sections of the scene in synchrony with the position of the screen. These researchers replaced the normal screen with six strips of elastic material. The finger of someone touching the display deforms one or more of the strips, allowing the touch to be detected, while also not damaging the display.
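To get a feel for how the sweep works in software, here’s a minimal sketch (in TypeScript) of picking which cross-section to project for a given instant of the screen’s oscillation. The sweep frequency, slice count, and sinusoidal motion are illustrative assumptions, not figures from the paper.

```typescript
// Minimal sketch of swept-volume timing: given the screen's oscillation,
// choose which cross-sectional slice the projector should show right now.
// Frequency, slice count, and sinusoidal motion are assumptions for
// illustration, not values from the paper.

const SWEEP_HZ = 15;        // full up-down cycles of the diffuser per second (assumed)
const NUM_SLICES = 100;     // cross-sections stored per sweep (assumed)
const SWEEP_HEIGHT_MM = 80; // travel of the elastic-band screen (assumed)

// Screen height at time t, for a sinusoidal sweep centered on the midpoint.
function screenHeight(tSeconds: number): number {
  const phase = 2 * Math.PI * SWEEP_HZ * tSeconds;
  return (SWEEP_HEIGHT_MM / 2) * (1 + Math.sin(phase)); // 0 .. SWEEP_HEIGHT_MM
}

// Which pre-rendered cross-section should be on the projector at time t.
function sliceIndex(tSeconds: number): number {
  const z = screenHeight(tSeconds);
  const idx = Math.round((z / SWEEP_HEIGHT_MM) * (NUM_SLICES - 1));
  return Math.min(NUM_SLICES - 1, Math.max(0, idx));
}

// At ~3,000 projected frames per second, each frame just looks up its slice:
// projector.show(slices[sliceIndex(now())]);
```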

The actual hardware is surprisingly hacker-friendly: for the screen material, the researchers settled on elastic bands intended for clothing, and two modified subwoofers drove the screen’s oscillation. Indeed, some aspects of the design actually cite this Hackaday article (and while the citation misattributes the design, we’re glad to see a hacker inspiring professional research). The most exotic component is a very high-speed projector, on the order of 3,000 fps, but the previously-cited project deals with this by hacking a DLP projector, as does another project (also cited in this paper as source 24) which we’ve covered.

While interacting with the display does introduce some optical distortions, we think the video below speaks for itself. If you’re interested in other volumetric displays, check out this project, which displays images with a levitating styrofoam bead.

Continue reading “Elastic Bands Enable Touchable Volumetric Display”

DIY 3D Hand Controller Using A Webcam And Scripting

Are you ready to elevate your interactive possibilities without breaking the bank? If so, explore [Caio Bassetti]’s tutorial on creating a full 3D hand controller using only a webcam, MediaPipe Hands, and Three.js. This hack lets you transform a 2D screen into a fully interactive 3D scene—all with your hand movements. If you’re passionate about low-cost, accessible tech, try this yourself – not much else is needed but a webcam and a browser!

The magic of the project lies in using MediaPipe Hands to track key points on your hand, such as the middle finger and wrist, and using them to estimate depth and position. With some clever Three.js tricks, those points then drive elements along all three axes. The setup creates a responsive virtual controller, interpreting hand gestures for intuitive movement in 3D space. The hack also implements a closed-fist gesture for grabbing and dragging objects, and detects collisions to add interactivity. It’s a simple, practical build, and it performs reliably in most browsers.
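As a rough idea of how such a controller fits together, here’s a minimal sketch using the @mediapipe/hands and three packages. The landmark indices are from the MediaPipe Hands model, but the depth mapping and fist heuristic are our own illustrative choices, not necessarily [Caio]’s exact logic.

```typescript
// Minimal sketch: drive a Three.js object with MediaPipe Hands landmarks.
// The depth mapping and fist heuristic are illustrative assumptions,
// not the exact logic from the original tutorial.
import * as THREE from 'three';
import { Hands, Results } from '@mediapipe/hands';
import { Camera } from '@mediapipe/camera_utils';

const scene = new THREE.Scene();
const camera3d = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
camera3d.position.z = 3;
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// The "cursor" the hand controls.
const cursor = new THREE.Mesh(
  new THREE.SphereGeometry(0.1, 32, 16),
  new THREE.MeshNormalMaterial()
);
scene.add(cursor);

// Landmark indices from the MediaPipe Hands model: 0 = wrist, 9 = middle-finger knuckle.
const WRIST = 0;
const MIDDLE_MCP = 9;

function onResults(results: Results) {
  const hand = results.multiHandLandmarks?.[0];
  if (!hand) return;

  const wrist = hand[WRIST];
  const mcp = hand[MIDDLE_MCP];

  // Map normalized image coords (0..1) to scene coords; mirror x for a selfie view.
  cursor.position.x = (0.5 - mcp.x) * 4;
  cursor.position.y = (0.5 - mcp.y) * 3;

  // Rough depth cue: the apparent wrist-to-knuckle distance shrinks as the
  // hand moves away from the webcam (illustrative, uncalibrated).
  const apparentSize = Math.hypot(wrist.x - mcp.x, wrist.y - mcp.y);
  cursor.position.z = THREE.MathUtils.clamp(apparentSize * 10 - 2, -2, 2);

  // Crude closed-fist check: are the fingertips close to the wrist?
  const tips = [8, 12, 16, 20].map((i) => hand[i]);
  const curled = tips.every((t) => Math.hypot(t.x - wrist.x, t.y - wrist.y) < 0.25);
  (cursor.material as THREE.MeshNormalMaterial).wireframe = curled; // "grab" feedback
}

const hands = new Hands({
  locateFile: (f) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${f}`,
});
hands.setOptions({ maxNumHands: 1, modelComplexity: 1, minDetectionConfidence: 0.7 });
hands.onResults(onResults);

const video = document.createElement('video');
new Camera(video, {
  onFrame: async () => { await hands.send({ image: video }); },
  width: 640,
  height: 480,
}).start();

renderer.setAnimationLoop(() => renderer.render(scene, camera3d));
```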

For more on this innovation or other exciting DIY hand-tracking projects, browse our archive on gesture control projects, or check out the full article on Codrops. With tools such as MediaPipe and Three.js, turning ideas into reality gets more accessible than ever.

DOOM On A Volumetric Display

There’s something magical about volumetric displays. They really need to be perceived in person, and no amount of static or video photography will ever do them justice. [AncientJames] has built a few, and we’re reporting on his progress, mostly because he got it to run a playable port of DOOM.

Base view of an earlier version showing the motor drive and PSU

As we’ve seen before, DOOM is very much a 3D game viewed on a 2D display, using all manner of clever tricks and optimizations. The backgrounds give a convincing 3D effect, but the game’s sprites are firmly in 2D land. As we’ll see, that wasn’t good enough for [James].

The basic concept relies on a pair of 128 x 64 LED display matrix modules sitting atop a rotating platform. The 3D printed platform holds the displays vertically, with the LEDs lined up with the diameter, meaning the electronics hang off the back, creating some imbalance.

Lead, of the type used for traditional window leading, serves as a counterbalance. A Raspberry Pi 4 with a modified version of this LED driver HAT rotates with the displays. The Pi and both displays are fed power from individual Mini560 buck modules, taking their input from a 12 V, 100 W Mean Well power supply via a car-alternator slip ring setup (part numbers ABH6004S and ASL9009, for those interested). Finally, to synchronise the setup, a simple IR photo interrupter signals the Pi via an interrupt.
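To see how flat LED frames add up to a volume on a spinning panel, here’s a small sketch that maps a 3D point in the display volume onto (slice, column, row) coordinates for the rotating 128 x 64 panel. The slice count and coordinate conventions are illustrative assumptions, not taken from [AncientJames]’s code.

```typescript
// Minimal sketch: map a 3D point in the display volume to (slice, column, row)
// on a rotating 128 x 64 LED panel. The slice count and coordinate conventions
// are illustrative assumptions, not taken from the actual project.

const PANEL_COLS = 128;   // LEDs along the panel's diameter
const PANEL_ROWS = 64;    // LEDs along the rotation axis
const NUM_SLICES = 360;   // angular slices rendered per revolution (assumed)

interface Voxel { slice: number; col: number; row: number; }

// x, y in [-1, 1] across the swept disc; z in [0, 1] along the axis.
function voxelFor(x: number, y: number, z: number): Voxel | null {
  const radius = Math.hypot(x, y);
  if (radius > 1 || z < 0 || z > 1) return null; // outside the swept cylinder

  // The point's angle around the axis decides which rotational slice lights it up.
  let angle = Math.atan2(y, x);               // -PI .. PI
  if (angle < 0) angle += 2 * Math.PI;
  const slice = Math.floor((angle / (2 * Math.PI)) * NUM_SLICES) % NUM_SLICES;

  // Column is the offset from the panel's center along its diameter (near half),
  // row is the height along the rotation axis.
  const col = Math.round((PANEL_COLS - 1) / 2 + radius * ((PANEL_COLS - 1) / 2));
  const row = Math.round(z * (PANEL_ROWS - 1));
  return { slice, col, row };
}

// The IR photo interrupter's sync pulse tells the driver where the panel is in
// its rotation, so each slice can be pushed to the LEDs at the right moment.
```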

Continue reading “DOOM On A Volumetric Display”


Hackaday Links: June 23, 2024

When a ransomware attack targets something like a hospital, it quickly becomes a high-profile event that understandably results in public outrage. Hospitals are supposed to be backstops for society, a place to go when it all goes wrong, and paralyzing their operations for monetary gain by taking over their information systems is just beyond the pale. Tactically, though, it makes sense; their unique position in society seems to make it more likely that they’ll pay up.

Which is why the ongoing cyberattack against car dealerships is a little perplexing — can you think of a less sympathetic victim apart from perhaps the Internal Revenue Service? Then again, we’re not in the ransomware business, so maybe this attack makes good financial sense. And really, judging by the business model of the primary target of these attacks, a company called CDK Global, it was probably a smart move. We had no idea that there was such a thing as a “Dealer Management System” that takes care of everything from financing to service, and that shutting down one company’s system could cripple an entire industry, but there it is.

Continue reading “Hackaday Links: June 23, 2024”

Make 3D Scenes With A Holodeck-Like Voice Interface

The voice interface for the holodeck in Star Trek had users create objects by saying things like “create a table” and “now make it a metal table” and so forth, all with immediate feedback. That kind of interface may have been pure fantasy at the time of airing, but with the advent of AI and LLMs (large language models), a natural language interface along those lines is coming together almost by itself.

A fun demonstration of that is [Dominic Pajak]’s demo project called VoxelAstra. This is a WebXR demo that works both in the Meta Quest 3 VR headset (just go to the demo page in the headset’s web browser) as well as on desktop.

The catch is that since the program uses OpenAI APIs on the back end, one must provide a working OpenAI API key. Otherwise, the demo won’t be able to do anything. Providing one’s API key to someone’s web page isn’t terribly good security practice, but there’s also the option of running the demo locally.

Either way, once the demo is up and running the user simply tells the system what to create. Just keep it simple. It’s a fun and educational demo more than anything and will try to do its work with primitive shapes like spheres, cubes, and cylinders. “Build a snowman” is suggested as a good starting point.
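The general shape of such a pipeline is easy to sketch: ask an LLM for a JSON list of primitives, then instantiate them with Three.js. The prompt, JSON schema, and model name below are our assumptions for illustration, not VoxelAstra’s actual implementation.

```typescript
// Illustrative sketch of a voice-to-scene pipeline: ask an LLM for a JSON list
// of primitives, then build them in Three.js. The prompt, schema, and model
// name are assumptions for this example, not VoxelAstra's actual code.
import * as THREE from 'three';

interface Primitive {
  shape: 'sphere' | 'box' | 'cylinder';
  position: [number, number, number];
  size: number;
  color: string;
}

async function primitivesFor(request: string, apiKey: string): Promise<Primitive[]> {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({
      model: 'gpt-4o-mini', // any chat-completions model would do here
      messages: [
        {
          role: 'system',
          content:
            'Reply only with a JSON array of objects like ' +
            '{"shape":"sphere|box|cylinder","position":[x,y,z],"size":n,"color":"#rrggbb"}.',
        },
        { role: 'user', content: request },
      ],
    }),
  });
  const data = await res.json();
  return JSON.parse(data.choices[0].message.content) as Primitive[];
}

function addToScene(scene: THREE.Scene, prims: Primitive[]): void {
  for (const p of prims) {
    const geometry =
      p.shape === 'sphere' ? new THREE.SphereGeometry(p.size, 32, 16)
      : p.shape === 'box' ? new THREE.BoxGeometry(p.size, p.size, p.size)
      : new THREE.CylinderGeometry(p.size / 2, p.size / 2, p.size);
    const mesh = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial({ color: p.color }));
    mesh.position.set(...p.position);
    scene.add(mesh);
  }
}

// Usage: addToScene(scene, await primitivesFor('build a snowman', OPENAI_API_KEY));
```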

Intrigued by what you see and getting ideas of your own? WebXR can be a great way to give those ideas some life and looking at how someone else did something similar is a fine way to begin. Check out another of [Dominic]’s WebXR projects: a simulated BBC Micro, in VR.

Stepping Inside Art In VR, And The Workflow Behind It

The process of creating something is always chock-full of things to learn, so it’s always a treat when someone takes the time and effort to share it. [Teadrinker] recently published the technique and workflow behind bringing art into VR, explaining exactly how they created Art Plunge (free on Steam), a virtual reality art gallery that lets you step inside paintings.

Extending a painting’s content to fill in the environment is best done by using other works by the same artist.

It walks through not just how to obtain high-resolution images of paintings, but also how to address things like adjusting dynamic range and color grading to better match the intended VR experience. When it comes to aesthetic details like brightness and lighting, there is little that is objectively correct in technical terms, so guidance on what does and doesn’t work well, and how to tailor the presentation to VR, is useful information.

One thing that is also intriguing is the attention paid to creating a sense of awe for viewers. The quality, the presentation, and even the choice of sounds all matter in creating something that not only inspires awe, but does so in a way that preserves and cultivates the relationship between the art and the viewer, while staying true to the original. Giving a viewer a sense of presence, after all, can be more than just presenting stereoscopic 3D images or fancy lightfields.

You can get a brief overview of the process in a video below, but if you have the time, we really do recommend reading the whole breakdown.

Continue reading “Stepping Inside Art In VR, And The Workflow Behind It”

Explore Neural Radiance Fields In Real-time, Even On A Phone

Neural Radiance Fields (NeRF) is a method of reconstructing complex 3D scenes from sparse 2D inputs, and the field has been growing by leaps and bounds. Viewing a reconstructed scene is still nontrivial, but there’s a new innovation on the block: SMERF is a browser-based method of enabling full 3D navigation of even large scenes, efficient enough to render in real time on phones and laptops.

Don’t miss the gallery of demos, which will run on anything from powerful desktops to smartphones. Notable is the distinct lack of the blurry, cloudy, or distorted patches that tend to appear in under-observed areas of a NeRF scene (such as indoor corners and ceilings). The technical paper explains SMERF’s approach in more detail.

NeRFs as a concept first hit the scene in 2020 and the rate of advancement has been simply astounding, especially compared to demos from just last year. Watch the short video summarizing SMERF below, and marvel at how it compares to other methods, some of which are themselves only months old.

Continue reading “Explore Neural Radiance Fields In Real-time, Even On A Phone”