Here’s How That Disney 360° Treadmill Works

One thing going slightly viral lately is footage of Disney’s “HoloTile” infinite floor, an experimental sort of 360° treadmill developed by [Lanny Smoot]. But how exactly does it work? Details have been harder to come by, but [Marques Brownlee] got first-hand experience with HoloTile and has a video all about them.

HoloTile is a walking surface that looks like it’s made up of blueish bumps or knobs of some kind. When one walks upon the surface, it constantly works to move its occupant back to the center.


Each of these bumps is in fact a disk that has the ability to spin one way or the other, and to pivot in different directions. Each disk therefore becomes a sort of tilted wheel whose edge is in contact with whatever is on its surface. By exerting fine control over each of these actuators, the control system can create a conveyor-belt-like effect in any arbitrary direction. This can be leveraged in several different ways, including acting as a sort of infinite virtual floor.
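
How might the control side of that look? Disney hasn’t published details of the HoloTile firmware, so the sketch below is purely illustrative: a minimal Python loop (with made-up gains and units) that points every tile’s conveyance direction back toward the center of the floor and scales speed with how far the occupant has drifted.

```python
import math

# Rough sketch of a centering loop for an omnidirectional tile floor.
# This is NOT Disney's algorithm -- just an illustration of the idea that
# every tile conveys the occupant toward the center, faster the further
# off-center they get.

def tile_commands(occupant_x, occupant_y, max_speed=1.0, gain=2.0):
    """Return a (heading_radians, speed) command shared by all tiles.

    The conveyance direction points from the occupant back toward the
    center of the floor at (0, 0); speed scales with how far off-center
    the occupant has drifted, clamped to the hardware's maximum.
    """
    dist = math.hypot(occupant_x, occupant_y)
    if dist < 1e-6:
        return 0.0, 0.0                             # already centered, hold still
    heading = math.atan2(-occupant_y, -occupant_x)  # point back at the center
    speed = min(max_speed, gain * dist)             # proportional control, clamped
    return heading, speed

# Example: occupant has drifted 0.4 m or so "north-east" of center
print(tile_commands(0.3, 0.3))
```

A real system would presumably track the occupant with sensors and coordinate the spin and tilt of each disk individually, but the proportional “push back toward center” idea captures the behavior seen in the footage.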

[Marques] found the system highly responsive and capable of faster movement than many would find comfortable. When walking on it, there is a feeling of one’s body moving in an unexpected direction, but that’s something he found himself getting used to. He also found that it wasn’t exactly quiet, but we suppose one can’t have everything.

The way this device works has a sort of ruggedly elegant, brute-force vibe to it that we find appealing. It is also quite different in principle from other motorized approaches to simulating the feeling of walking while keeping the user in one place.

The whole video is embedded just below the page break, but if you’d like to jump directly to [Marques] explaining and showing exactly how the device works, you can skip to the 2:22 mark.

Continue reading “Here’s How That Disney 360° Treadmill Works”

A Vernier Take On A 3D Printer Extruder Indicator

To visualize that a 3D printer’s extruder motor — which feeds the filament into the hot end — is moving, a common trick is to attach a small indicator to the exposed end of the motor’s shaft. As the shaft turns, so does the attached indicator.

Small movements of the motor are therefore turned into larger movements of something else. So far, so simple. But what about visualizing very small extrusions, such as those tiny ones made during ironing?

[Jack]’s solution is a Vernier indicator for the extruder. Even the smallest movements of the extruder motor’s shaft are made clearly visible by such a device, as shown in the header image above. Vernier scales are more commonly found on measurement tools, and the concept is somewhat loosely borrowed here.
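
To see why a vernier arrangement helps, consider the least count: the smallest readable increment is one main-scale division divided by the number of vernier marks. The numbers in this little Python sketch are hypothetical, not taken from [Jack]’s design:

```python
import math

# Back-of-the-envelope look at why a vernier scale resolves tiny motions.
# All numbers here are hypothetical, not taken from [Jack]'s actual design.

main_division_deg = 10.0   # spacing of marks on the fixed (main) scale
vernier_divisions = 10     # number of marks on the sliding vernier scale

# A vernier's "least count" -- the smallest increment you can read off --
# is one main-scale division divided by the number of vernier divisions.
least_count_deg = main_division_deg / vernier_divisions
print(f"Smallest readable shaft rotation: {least_count_deg:.1f} degrees")

# For a hypothetical direct-drive extruder with a 7.0 mm effective hobb
# diameter, translate that rotation into extruded filament length.
hobb_diameter_mm = 7.0
mm_per_degree = math.pi * hobb_diameter_mm / 360.0
print(f"About {least_count_deg * mm_per_degree:.3f} mm of filament per readable tick")
```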

The usual way these lightweight indicators are attached is with a small magnet, and you can read all about them and see examples here.

This new design is basically the same; it simply adds a background in a contrasting color into the mix. [Jack]’s design is intended for the Bambu A1 printer, but the idea can be easily adapted. Give it a look if you find yourself yearning for a bit more visibility into your extruder movements.

Making Beer Like It’s 1574, For Science And Heritage

Are you interested in the history of beer or food science, or just a fan of gathering “um, actually” details about things? Well, you’re in for a treat, because FoodCult (a project exploring Food, Culture, and Identity in early modern Ireland) has a fantastic exhibition showcasing their recreation of a beer last brewed in the sixteenth century. They put serious scientific work into the effort and learned plenty in the process.

A typical historical beer of middling strength was around 5% alcohol by volume, similar to a modern-day lager.

The recipes, equipment, and techniques are straight from what was used at Dublin Castle in the late 1500s. Recreating them yielded some very interesting insights into what beer back then was really like, how strong it was, and what went into making it.
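
How strong is strong? Brewers typically estimate alcohol by volume from how far the specific gravity falls during fermentation, using a well-known rule of thumb. The gravity readings below are invented for illustration and are not FoodCult’s measurements:

```python
# A common homebrewing approximation for strength: ABV from the drop in
# specific gravity between the start and end of fermentation.
# The gravity numbers below are purely illustrative.

original_gravity = 1.048   # before fermentation (hypothetical)
final_gravity = 1.010      # after fermentation (hypothetical)

abv_percent = (original_gravity - final_gravity) * 131.25
print(f"Estimated strength: {abv_percent:.1f}% ABV")   # roughly 5%, lager territory
```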

Documentation from the era also provides cultural insight. Beer was often used as payment and provided a significant amount of dietary energy. Dublin Castle, by the way, consumed some 26,000 gallons per year.

In many ways, beer from back then would be pretty familiar today, but there are differences as well. Chief among them are the ingredients.

While the ingredients themselves are unsurprising in nature, it is in fact impossible to 100% recreate the beer from 1574 for a simple reason: these ingredients no longer exist as they did back then. Nevertheless, the team did an inspired job of getting as close as possible to the historical versions of barley, oats, hops, yeast, and even the water.

Continue reading “Making Beer Like It’s 1574, For Science And Heritage”

Sound And Water Make Weird Vibes In Microgravity

NASA astronaut [Don Pettit] shared a short video from an experiment he performed on the ISS back in 2012, demonstrating the effects of sound waves on water in space. Specifically, it shows what happens when a sphere of water containing an air bubble, perched atop a speaker cone, is subjected to a variety of acoustic waves.

The result is a set of visually striking patterns across different parts of the watery globe, depending on what kind of sound waves are being produced. It’s a neat visual effect, and there’s more where that came from.

[Don] experimented with music as well as plain tones, and found that cello music had a particularly interesting effect on the setup. Little drops of water would break off from inside the sphere and start moving around the inside of the air bubble when cello music was played. You can see this in action as part of episode 160 from SmarterEveryDay (cued up to 7:51) which itself is about exploring the phenomenon of how water droplets can appear to act in an almost hydrophobic way.

This isn’t the first time water and sound have collided in visually surprising ways. For example, check out the borderline optical illusion that comes from pouring water past a subwoofer emitting 24 Hz while the camera captures video at 24 frames per second.
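
That illusion is straight-up temporal aliasing: the camera samples the water’s 24 Hz wobble exactly once per cycle, so every frame catches it at the same phase. A quick sketch with illustrative numbers shows the effect:

```python
import math

# Why 24 Hz water in front of a 24 fps camera looks frozen: the camera samples
# the oscillation exactly one full period per frame, so every frame catches the
# water at the same point in its cycle. Values here are illustrative.

water_freq_hz = 24.0      # oscillation driven by the subwoofer
frame_rate_fps = 24.0     # camera frame rate

for frame in range(5):
    t = frame / frame_rate_fps
    phase = (water_freq_hz * t) % 1.0          # fraction of a cycle at capture time
    displacement = math.sin(2 * math.pi * phase)
    print(f"frame {frame}: phase {phase:.2f} of a cycle, displacement {displacement:+.2f}")

# Every frame lands on the same phase, so the stream appears to stand still.
# Nudge the frequency slightly off 24 Hz and the phase creeps a little each
# frame, making the water seem to flow in slow motion or even backwards.
```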

Make 3D Scenes With A Holodeck-Like Voice Interface

The voice interface for the holodeck in Star Trek had users create objects by saying things like “create a table” and “now make it a metal table” and so forth, all with immediate feedback. That kind of interface may have been pure fantasy at the time of airing, but with the advent of AI and LLMs (large language models), such a natural language interface is coming together almost by itself.

A fun demonstration of that is [Dominic Pajak]’s demo project called VoxelAstra. This is a WebXR demo that works both in the Meta Quest 3 VR headset (just go to the demo page in the headset’s web browser) as well as on desktop.

The catch is that since the program uses OpenAI APIs on the back end, one must provide a working OpenAI API key. Otherwise, the demo won’t be able to do anything. Providing one’s API key to someone’s web page isn’t terribly good security practice, but there’s also the option of running the demo locally.

Either way, once the demo is up and running, the user simply tells the system what to create. Just keep it simple. It’s a fun and educational demo more than anything, and it will try to do its work with primitive shapes like spheres, cubes, and cylinders. “Build a snowman” is suggested as a good starting point.
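
The heart of this kind of demo is surprisingly small: send the user’s request to a language model with instructions to answer only in a handful of primitives, then render whatever comes back. The Python sketch below is not VoxelAstra’s actual code (that’s a WebXR app running in the browser); the model name and JSON schema are arbitrary choices made for illustration, and it assumes the v1-style openai package with an OPENAI_API_KEY in the environment.

```python
# Minimal prompt-to-primitives sketch, NOT VoxelAstra's implementation.
# Requires the `openai` package (v1-style client) and an OPENAI_API_KEY.

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Turn the user's request into a 3D scene made only of spheres, cubes, and "
    "cylinders. Reply with a JSON object holding one key, 'objects', whose value "
    "is a list; each entry has 'shape', 'position' [x, y, z], 'size', and 'color'."
)

def build_scene(request: str) -> list:
    """Ask the model for a list of primitive shapes describing the request."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # arbitrary choice for this sketch
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)["objects"]

if __name__ == "__main__":
    scene = build_scene("Build a snowman")
    print(json.dumps(scene, indent=2))   # hand these primitives to your renderer
```

A demo like VoxelAstra would feed those primitives into its WebXR scene rather than printing them, but the prompt-to-primitives step is the interesting part.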

Intrigued by what you see and getting ideas of your own? WebXR can be a great way to give those ideas some life, and looking at how someone else did something similar is a fine way to begin. Check out another of [Dominic]’s WebXR projects: a simulated BBC Micro, in VR.

Corral Some Zippy Blue Flames Into 3D Printed Troughs

[Steve Mould] came across an interesting little phenomenon of blue flames zipping around a circular track. This led him down a bit of a rabbit hole about excitable media, ultimately leading him to optimize the shapes and come up with some pretty wild variations, which he shows off in a video (also embedded below).

After figuring out that the moving flame depended on combustion of fuel vapor in an environment that didn’t allow the whole surface to stay lit at once, [Steve] tried to optimize the design of 3D-printed channels and raceways to encourage this effect, and he came up with some pretty novel ones. The 3D models are here if you’d like to try them for yourself (we especially like the “figure eight” and “rays” models).
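
That burn-recover-reignite cycle is exactly what defines an excitable medium, and a toy model shows the same circulating wave. The Python sketch below is a simple Greenberg-Hastings-style cellular automaton on a ring of cells standing in for the circular track; it is not a combustion simulation, and the states and parameters are arbitrary.

```python
# Toy excitable medium on a ring: 'resting' cells have fuel vapor ready, an
# 'excited' cell is burning, and 'refractory' cells need time before they can
# reignite. The front keeps circulating because fresh resting cells always lie
# ahead of it and exhausted ones behind.

RESTING, EXCITED = 0, 1
REFRACTORY_STEPS = 4          # how long a cell stays unignitable after burning
N = 30                        # number of cells around the ring

def step(cells):
    new = []
    for i, c in enumerate(cells):
        left, right = cells[i - 1], cells[(i + 1) % len(cells)]
        if c == RESTING:
            # ignite if a neighbor is currently burning
            new.append(EXCITED if EXCITED in (left, right) else RESTING)
        elif c == EXCITED:
            new.append(EXCITED + 1)   # burnt out, start recovering
        else:
            # count down through the refractory states back to resting
            new.append(RESTING if c >= EXCITED + REFRACTORY_STEPS else c + 1)
    return new

cells = [RESTING] * N
cells[0] = EXCITED            # light one spot on the track
cells[-1] = EXCITED + 1       # make one neighbor refractory so a single front survives

for _ in range(15):
    print("".join("*" if c == EXCITED else ("." if c == RESTING else " ") for c in cells))
    cells = step(cells)
```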

The video is an excellent show-and-tell of everything [Steve] dove into, complete with plenty of demonstrations of harnessing this effect to create some nifty running flames. Check it out below, and if unintuitive physical effects are your thing, don’t miss [Steve]’s peeling apart of the turntable paradox.

Continue reading “Corral Some Zippy Blue Flames Into 3D Printed Troughs”

Train A GPT-2 LLM, Using Only Pure C Code

[Andrej Karpathy] recently released llm.c, a project that focuses on LLM training in pure C, once again showing that working with these tools isn’t necessarily reliant on sprawling development environments. GPT-2 may be older, but it is perfectly relevant: it’s the granddaddy of modern LLMs (large language models), with a clear line of heritage to more modern offerings.

LLMs are fantastically good at communicating despite not actually knowing what they are saying, and training them usually relies on the PyTorch deep learning library and its Python ecosystem. llm.c takes a simpler approach by implementing the neural network training algorithm for GPT-2 directly. The result is focused and surprisingly short: about a thousand lines of C in a single file. It is an elegant process that does the same thing the bigger, clunkier methods accomplish. It can run entirely on a CPU, or it can take advantage of GPU acceleration where available.
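
Stripped to its skeleton, training is just a loop of forward pass, loss, hand-derived backward pass, and parameter update, and that is the structure llm.c writes out explicitly for GPT-2. As a far smaller illustration of the same shape (a single softmax layer in Python with NumPy, nothing like a transformer), here is that loop with the gradients spelled out by hand:

```python
# The shape of what llm.c spells out in C -- forward, loss, backward, update --
# shown on a deliberately tiny model (one softmax layer), not on GPT-2 itself.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 256, 16, 4
X = rng.normal(size=(n_samples, n_features))
W_true = rng.normal(size=(n_features, n_classes))
y = (X @ W_true).argmax(axis=1)          # labels the model can actually learn

W = np.zeros((n_features, n_classes))    # parameters to train
b = np.zeros(n_classes)
lr = 0.5

for step in range(200):
    # forward: logits -> softmax probabilities -> cross-entropy loss
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(n_samples), y]).mean()

    # backward: gradient of the loss w.r.t. logits, then W and b
    dlogits = probs.copy()
    dlogits[np.arange(n_samples), y] -= 1.0
    dlogits /= n_samples
    dW = X.T @ dlogits
    db = dlogits.sum(axis=0)

    # update: plain gradient descent here; llm.c uses AdamW, same idea with extra state
    W -= lr * dW
    b -= lr * db

    if step % 50 == 0:
        print(f"step {step:3d}  loss {loss:.3f}")
```

llm.c does the same dance with transformer layers and an AdamW update rule, all written out in C rather than leaning on a framework.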

This isn’t the first time [Andrej Karpathy] has bent his considerable skills and understanding towards boiling down these sorts of concepts into bare-bones implementations. We previously covered a project of his that is the “hello world” of GPT, a tiny model that predicts the next bit in a given sequence and offers low-level insight into just how GPT (generative pre-trained transformer) models work.
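
To get a feel for the task that tiny model solves, next-bit prediction can be stripped down even further to plain counting: remember which bit usually follows each short context and guess that. The snippet below (over a made-up bit string) is only that baseline, nowhere near a GPT, but it frames the problem the same way:

```python
# Count how often each bit follows each 3-bit context, then predict the more
# common continuation. This is just the prediction task, not the GPT itself.

from collections import Counter

sequence = "0110101101101011011010110110"   # made-up training data
context_len = 3

counts = {}
for i in range(len(sequence) - context_len):
    context = sequence[i:i + context_len]
    next_bit = sequence[i + context_len]
    counts.setdefault(context, Counter())[next_bit] += 1

def predict(context: str) -> str:
    """Return the bit most often seen after this context in the training data."""
    seen = counts.get(context)
    return seen.most_common(1)[0][0] if seen else "?"

print(predict("011"))   # what usually follows '011' in the sequence above?
```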