A Look At Sega’s 8-Bit 3D Glasses

From around 2012 onwards, there was a 3D viewing and VR renaissance in the entertainment industry. That hardware has grown in popularity, even if it’s not yet mainstream. However, 3D tech goes back much further, as [Nicole] shows us with a look at Sega’s ancient 8-bit 3D glasses [via Adafruit].

[Nicole]’s pair of Sega shutter glasses are battered and bruised, but she notes more modern versions are available using the same basic idea. The technology is based on liquid-crystal shutters, one for each eye. By showing the left and right eyes different images, it’s possible to create a 3D-vision effect even with very limited display hardware.

The glasses can be plugged directly into a Japanese Sega Master System, which hails from the mid-1980s. The console sends out AC signals to trigger the liquid-crystal shutters via a humble 3.5mm TRS jack. Games like Space Harrier 3D, which were written to use the glasses, effectively run at a half-speed refresh rate, because the 60 Hz NTSC (or 50 Hz PAL) screen refresh is split in half to serve each eye. Unfortunately, the glasses don’t work on modern LCD screens, as their inherent display lag throws off the timing of the pulses the console sends to the glasses.
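For the curious, the field-sequential trick is easy to rough out in a few lines of Python. This is purely illustrative, not real console code: alternate frames go to alternate eyes, so each eye ends up with half the field rate.

```python
# Purely illustrative sketch of field-sequential 3D: alternate frames are
# assigned to alternate eyes, so each eye sees half the display's field rate.
FIELD_RATE_HZ = 60  # NTSC; 50 for PAL

def shutter_schedule(num_frames):
    """Yield (frame, eye) pairs; the open shutter alternates every frame."""
    for frame in range(num_frames):
        yield frame, "left" if frame % 2 == 0 else "right"

for frame, eye in shutter_schedule(6):
    print(f"frame {frame}: draw {eye}-eye image, open {eye} shutter")

print(f"per-eye refresh: {FIELD_RATE_HZ / 2:.0f} Hz")  # 30 Hz on NTSC
```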

It’s a neat look at an old bit of display tech that had a small resurgence with 3DTVs in the 2010s. By and large, it seems like humans just aren’t that into 3D, at least short of a full-VR experience. Meanwhile, if you’re wondering what 8-bit 3D looked like, we’ve got a 3D video (!) after the break.

Continue reading “A Look At Sega’s 8-Bit 3D Glasses”

Water Solves Mazes, Why Not Electrons?

A few weeks ago, we looked at a video showing water “solving” a maze. [AlphaPhoenix] saw the same video, and it made him think about electrons “finding the path of least resistance.” So can you solve a maze with foil, a laser cutter, a power supply, and some pepper? Apparently so, as you can see in the video below.

At first, he duplicated the water maze, but without the effect of gravity. It was hard to see the water flow, so pepper flakes made the motion of the liquid quite obvious. The real fun, though, started when he cut the maze out of foil and began running electrons across it.
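If you’d rather simulate than cut foil, the same idea can be roughed out numerically. The sketch below is our own toy model, not [AlphaPhoenix]’s setup: it treats the maze as a grid of conductive cells, pins the entrance at 1 V and the exit at 0 V, and relaxes the potential field. Current spreads along every open route, but the steepest voltage drop traces the path through.

```python
import numpy as np

maze = np.array([           # toy layout: 1 = conductive foil, 0 = cut-away wall
    [1, 1, 0, 1, 1],
    [0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
])
entrance, exit_ = (0, 0), (4, 4)
v = np.zeros(maze.shape)    # voltage at every cell

for _ in range(3000):       # Jacobi relaxation of the Laplace equation
    new_v = v.copy()
    for r in range(maze.shape[0]):
        for c in range(maze.shape[1]):
            if maze[r, c] == 0 or (r, c) in (entrance, exit_):
                continue
            nbrs = [v[rr, cc] for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                    if 0 <= rr < maze.shape[0] and 0 <= cc < maze.shape[1] and maze[rr, cc]]
            new_v[r, c] = sum(nbrs) / len(nbrs)     # average of conductive neighbors
    new_v[entrance], new_v[exit_] = 1.0, 0.0        # pin the terminals
    v = new_v

np.set_printoptions(precision=2)
print(v)  # the steepest drop from 1.0 toward 0.0 traces the route through the maze
```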

Continue reading “Water Solves Mazes, Why Not Electrons?”

Hacking An Apartment Garage Door With New Remotes

[Old Alaska] had a problem. He needed a second remote for his apartment garage door, but was quoted a fee in the hundreds of dollars for the trouble of sourcing and programming another unit. Realizing this was a rip-off given the cheap hardware involved, he decided to whip up his own sneaky solution instead.

It’s a simple hack, cheap and functional. An RF-activated relay with two remotes was sourced online for the princely sum of $8. [Old Alaska] then headed down to the equipment cabinet in the garage, opening the lock with the side of his own car key. He then wired the relay in parallel with the existing manual pushbutton for activating the garage door.

Sometimes, a hack doesn’t have to be complicated to be useful. Many of us might have jumped straight to trying to capture and emulate the existing remote’s radio signals. There was really no need. With physical access, [Old Alaska] was able to simply wire in his own remote entry setup himself.

We’ve seen similar hacks before, albeit achieved with SIGINT methods instead. Video after the break.

Continue reading “Hacking An Apartment Garage Door With New Remotes”

A RISC-V Supercluster For Very Low Cost

ARM continues to make inroads in the personal computing space thanks to its more modern, streamlined instruction set architecture (ISA) and its reduced power demands compared to x86 machines, but a big reason it keeps spreading is how easy it is to get a license to make chips using the ISA. It’s still not a fully open-source instruction set, though, so if you want something even more accessible than ARM, you’ll need to find something like these chips running the fully open-source RISC-V ISA and possibly put them to work in a custom supercluster.

[bitluni] recently acquired a large number of CH32V003 microcontrollers and managed to configure them all to work together in a cluster. The entire array of chips costs only $2 (not including all of the other components attached to the board), so a cluster of arbitrary size is well within reach. [bitluni] built a four-layer PCB for this project with an 8-bit bus so the microcontrollers can communicate with each other. Each chip has its own ADC and I/O that are wired to a set of GPIO pins on the sides of the board. The build is rounded out with a USB interface for programming and power.

There were a few quirks to get this supercluster up and running, including some issues with the way the reset and debug pins work on these specific microcontrollers. With bugs like these out of the way, the entire cluster is up and running, and [bitluni] hints that his design could easily be interfaced with even larger RISC-V superclusters. As for a use for this build, sometimes clusters like these are built just to build them, but since the I/O and ADCs are accessible, in theory this cluster could do anything a larger microcontroller could, only at a much lower price.

Continue reading “A RISC-V Supercluster For Very Low Cost”

New Tool Helps Create Laser-Cut Doom Maps

Doom has a larger cultural footprint than the vast majority of video games ever made. That inspired [Theor] to see if it was possible to laser-cut some of the game’s maps to create a real-world model of those famous original levels.

Level data was extracted from the game’s original WAD data files using code written in Rust. Maps are described by multiple “lumps” within the WAD file format, each containing information on vertexes, walls, and floors. This data was scraped and converted into SVG files suitable for laser cutting. [Theor] then built a visualizer that could display what a stacked-up, laser-cut map would look like in 3D, to verify everything worked correctly. With that done, the map could be laser cut without worries that it would come out a jumbled, janky mess.
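[Theor]’s extractor is written in Rust, but the lump-parsing step is simple enough to rough out in Python. The sketch below is a simplified illustration, not the project’s actual code: it pulls the VERTEXES and LINEDEFS lumps for one map out of a WAD and writes the walls out as SVG line segments.

```python
import struct
import sys

def read_wad(path):
    """Return a list of (lump_name, lump_bytes) in directory order."""
    with open(path, "rb") as f:
        data = f.read()
    magic, numlumps, diroffs = struct.unpack_from("<4sii", data, 0)
    assert magic in (b"IWAD", b"PWAD"), "not a WAD file"
    lumps = []
    for i in range(numlumps):
        filepos, size, name = struct.unpack_from("<ii8s", data, diroffs + 16 * i)
        lumps.append((name.rstrip(b"\0").decode("ascii"), data[filepos:filepos + size]))
    return lumps

def map_to_svg(wad_path, map_name, out_path):
    lumps = read_wad(wad_path)
    start = next(i for i, (name, _) in enumerate(lumps) if name == map_name)
    m = dict(lumps[start + 1:start + 11])    # THINGS..BLOCKMAP follow the map marker
    verts = [struct.unpack_from("<hh", m["VERTEXES"], o)
             for o in range(0, len(m["VERTEXES"]), 4)]
    lines = [struct.unpack_from("<HH", m["LINEDEFS"], o)   # first two fields: start/end vertex
             for o in range(0, len(m["LINEDEFS"]), 14)]    # Doom-format linedefs are 14 bytes
    xs = [x for x, _ in verts]
    ys = [-y for _, y in verts]                            # flip Y: Doom's map Y axis points up
    svg = [f'<svg xmlns="http://www.w3.org/2000/svg" '
           f'viewBox="{min(xs)} {min(ys)} {max(xs) - min(xs)} {max(ys) - min(ys)}">']
    for v1, v2 in lines:
        (x1, y1), (x2, y2) = verts[v1], verts[v2]
        svg.append(f'<line x1="{x1}" y1="{-y1}" x2="{x2}" y2="{-y2}" '
                   'stroke="black" stroke-width="4"/>')
    svg.append("</svg>")
    with open(out_path, "w") as f:
        f.write("\n".join(svg))

if __name__ == "__main__":
    map_to_svg(sys.argv[1], sys.argv[2], sys.argv[3])   # e.g. doom1.wad E1M1 e1m1.svg
```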

[Theor] kept the finished product simple, creating the map as a stack of blue acrylic pieces. We can imagine this tool being perfect for creating a high-quality diorama, though, with some work done to paint the map to match what the player sees in-game. If you happen to take that approach, don’t hesitate to drop a note on the tips line!

Last Chance To Re-engineer Education For The 2023 Hackaday Prize

The first round of the 2023 Hackaday Prize closes next Tuesday, April 25th. If you’ve got an educational project – whether that’s a robotics technique you just need to share, or an instructional radio build – you’ve got this weekend left to get your project into shape, whip up a Hackaday.io page in support, and enter. The top ten projects each get a $500 award, and a chance to win the big prizes in the final round. You want to get your project in now.

We’ve already seen some great entries in this first round of the Prize. Ranging from a trainer robot for FIRST Robotics teams, through a complete learn-electronics kit on a PCB and building radios in high schools, all the way to an LED-and-lightpipe map to help teachers and students with their geography lessons, we’ve got a broad range of educational projects so far.

But there is still room for your project! With the deadline closing in, your best bet at the $500 prize money may mean burning a bit of the midnight oil this weekend, but Hackaday glory awaits those who do.

Peering Down Into Talking Ant Hill

Watching an anthill brings an air of fascination. Thousands of ants move about and communicate with other ants as they work towards a goal as a collective whole. We humans project a complex inner world onto each of these tiny creatures to drive the narrative. But what if we could peer down into a miniature world and the ants spoke English? (PDF whitepaper)

Researchers at Stanford University and Google Research have released a paper about simulating human behavior using large language model (LLM) powered agents. The simulation has a few dozen agents that can move around a small town, run errands, and communicate with each other. Each agent has a short description to help provide context to the LLM. In addition, agents have memories of objects, other agents, and observations that they can retrieve, which allows them to create a plan for their day. The memory is a time-stamped text stream that the agent reflects on, deciding what is important. The agent can also re-plan on the fly and figure out what it wants to do next.
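To get a feel for the architecture, here’s a toy Python sketch of the memory stream and retrieval step. It’s heavily simplified: the paper scores importance with the LLM and relevance with embeddings, while this version uses hand-set importance and keyword overlap, and the memory texts are just made-up examples.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: float                       # 0..1, how notable the event is
    created: float = field(default_factory=time.time)

class MemoryStream:
    def __init__(self, decay=0.995):
        self.memories = []
        self.decay = decay

    def record(self, text, importance):
        self.memories.append(Memory(text, importance))

    def retrieve(self, query, k=3):
        """Rank memories by recency + importance + relevance and return the top k."""
        now, q = time.time(), set(query.lower().split())

        def score(m):
            recency = self.decay ** ((now - m.created) / 3600.0)    # decays per hour
            relevance = len(q & set(m.text.lower().split())) / max(len(q), 1)
            return recency + m.importance + relevance

        return sorted(self.memories, key=score, reverse=True)[:k]

stream = MemoryStream()
stream.record("Isabella is planning a Valentine's Day party at Hobbs Cafe", 0.9)
stream.record("bought groceries at the market", 0.2)
stream.record("Maria said she will invite Klaus to the party", 0.6)
for m in stream.retrieve("who is coming to the Valentine's Day party", k=2):
    print(m.text)   # the party-related memories outrank the grocery run
```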

The question is, does the simulation seem life-like? One fascinating example: the paper’s authors created one agent (Isabella) with the intention of hosting a Valentine’s Day party, and included no other information. Yet several agents turn up at her house later in the day to party, because Isabella invited friends, and those agents in turn invited others.

A web-accessible demo replays recorded data from an earlier run, so it doesn’t showcase the powers a user can exert on the world when running live, where thoughts and suggestions can be issued to an agent to steer its actions. You can, however, pause the replay to view the conversations between agents. Overall, it is incredible how life-like the simulation can be. The language of the conversations is quite formal, and running the simulation burns through a significant amount of computing power. Perhaps there could be a subconscious layer where certain behaviors or observations are coded into the agent instead of querying the LLM for every little thing (which sort of sounds like what people do).

There’s been an exciting trend of combining LLMs with a form of backing store, like combining Wolfram Alpha with ChatGPT. Thanks [Abe] for sending this one in!