Star Trackers: Telling Up From Down In Any Space

Keeping track of position and orientation is crucial in a lot of situations. On Earth it’s usually relatively straightforward, with systems having been developed over the centuries that allow one to get at least a rough fix on one’s position on this planet. For a satellite out in space, however, it’s harder. How does it keep its communications dish pointed towards Earth?

The stars are an obvious orientation point. The Attitude and Articulation Control Subsystem (AACS) on the Voyager 1 and 2 space probes has the unenviable task of keeping each spacecraft’s communication dish aligned precisely with a communications dish back on Earth, which from deep space is an incomprehensibly tiny target.
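How does a star tracker turn star sightings into an attitude fix? As a rough illustration, here’s the classic TRIAD algorithm in Python, which recovers a rigid rotation from just two star sightings. This is a textbook sketch with made-up vectors, not Voyager’s actual AACS code:

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """Classic TRIAD attitude determination. b1, b2: unit vectors to
    two stars as seen by the tracker's camera (body frame); r1, r2:
    the same stars' directions from an onboard catalog (inertial
    frame). Returns the rotation matrix from inertial to body."""
    def basis(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 = t2 / np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack((t1, t2, t3))
    return basis(b1, b2) @ basis(r1, r2).T

# Hypothetical sightings: catalog directions (r1, r2) and where the
# camera actually sees those stars (b1, b2).
r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
b1, b2 = np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])
A = triad(b1, b2, r1, r2)
print(A @ r1)  # ~[0, 0, 1]: catalog direction mapped onto the sighting
```

Real trackers match dozens of stars against an onboard catalog and filter the estimate over time, but the core idea is the same: compare where the stars appear in the camera frame with where the catalog says they should be.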

Back on Earth, the star tracker concept has become quite popular among photographers who image the night skies. Even in your living room, VR systems rely on knowing the position of the user’s body and any peripherals in space. In this article we’ll take a look at the history and current applications of this type of position tracking. Continue reading “Star Trackers: Telling Up From Down In Any Space”

Evolution Of A Backpack VR System

Persistence is what a hacker needs to make it to their goal. That’s exactly what it took for [Erik] to build his untethered VR backpack system.

Starting way back in the spring of 2019, [Erik] began working on an untethered VR system. Sure, the Oculus Quest was coming out, but it wouldn’t be compatible with the game library of PC-based systems. [Erik] wanted the best of both worlds, so he set out to build a backpack that carries a computer powerful enough to drive the Rift S.

The initial system used a cut-up backpack, an HP mini PC with an external Nvidia GTX 1060 GPU, and a basic DC-DC converter. The result? Just about nothing worked: the HP’s boot process didn’t play well with an external GPU.

[Erik] went through several iterations of this project. He switched over to a standard PC motherboard and tried a few different DC-DC converters. He settled on a device from HDPLEX rated at 200 watts continuous. The converter plugs directly into a standard 24-pin ATX motherboard power connector and isn’t much larger than the connector itself.

The old backpack with its added padding and wood frame gave way to a Zotac VR Go backpack. Only the straps and frame of the Zotac are used, with [Erik]’s custom hardware mounted on plywood and 3D-printed parts. The outer frame is aluminum, with acrylic panels.

Power comes from 7000 mAh LiFe batteries, with each pack providing an hour of runtime. The backpack can hold two packs though, so wiring them up in parallel should double that runtime.
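As a quick sanity check on those numbers, here’s a back-of-the-envelope sketch. The pack voltage is our assumption (a 4S LiFePO4 pack sits around a nominal 12.8 V); the build log only quotes the capacity and runtime:

```python
# Back-of-the-envelope runtime check. The 12.8 V figure is an
# assumption (typical 4S LiFePO4 nominal voltage); the build log
# only gives the 7000 mAh capacity and the ~1 hour observed runtime.
capacity_ah = 7.0                            # one pack, 7000 mAh
pack_voltage_v = 12.8                        # assumed nominal voltage
energy_wh = capacity_ah * pack_voltage_v     # ~90 Wh per pack

observed_runtime_h = 1.0
avg_draw_w = energy_wh / observed_runtime_h  # implies ~90 W average draw

packs = 2
print(f"~{avg_draw_w:.0f} W average draw; "
      f"~{packs * energy_wh / avg_draw_w:.0f} h on two packs")
```

If those assumptions hold, the implied ~90 W average draw also leaves the 200 W HDPLEX converter comfortable headroom for GPU load spikes.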

We have to say this is an extremely well-documented build. [Erik] explains how he chose each component, along with the advantages (and pitfalls) of each choice. One example is the RAM: he picked DDR4 with a higher spec than he needed, just so he could undervolt it for longer runtimes.

Not everything in VR is fun and games though – you can ditch that monitor and go with a VR desktop.

Esper Makes Virtual Reality From Live Reality

There’s a scene in Blade Runner where Deckard puts a photograph in a magical machine that lets him zoom and enhance without limit, and even see around obstacles. In today’s climate, this is starting to seem more plausible, what with all the cameras everywhere. [Jasper van Loenen] explores this concept in Esper, a technological art installation he created in Seoul, Korea during an artist residency.

Esper is a two-part piece that turns virtual reality on its head by showing actual reality in VR. It covers two adjoining rooms, one to record reality, and the other for real-time virtual viewing on headsets. The first is outfitted with 60 ESP32 cameras on custom mounts, all pointing in different directions from various perches and ceiling drops. [Jasper] used an Android app based on openFrameworks to map the cameras’ locations in 3D space. The room next door is so empty, it’s even devoid of FOMO. You don’t want to miss this one, so check it out after the break.

Recreating sci-fi props is all fun and games until the dystopia arrives. Then again, the fact that we can all easily access 70,000 or so insecure surveillance cameras is a pretty good start.

Continue reading “Esper Makes Virtual Reality From Live Reality”

Building Cameras For The Immersive Future

Thus far, the vast majority of human photographic output has been two-dimensional. 3D displays have come and gone in various forms over the years, but as technology progresses, we’re beginning to see more and more immersive display technologies. Of course, these displays need content, and capturing that content in three dimensions calls for special tools and techniques. Kim Pimmel came down to Hackaday Superconference to give us a talk on the current state of the art in advanced AR and VR camera technologies.

[Kim]’s interest in light painting led him to explore volumetric as well as 2D concepts.
[Kim] has plenty of experience with advanced displays, with an impressive resume in the field. Having worked on Microsoft’s HoloLens, he now leads Adobe’s Aero project, an AR app aimed at creatives. [Kim]’s journey began at a young age, first experimenting with his family’s Yashica 35mm camera, where he discovered a love for capturing images. Over the years he tried a wide variety of gear, receiving a Canon DSLR as a gift from his wife and later tinkering with the Stereo Realist 35mm 3D camera. The latter fed [Kim]’s growing obsession with three-dimensional capture techniques.

Through his work in the field of AR and VR displays, [Kim] became familiar with the combination of the Ricoh Theta S 360-degree camera and the Oculus Rift headset. This allowed users to essentially sit inside a photo sphere and see the image around them in three dimensions. While this was compelling, [Kim] noted that a lot of 360-degree content has issues with framing: there’s no way to guide the observer towards the part of the image you want them to see.
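Sitting inside a photo sphere boils down to mapping each gaze direction to a pixel of the equirectangular image the camera produces. Here’s a minimal sketch of that lookup, assuming the usual equirectangular convention (this is generic projection math, not code from the talk):

```python
import numpy as np

def sample_direction(pano, direction):
    """Look up the pixel of an equirectangular 360 photo that lies
    along a given 3D view direction (x right, y up, z forward)."""
    x, y, z = direction / np.linalg.norm(direction)
    h, w = pano.shape[:2]
    lon = np.arctan2(x, z)              # -pi..pi around the viewer
    lat = np.arcsin(y)                  # -pi/2..pi/2 up and down
    u = int((lon / (2 * np.pi) + 0.5) * (w - 1))
    v = int((0.5 - lat / np.pi) * (h - 1))
    return pano[v, u]

# Looking straight ahead samples the center of the panorama.
pano = np.zeros((960, 1920, 3), dtype=np.uint8)
print(sample_direction(pano, np.array([0.0, 0.0, 1.0])))
```

Every direction maps to some pixel, which is exactly the framing problem: the viewer can look anywhere, so nothing forces them to look where the photographer intended.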

Continue reading “Building Cameras For The Immersive Future”

Tricking The Brain Into Seeing Boosted Contrast In Stereo Imagery

Last year a team of researchers published a paper detailing a method of boosting visual contrast and image quality in stereoscopic displays. The method, called Dichoptic Contrast Enhancement (DiCE), works by showing each eye a slightly different version of an image, tricking the brain into fusing the two views in a way that boosts perceived image quality. It only works on stereoscopic displays like VR headsets, but it’s computationally simple and easily implemented. The trick could offset some of the limitations of headset displays, for example making them appear capable of deeper contrast than they can physically deliver. That matters, because higher contrast is generally perceived as more realistic and three-dimensional, both important factors in VR headsets and other stereoscopic displays.

Stereoscopic vision works by having the brain fuse together what both eyes see, a process called binocular fusion. The small differences between what each eye sees mostly convey a sense of depth to us, but DiCE exploits some of the quirks of binocular fusion to trick the brain into perceiving enhanced contrast in the visuals. This perceived higher contrast in turn leads to a stronger sense of depth and better overall image quality.

Example of DiCE-processed images, showing each eye a different dynamic contrast range. The result is greater perceived contrast and image quality when the brain fuses the two together.

To pull off this trick, DiCE displays a different contrast level to each eye in a way designed to encourage the brain to fuse them together in a positive way. In short, using a separate and different dynamic contrast range for each eye yields an overall greater perceived contrast range in the fused image. That’s simple in theory, but in practice there were a number of problems to solve. Chief among them was the fact that if the difference between what each eye sees is too great, the result is discomfort due to binocular rivalry. The hard scientific work behind DiCE came from experimentally determining sweet spots, and pre-computing filters independent of viewer and content so that they could be applied in real time for a consistent result.
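The published filters were tuned experimentally, but the basic mechanism can be sketched in a few lines: tone-map the left and right views with different curves so each eye receives a different slice of the dynamic range. Here’s a toy illustration in Python; the simple gamma curves are stand-ins for the paper’s carefully derived filters, not the real thing:

```python
import numpy as np

def dice_pair(img, strength=0.3):
    """Toy dichoptic split: tone-map each eye's view with a different
    gamma curve so each eye sees a different slice of the dynamic
    range. These gammas are illustrative stand-ins, not the
    pre-computed filters from the DiCE paper."""
    img = np.clip(img, 0.0, 1.0)
    left = img ** (1.0 - strength)    # lifted shadows for one eye
    right = img ** (1.0 + strength)   # deepened shadows for the other
    return left, right

# A synthetic gradient standing in for one rendered frame.
frame = np.linspace(0.0, 1.0, 256).reshape(1, -1)
left_eye, right_eye = dice_pair(frame)
```

Keeping the per-eye difference modest is the crucial part: push the strength too far and you land in exactly the binocular rivalry territory the researchers worked to avoid.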

Things like this are reminders that we experience the world only through the filter of our senses, and our perception of reality has quirks that can be demonstrated by things like this project and other “sensory fusion” edge cases like the Thermal Grill Illusion, which we saw used as the basis for a replica of the Pain Box from Dune.

A short video overview of the method is embedded below, and a PDF of the publication is available for further reading. Want a more hands-on approach? The team has even made a DiCE plugin freely available in the Unity Asset Store.

Continue reading “Tricking The Brain Into Seeing Boosted Contrast In Stereo Imagery”

P-51 Cockpit Recreated With Help Of Local Makerspace

It’s surprisingly easy to misjudge tips that come into the Hackaday tip line. After filtering out the omnipresent spam, a quick scan of tip titles often forms an impression that turns out to be completely wrong. Such was the case with a recent tip that seemed from the subject line to be a flight simulator cockpit. The mental picture I had was of a model cockpit hooked to Flight Simulator or some other off-the-shelf flying game, many of which we’ve seen over the years.

I couldn’t have been more wrong about the project that [Grant Hobbs] undertook. His cockpit simulator turned out to be so much more than I thought, and after trading a few emails with him to get all the details, I felt like I had to share the series of hacks that led to the short video below, and the story of how he somehow managed to build the set despite having no previous experience with the usual tools of the trade.

Continue reading “P-51 Cockpit Recreated With Help Of Local Makerspace”