Sometimes it is hard to probe a circuit and then look over at the meter. [Electronoobs] decided to fix that problem by making a Google Glass-like multimeter using an OLED screen and Bluetooth module.
The custom PCB doesn’t have many surprises. A small board carries a controller, a battery charger, a display, and a Bluetooth module. One thing he did forget was a power switch, though, so the board is always on unless you add one externally.
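The data path is simple: the meter streams readings over the Bluetooth serial link, and the controller parses each line before drawing it on the OLED. The wire format below is purely hypothetical (the article doesn’t specify one); this is a minimal host-side sketch assuming a `value unit mode` line such as `3.742 V DC`:

```python
def parse_reading(line: str):
    """Parse a hypothetical 'value unit mode' line, e.g. '3.742 V DC'.

    Returns (value, unit, mode) on success, or None for malformed input,
    so a garbled Bluetooth frame never crashes the display loop.
    """
    parts = line.strip().split()
    if len(parts) != 3:
        return None
    try:
        value = float(parts[0])
    except ValueError:
        return None
    unit, mode = parts[1], parts[2]
    return value, unit, mode
```

In firmware the same logic would run on the microcontroller before formatting the text for the OLED; rejecting malformed frames matters because UART-over-Bluetooth links routinely drop or mangle bytes.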
Of all our senses, the sense of touch is perhaps the most underappreciated. We understand and accept the tragedy that attends loss of vision or hearing, and the impact on quality of life resulting from the loss of olfactory and gustatory sensations can be severe. But for some reason, we don’t give a second thought to our sense of touch, which is strange given that we are literally covered with touch sensors. That’s a shame, since touch can reveal so much about the world around us, and our emotional well-being is so tightly tied to the tactile senses that those deprived of them in infancy can be scarred for life.
Haptics is the technology of tactile feedback, which seeks to leverage the human need for tactile experiences to enrich the experience of dealing with the technological world. Haptic feedback devices are everywhere now, and have gone far beyond the simple off-balance motor used since the days when a pager was a status symbol. To help us sort out what’s new in the haptics world, Tim and Kyle from Nanoport Technology will stop by the Hack Chat. Nanoport is a company on the cutting edge of haptics, so they’ll have a wealth of details about what haptics are, where the field is going, and how you can start thinking about making touch a part of your projects.
Augmented reality (AR) technology hasn’t enjoyed the same amount of attention as VR, and seriously lags in terms of open source development and accessibility. Frustrated by this, [Arnaud Atchimon] created CheApR, an open source, low cost AR headset that anyone can build at home and use as a platform for further development.
[Arnaud] was impressed by the Tilt Five AR goggles, but the price of this cutting-edge hardware simply put it out of reach of most people. Instead, he designed and built his own around a 3D printed frame, ESP32, cheap LCDs, and lenses from a pair of sunglasses. The electronics are packed horizontally in the top of the frame, with the displays pointed down into a pair of angled mirrors, which reflect the image onto the sunglasses lenses and into the user’s eyes. [Arnaud] tested a number of different lenses and found that a thin lens with a slight curve worked best. The ESP32 doesn’t actually run the main software, it just handles displaying the images on the LCDs. The images are sent from a computer running software written in Processing. Besides just displaying images, the software can also integrate inputs from an MPU6050 IMU and ESP32 camera module mounted on the goggles. This allows the images to shift perspective as the goggles move, and recognize faces and AR markers in the environment.
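The perspective-shift step reduces to mapping IMU rotation angles to a pixel offset of the rendered overlay. The actual CheApR software is written in Processing; the Python sketch below is only an illustration of the idea, and the scale factor, display size, and clamping behavior are assumptions, not taken from the project:

```python
def overlay_offset(yaw_deg: float, pitch_deg: float,
                   px_per_deg: float = 10.0,
                   width: int = 320, height: int = 240):
    """Map head rotation (degrees) to a pixel shift of the overlay.

    Positive yaw slides the image right; positive pitch slides it up
    (hence the negated pitch term, since screen y grows downward).
    The shift is clamped to half the display so the overlay never
    leaves the visible area entirely.
    """
    dx = max(-width // 2, min(width // 2, round(yaw_deg * px_per_deg)))
    dy = max(-height // 2, min(height // 2, round(-pitch_deg * px_per_deg)))
    return dx, dy
```

A real implementation would also filter the raw MPU6050 gyro/accelerometer data (e.g. with a complementary filter) before using the angles, since unfiltered gyro integration drifts within seconds.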
In a sign of the times, the Federal Communications Commission has officially signed off on remote testing sessions for amateur radio licensing in the United States. Testing in the US is through the Volunteer Examiner Coordinator program, which allows teams of at least three Volunteer Examiners to set up in-person testing sessions where they proctor amateur radio licensing exams. The VEs take their jobs very seriously and take pride in offering exam sessions on a regular schedule, so when social distancing rules made their usual public testing venues difficult to access, many of them quickly pivoted to remote testing using teleconferencing applications. Here’s hoping that more VEs begin offering remote testing sessions.
Another aspect of life changed by COVID-19 and social distancing rules has been the simple pleasure of a trip to the museum. And for the museums themselves, the lack of visitors can be catastrophic, both in terms of fulfilling their educational and research missions and through the lack of income that results. To keep the flame alive in a fun way, Katrina Bowen from The Centre for Computing History in Cambridge has recreated her museum in loving detail in Animal Crossing: New Horizons. For being limited to what’s available in the game, Katrina did a remarkable job on the virtual museum; we especially like the Megaprocessor wallpaper. She even managed to work in that staple last stop of every museum, the gift shop.
To the surprise of few, “spatial computing” startup Magic Leap has announced that it is laying off half its workforce as it charts a new course. The company, which attracted billions in funding based on its virtual retinal display technology, apparently couldn’t sell enough of its Magic Leap One headsets to pay the bills. The company is pivoting to industrial users, which honestly seems like a better application for its retinal display technology than the consumer or gaming markets.
And finally, as if 2020 hasn’t been weird enough already, the Department of Defense has officially released videos of what it calls “unidentified aerial phenomena.” These videos, taken from the head-up displays of US Navy fighter jets, had previously been obtained by private parties and released to the public. Recorded between 2004 and 2015, the videos appear to show objects that are capable of extremely high-speed flight and tight maneuvers close to the surface of the ocean. We find the timing of the release suspicious, almost as if the videos are intended to serve as a distraction from the disturbing news of the day. We want to believe we’re not alone, but these videos don’t do much to help.
For most of human history, the way to get custom shapes and colors onto one’s retinas was to draw them on a cave wall, or a piece of parchment, or on paper. Later on, we invented electronic displays and used them for everything from televisions to computers, even toying with displays that gave the illusion of a 3D shape existing in front of us. Yet what if one could just skip this surface and draw directly onto our retinas?
Admittedly, the thought of aiming lasers directly at the layer of cells at the back of our eyeballs — the delicate organs which allow us to see — likely does not inspire the same response as the thought of sitting in front of a 4K, 27″ gaming display to look at the same content. Yet effectively we’d have the same photons painting the same image on our retinas. And what if it could be an 8K display, cinema-sized? Or maybe a HUD overlay instead, like in video games?
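A back-of-envelope calculation shows why “8K, cinema-sized” is roughly where the payoff tops out. Assuming the common rule of thumb of about one arcminute of foveal acuity (an assumption, and acuity falls off sharply away from the fovea), the number of pixels the eye can actually distinguish across a given field of view is:

```python
def resolvable_pixels(fov_deg: float, acuity_arcmin: float = 1.0) -> int:
    """Pixels the eye can distinguish across a field of view.

    Uses the rule-of-thumb ~1 arcminute foveal acuity: 60 resolvable
    points per degree of visual angle at the default acuity.
    """
    return round(fov_deg * 60 / acuity_arcmin)
```

Across a 120° horizontal field of view this gives about 7,200 distinguishable points, close to the 7,680 horizontal pixels of an 8K display; so a retinal projector filling most of the visual field at 8K-equivalent resolution would be near the limit of what the eye can use.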
In many ways, this concept of virtual retinal displays, as they are called, seems almost too much like science fiction, and yet it’s been the subject of decades of research, with increasingly sophisticated technologies bringing it closer to an everyday reality. Will we be ditching our displays and TVs for this technology any time soon?
“Know your enemy” is the essence of one of the most famous quotes from [Sun Tzu]’s Art of War, and it’s as true now as it was 2,500 years ago. It also applies far beyond the martial arts, and as the world squares off for battle against COVID-19, it’s especially important to know the enemy: the novel coronavirus now dubbed SARS-CoV-2. And now, augmented reality technology is giving a boost to the search for fatal flaws in the virus that can be exploited to defeat it.
The video below is a fascinating mix of 3D models of viral structures, like the external spike glycoproteins that give coronaviruses their characteristic crown appearance, layered onto live video of [Tom Goddard], a programmer/analyst at the University of California San Francisco. The tool he’s using is called ChimeraX, a molecular visualization program developed by him and his colleagues. He actually refers to this setup as “mixed reality” rather than “augmented reality”, to stress the fact that AR tends to be an experience that only the user can fully appreciate, whereas this system allows him to act as a guide on a virtual tour of the smallest of structures.
Using a depth-sensing camera and a VR headset, [Tom] is able to manipulate 3D models of the SARS virus — we don’t yet have full 3D structure data for the novel coronavirus proteins — to show us exactly how SARS binds to its receptor, angiotensin-converting enzyme-2 (ACE-2), a protein expressed on the cell surfaces of many different tissue types. It’s fascinating to see how the binding domain of the spike reaches out to latch onto ACE-2 to begin the process of invading a cell; it’s also heartening to watch [Tom]’s simulation of how the immune system responds to and blocks that binding.
It looks like ChimeraX and similar AR systems are going to prove to be powerful tools in the fight against not just COVID-19, but in all kinds of infectious diseases. Hats off to [Tom] and his team for making them available to researchers free of charge.
Thus far, the vast majority of human photographic output has been two-dimensional. 3D displays have come and gone in various forms over the years, but as technology progresses, we’re beginning to see more and more immersive display technologies. Of course, to use these displays requires content, and capturing that content in three dimensions requires special tools and techniques. Kim Pimmel came down to Hackaday Superconference to give us a talk on the current state of the art in advanced AR and VR camera technologies.
Kim has plenty of experience with advanced displays, with an impressive resume in the field. Having worked on Microsoft’s HoloLens, he now leads Adobe’s Aero project, an AR app aimed at creatives. Kim’s journey began at a young age, first experimenting with his family’s Yashica 35mm camera, where he discovered a love for capturing images. Over the years, he experimented with a wide variety of gear, receiving a Canon DSLR from his wife as a gift, and later tinkering with the Stereo Realist 35mm 3D camera. The latter led to Kim’s growing obsession with three-dimensional capture techniques.
Through his work in the field of AR and VR displays, Kim became familiar with the combination of the Ricoh Theta S 360 degree camera and the Oculus Rift headset. This allowed users to essentially sit inside a photo sphere, and see the image around them in three dimensions. While this was compelling, Kim noted that a lot of 360 degree content has issues with framing. There’s no way to guide the observer towards the part of the image you want them to see.