Here’s one that proves a hardware project can go beyond blinking LEDs and dumping massive chunks of data onto a serial console. Those practices are fine for some, but [dimtass] has found a more elegant hack for a more civilized age. His 3D Millennium Falcon model gets orientation data from an IMU acting as a USB HID device.
The hardware involved is an MPU6050 6-axis sensor interfaced with a Teensy 3.2 board. [dimtass] documents his approach to calibrating the IMU, going a step further by using a Python script to generate the offsets. We’ve advocated using Jupyter notebooks in the past, and this is a good example: Jupyter plots the data and visualizes the effect of the offsets in a second pass.
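The offset idea itself is simple: log the raw MPU6050 readings while the sensor sits perfectly still, average them, and subtract those averages from every later sample. Here is a minimal sketch of that step in Python, assuming the raw samples were already logged to a CSV with hypothetical column names ax, ay, az, gx, gy, gz ([dimtass]’s actual script and notebook will differ):

```python
# Hypothetical offset calculation: average raw MPU6050 readings captured
# while the sensor sits still, then subtract those means from later samples.
import csv

def compute_offsets(path, fields=("ax", "ay", "az", "gx", "gy", "gz")):
    sums = {f: 0.0 for f in fields}
    count = 0
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            for f in fields:
                sums[f] += float(row[f])
            count += 1
    # For a level, motionless sensor the gyro means and ax/ay should be ~0;
    # az reads ~1 g, so that axis needs its gravity bias handled separately.
    return {f: sums[f] / count for f in fields}

offsets = compute_offsets("stationary_log.csv")
print(offsets)  # apply as raw_value - offsets[field] on the second pass
```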
When in action, the Teensy reads IMU data and sends it over a USB RAW HID interface. For the uninitiated, HID transfers are more reliable than USB CDC (virtual serial port) transfers because they use smaller data chunks per event/transaction and usually don’t require special drivers. On the computer side, [dimtass] has written a small application that gets the IMU values over RAW HID and then provides them to the visualization application.
A 3D Millennium Falcon model is rendered in Unity, the popular game development engine. Even though Unity has an API, this particular approach is more OS-specific, relying on a shared-memory technique: the HID application writes to a file (/tmp/hid-shared-buffer), which Unity then reads to apply orientation changes to the rendered model.
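The host-side plumbing might look something like this rough Python sketch using the hidapi bindings. The vendor/product IDs shown are the usual Teensy RawHID defaults and the packed-float report layout is an assumption, not [dimtass]’s actual format:

```python
# Rough sketch of the host side: read a 64-byte RawHID report from the Teensy
# and write the orientation values where the Unity scene can pick them up.
import struct
import hid  # pip install hidapi

VENDOR_ID, PRODUCT_ID = 0x16C0, 0x0486  # common Teensy RawHID IDs (assumption)

dev = hid.device()
dev.open(VENDOR_ID, PRODUCT_ID)

try:
    while True:
        report = bytes(dev.read(64))  # one 64-byte RawHID packet
        if not report:
            continue
        # Assume the first 12 bytes are roll/pitch/yaw as little-endian floats.
        roll, pitch, yaw = struct.unpack_from("<fff", report, 0)
        with open("/tmp/hid-shared-buffer", "w") as buf:
            # Unity re-reads this file each frame and rotates the model.
            buf.write(f"{roll:.4f},{pitch:.4f},{yaw:.4f}\n")
finally:
    dev.close()
```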
[dimtass] provides lots of details on the tools used to bring his project to life, and it can be a great starting point for other projects that need to interface sensors with a visualization system. We have seen ways to turn a person’s head into a joystick, and if you need a deeper dive into Unity, look no further.
Continue reading “Millennium Falcon HID: Get Unity To Talk To Teensy”
In first-person games, an effective way to heighten immersion is to give the player a sense of impact and force by figuratively shaking the camera. That’s a tried and true practice for FPS games played on a monitor, but to [Zulubo]’s knowledge, no one has implemented traditional screen shake in a VR title because it would be a sure way to trigger motion sickness. Unsatisfied with that limitation, some clever experimentation led [Zulubo] to a method of doing screen shake in VR that doesn’t cause any of the usual problems.
Screen shake doesn’t translate well to VR because the traditional method is to shake the player’s entire view. This works fine when viewed on a monitor, but in VR the brain interprets the visual cue as evidence that one’s head and eyeballs are physically shaking while the vestibular system is reporting nothing of the sort. This kind of sensory mismatch leads to motion sickness in most people.
The key to getting the essence of a screen shake without any of the motion sickness baggage turned out to be a mix of two things. First, the shake is restricted to peripheral vision only. Second, it is restricted to an “in and out” motion, with no tilting or twisting. The result is a conveyance of concussion and impact that doesn’t rely on shaking the player’s view, at least not in a way that leads to motion sickness. It’s the product of some clever experimentation to solve a problem, and freely downloadable for use by anyone who may be interested.
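In practice, the “in and out” motion amounts to a damped oscillation along the view axis, applied only to whatever geometry sits in the periphery. A quick, hypothetical sketch of that curve (the actual asset’s parameters and implementation will differ):

```python
# Hypothetical damped "in and out" shake: a decaying sinusoid along the view
# axis, meant to displace only peripheral-layer geometry, never the whole camera.
import math

def shake_offset(t, amplitude=0.05, frequency=12.0, damping=6.0):
    """Offset (in meters) along the forward axis, t seconds after the impact."""
    if t < 0:
        return 0.0
    return amplitude * math.exp(-damping * t) * math.sin(2 * math.pi * frequency * t)

# Sampled each frame; the central field of view stays untouched.
for frame in range(10):
    t = frame / 90.0  # ~90 Hz VR refresh
    print(f"{t:.3f}s -> {shake_offset(t):+.4f} m")
```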
Speaking of fooling one’s senses in VR environments, here is a fascinating method of simulating zero gravity: waterproof the VR headset and go underwater.
The Unity engine has been around since Apple started using Intel chips, and has made quite a splash in the gaming world. Unity allows developers to create 2D and 3D games, but there are some other interesting applications of this gaming engine as well. For example, [matthewhallberg] used it to build a robot that can map rooms in 3D.
The impetus for this project was a robotics company that used a series of robots around their business. The robots navigate using computer vision, but couldn’t map the rooms from scratch. They hired [matthewhallberg] to tackle this problem, and this robot is a preliminary result. Using the Unity engine and an iPhone, the robot can operate in one of three modes. The first is a user-controlled mode, the second is object following, and the third is 3D mapping.
The robot seems fairly easy to construct and only carries an iPhone, a NodeMCU, some motors, and a battery. Most of the computational work is done remotely, with the robot simply receiving its movement commands from another computer. There’s a lot going on here, software-wise, with a lot of toolkits and software packages to install and get talking to one another, but the video below does a good job of showing what you’ll need and how it all works together. If that’s all too much, there are other robots with a form of computer vision that can get you started in the world of vision and mapping.
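Since the heavy lifting happens off-board, the command link itself can stay dead simple. A hypothetical example of the kind of thing the remote computer might send, assuming the NodeMCU listens for plain-text commands on a UDP port (the real project’s protocol, addresses, and command strings may differ):

```python
# Hypothetical movement-command sender: the off-board computer does the
# thinking and just pushes short text commands to the robot's NodeMCU.
import socket
import time

ROBOT_ADDR = ("192.168.1.50", 4210)  # placeholder IP/port for the NodeMCU

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send(cmd):
    sock.sendto(cmd.encode("ascii"), ROBOT_ADDR)

send("FWD 0.3")   # drive forward at 30% speed
time.sleep(1.0)
send("TURN 90")   # rotate 90 degrees
send("STOP")
```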
Continue reading “Robot Maps Rooms With Help From iPhone”
The first time I saw 3D modeling and 3D printing used practically was at a hack day event. We printed simple plastic struts to hold a couple of spring-loaded wires apart. Nothing revolutionary as far as parts go but it was the moment I realized the value of a printer.
Since then, I have used OpenSCAD because that is what I saw that first time, but the intuitiveness of other programs led me to develop the OpenVectorKB, which allows the ubiquitous vectors in OpenSCAD to be changed at will while keeping, and even leveraging, the parametric qualities of the program.
All three values in a vector, X, Y, and Z, are modified by twisting encoder knobs. The device acts as a keyboard to
- select the relevant value
- replace it with an updated value
- refresh the display
- move the cursor back to the starting point
There is no software to install, and it runs on a Teensy-LC, so it can be reprogrammed for any other program where rotary encoders may be useful. Additional modes include a mouse, arrow keys, Audacity editing controls, and VLC time searching.
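The keystroke sequence itself is simple enough to prototype on the host before committing it to Teensy firmware. Here is a hypothetical mock-up in Python with pynput; the real device sends these strokes from the keyboard side, F5 is OpenSCAD’s preview refresh, and the exact selection keystrokes depend on the editor:

```python
# Hypothetical host-side mock-up of the OpenVectorKB keystroke sequence:
# select the value under the cursor, retype it, refresh the preview, then
# park the cursor back where it started.
from pynput.keyboard import Controller, Key

kb = Controller()

def replace_value(new_value, chars_to_replace):
    # 1. select the relevant value (extend the selection over its characters)
    for _ in range(chars_to_replace):
        with kb.pressed(Key.shift):
            kb.press(Key.right)
            kb.release(Key.right)
    # 2. replace it with the updated value
    kb.type(str(new_value))
    # 3. refresh the display (F5 previews in OpenSCAD)
    kb.press(Key.f5)
    kb.release(Key.f5)
    # 4. move the cursor back to the starting point
    for _ in range(len(str(new_value))):
        kb.press(Key.left)
        kb.release(Key.left)

replace_value(42, chars_to_replace=2)
```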
Here’s an article in favor of OpenSCAD and here’s one against it. This article does a good job of explaining OpenSCAD.
Continue reading “Add Intuitiveness To OpenSCAD With Encoders”
In the process of making a homemade Mech Combat game featuring robot-like piloted tanks capable of turning the cockpit independently of the direction of movement, [Florian] realized that while the concept was intuitive to humans, implementing it in a VR game had challenges. In short, when the eyes perceive movement but the body doesn’t feel the expected acceleration and momentum, motion sickness can result. A cockpit view that changes independently of forward motion exacerbates the issue.
To address this, [Florian] wanted to use a swivel chair to represent turning the Mech’s “hips”. This would control direction of travel and help provide important physical feedback. He was considering a hardware encoder for the chair when he realized he already had one in his pocket: his iPhone.
By making an HTML page that accesses the smartphone’s orientation API, [Florian] avoided any app install: the page sends the phone’s orientation to his Unity game over a WebSocket. He physically swivels his chair to steer and is free to look around using the VR headset, separate from the direction of travel. Want to try it for yourself? Get it from [Florian]’s GitHub repository.
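If you want to poke at the game side without a phone in hand, a stand-in client is only a few lines of Python. This is purely a hypothetical test rig: the URI, message format, and field names (mirroring the browser’s DeviceOrientation alpha/beta/gamma angles) are assumptions, not [Florian]’s actual protocol:

```python
# Hypothetical test client standing in for the phone: sweep a yaw angle and
# send it to the game's WebSocket endpoint as JSON orientation messages.
import asyncio
import json
import websockets  # pip install websockets

async def fake_phone(uri="ws://localhost:8080"):
    async with websockets.connect(uri) as ws:
        alpha = 0.0
        while True:
            msg = {"alpha": alpha % 360, "beta": 0.0, "gamma": 0.0}
            await ws.send(json.dumps(msg))
            alpha += 2.0              # slowly swivel the "chair"
            await asyncio.sleep(1 / 30)

asyncio.run(fake_phone())
```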
A video is embedded below, but if you’re interested in details be sure to also check out [Florian]’s summary of insights and methods for avoiding motion sickness in a VR Mech cockpit.
Continue reading “VR Mech’s Missing Link: The Phone In Your Pocket”
Imagine yourself riding through the countryside of Tuscany in the morning, then popping over to Champagne for a tour in the evening without taking a plane ride in the intermission. In fact, you don’t have to leave your living room. All you need is a stationary bicycle, a VR headset, and CycleVR.
[Aaron Puzey] hasn’t quite made the inter-country leap like that, but he has cycled the entire length of the UK, from its southernmost point to its northernmost tip. The 1500 km journey took 85 hours over the course of eight months to complete.
CycleVR is actually a VR app created using Unity. It takes advantage of Google Street View’s panoramic image data, using Bluetooth to monitor the cycling pace and transition between the panorama capture points. The static images of pedestrians and cars clipping and distorting as the panoramas load might throw off the illusion at first, but there are thousands of side streets and country roads out there where this won’t be as pronounced. Check out the highlight reel from [Puzey]’s journey after the break.
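The core loop is mostly bookkeeping: turn pedal revolutions into distance, and hop to the next capture point every time enough virtual meters accumulate. A hypothetical sketch of that logic, where the gearing, panorama spacing, and data source are all assumptions rather than CycleVR’s real numbers:

```python
# Hypothetical cadence-to-panorama bookkeeping: accumulate distance from
# pedal revolutions and advance to the next Street View capture point
# whenever roughly one panorama spacing has been covered.
METERS_PER_REVOLUTION = 2.1   # placeholder "virtual gearing"
PANORAMA_SPACING_M = 10.0     # assumed spacing between capture points

class Ride:
    def __init__(self, panorama_ids):
        self.panorama_ids = panorama_ids
        self.index = 0
        self.distance = 0.0

    def on_pedal_revolution(self):
        """Called for each crank revolution reported over Bluetooth."""
        self.distance += METERS_PER_REVOLUTION
        while self.distance >= PANORAMA_SPACING_M and self.index + 1 < len(self.panorama_ids):
            self.distance -= PANORAMA_SPACING_M
            self.index += 1
            print("fade to panorama", self.panorama_ids[self.index])

ride = Ride(["pano_000", "pano_001", "pano_002"])
for _ in range(12):
    ride.on_pedal_revolution()
```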
Continue reading “Take A Bicycle Tour Anywhere In The World”
An experimental project to mix reality and virtual reality by [Drew Gottlieb] uses the Microsoft HoloLens and the HTC Vive to show two users successfully sharing a single workspace as well as controllers. While the VR user draws cubes in midair with a simple app, the HoloLens user can see the same cubes being created and mapped to a real-world location, and the two headsets can even interact in the same shared space. You really need to check out the video, below, to fully grasp how crazy-cool this is.
Two or more VR or AR users sharing the same virtual environment isn’t new, but anchoring that virtual environment to the real world in a way that two very different headsets can share is interesting to see. [Drew] says that the real challenge wasn’t just getting the different hardware to talk to each other; it was giving both devices a shared understanding of a common space. You can see the results in the video embedded below.
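At its core, the “common space” problem boils down to expressing every pose relative to a shared anchor rather than either headset’s private world origin. Here is a toy illustration of that bookkeeping with 4x4 transforms; it is purely illustrative and glosses over how the anchor itself gets established on each device:

```python
# Toy illustration of a shared anchor: each device knows where the anchor sits
# in its own world frame, so a cube's pose can be passed through the anchor
# frame and re-expressed in the other device's world frame.
import numpy as np

def translation(x, y, z):
    m = np.eye(4)
    m[:3, 3] = (x, y, z)
    return m

# Where each headset thinks the shared anchor is, in its own world coordinates.
anchor_in_vive_world = translation(1.0, 0.0, 2.0)
anchor_in_hololens_world = translation(-0.5, 0.0, 0.3)

# A cube drawn by the Vive user, in Vive world coordinates.
cube_in_vive_world = translation(1.5, 1.0, 2.0)

# Re-express the cube relative to the anchor, then in the HoloLens world.
cube_in_anchor = np.linalg.inv(anchor_in_vive_world) @ cube_in_vive_world
cube_in_hololens_world = anchor_in_hololens_world @ cube_in_anchor

print(cube_in_hololens_world[:3, 3])  # -> [0.0, 1.0, 0.3]
```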
Continue reading “Sharing Virtual And Holographic Realities Via Vive And Hololens”