Hats off to him on the quality of the design. Two printed pieces cradle the knob of the thumbstick from either side and are screwed together, firmly hugging the stick. Bearings at the joints give smooth action to the two servo motors that control the arm, and the base of the robotic appendage is zip-tied to the controller itself.
The build targets experimentation with machine learning. Since the computer can control the arm via an Arduino, and the computer has access to metrics of what’s happening in the virtual environment, it’s a perfect setup for training a neural network. Are you thinking what we’re thinking? This could be the beginning of hardware speed-running your favorite video games, like [SethBling] did for Super Mario World half a decade ago. It would be all the more impressive because the mechanical side of the controller is automated rather than operating purely in the software realm. You’ll just need to do your own hack to implement button control.
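The builder’s firmware isn’t shown here, so the serial protocol in the following sketch is entirely an assumption; it just illustrates how a host-side script could drive the two arm servos through an Arduino for a training loop.

```python
# A minimal sketch, not the builder's actual code: it assumes the Arduino
# firmware listens on serial for two comma-separated servo angles like "90,45\n".
import math
import time

import serial  # pyserial

arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)  # port name is an assumption
time.sleep(2)  # give the Arduino time to reset after the port opens

def set_stick(pan_deg, tilt_deg):
    """Command the two arm servos; angles are clamped to a sane range."""
    pan = max(0, min(180, int(pan_deg)))
    tilt = max(0, min(180, int(tilt_deg)))
    arduino.write(f"{pan},{tilt}\n".encode())

# Sweep the thumbstick in a small circle as a smoke test.
for t in range(200):
    set_stick(90 + 20 * math.cos(t / 10), 90 + 20 * math.sin(t / 10))
    time.sleep(0.02)
```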
Stereoscopic vision works by having the brain fuse together what both eyes see, a process called binocular fusion. The small differences between what each eye sees mostly convey a sense of depth to us, but DiCE uses some of the quirks of binocular fusion to trick the brain into perceiving enhanced contrast in the visuals. This perceived higher contrast in turn leads to a stronger sense of depth and overall image quality.
To pull off this trick, DiCE displays a different contrast level to each eye in a way designed to encourage the brain to fuse them together favorably. In short, using a separate and different dynamic contrast range for each eye yields an overall greater perceived contrast range in the fused image. That’s simple in theory, but in practice there were a number of problems to solve. Chief among them was the fact that if the difference between what each eye sees is too great, the result is discomfort due to binocular rivalry. The hard scientific work behind DiCE came from experimentally determining sweet spots and pre-computing filters, independent of viewer and content, so that they could be applied in real time for a consistent result.
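The real DiCE filters are pre-computed and perceptually tuned, but the core idea of giving each eye its own contrast range can be sketched in a few lines of Python; the adjustment curve and the delta value here are illustrative, not the published ones.

```python
# A toy illustration of the dichoptic-contrast idea, not the actual DiCE
# filters: each eye gets a different contrast curve about mid-grey, and the
# brain's binocular fusion does the rest.
import numpy as np

def adjust_contrast(img, gain):
    """Scale contrast about mid-grey; img is a float array in [0, 1]."""
    return np.clip(0.5 + gain * (img - 0.5), 0.0, 1.0)

def dichoptic_pair(img, delta=0.3):
    """Render one eye with boosted and one with reduced contrast.
    delta is kept modest to stay clear of binocular rivalry."""
    left = adjust_contrast(img, 1.0 + delta)
    right = adjust_contrast(img, 1.0 - delta)
    return left, right

frame = np.random.rand(1080, 1920)          # stand-in for a rendered frame
left_eye, right_eye = dichoptic_pair(frame)
```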
Things like this are reminders that we experience the world only through the filter of our senses, and that our perception of reality has quirks which can be demonstrated by projects like this one and by other “sensory fusion” edge cases like the Thermal Grill Illusion, which we saw used as the basis for a replica of the Pain Box from Dune.
Here’s one that proves a hardware project can go beyond blinking LEDs and dumping massive chunks of data onto a serial console. Those practices are fine for some, but [dimtass] has found a more elegant hack for a more civilized age: his 3D Millennium Falcon model gets orientation data from an IMU acting as an HID device.
The hardware involved is an MPU6050 6-axis sensor interfaced with a Teensy 3.2 board. [dimtass] documents his approach to calibrating the IMU, going a bit further by using a Python script to generate offsets. We’ve advocated using Jupyter notebooks in the past, and this is a good example: Jupyter plots the data and visualizes the effect of the offsets in a second pass.
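The exact script isn’t reproduced here, but the gist of such an offset pass is simple enough to sketch: average a batch of readings taken while the sensor sits flat and still, then subtract the expected gravity reading from the accelerometer’s Z axis. The file name and column order below are assumptions.

```python
# A simplified stand-in for the calibration step: with the MPU6050 sitting
# flat and still, take the mean of the raw readings as the offset (every axis
# should read 0 except accelerometer Z, which should read +1 g).
import numpy as np

ONE_G = 16384   # raw LSB per g at the MPU6050's +/-2 g accelerometer range
samples = np.loadtxt("imu_raw.csv", delimiter=",")  # columns: ax ay az gx gy gz (assumed)

offsets = samples.mean(axis=0)
offsets[2] -= ONE_G   # accel Z should read +1 g when level, not 0

print("subtract these from every raw reading:", offsets.round(1))
```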
When in action, the Teensy reads IMU data and sends it over a USB RAW HID interface. For the uninitiated, HID transfers are more reliable than USB CDC transfers (virtual serial port) because they use smaller data chunks per event/transaction and usually don’t require special drivers. On the computer side, [dimtass] has written a small application that gets the IMU values over RAW HID and passes them on to the visualization application.
The 3D Millennium Falcon model is rendered in Unity, the popular game development engine. Even though Unity has its own APIs, this particular approach is more OS-specific, using a shared-memory technique: the HID application writes to a file (/tmp/hid-shared-buffer) which is then read by Unity to apply orientation changes to the rendered model.
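Here’s a rough Python sketch of what that host-side bridge might look like, using the hidapi bindings. The Teensy RAW HID VID/PID defaults and the report layout (three little-endian floats) are assumptions, not [dimtass]’s actual format.

```python
# A rough sketch of the host-side bridge: read raw HID reports from the Teensy
# and mirror the orientation into the file Unity polls each frame.
import struct

import hid  # hidapi bindings: pip install hidapi

TEENSY_VID, TEENSY_PID = 0x16C0, 0x0486   # Teensy RAW HID defaults
dev = hid.device()
dev.open(TEENSY_VID, TEENSY_PID)

while True:
    report = bytes(dev.read(64))                     # one 64-byte RAW HID packet
    roll, pitch, yaw = struct.unpack_from("<fff", report)  # assumed layout
    with open("/tmp/hid-shared-buffer", "w") as f:   # the file Unity reads
        f.write(f"{roll:.3f},{pitch:.3f},{yaw:.3f}")
```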
In first-person games, an effective way to heighten immersion is to give the player a sense of impact and force by briefly shaking the camera. That’s a tried-and-true practice for FPS games played on a monitor, but to [Zulubo]’s knowledge no one had implemented traditional screen shake in a VR title, because it would be a sure way to trigger motion sickness. Unsatisfied with that limitation, [Zulubo] experimented and arrived at a method of doing screen shake in VR that doesn’t cause any of the usual problems.
Screen shake doesn’t translate well to VR because the traditional method is to shake the player’s entire view. This works fine when viewed on a monitor, but in VR the brain interprets the visual cue as evidence that one’s head and eyeballs are physically shaking while the vestibular system is reporting nothing of the sort. This kind of sensory mismatch leads to motion sickness in most people.
The key to getting the essence of a screen shake without any of the motion sickness baggage turned out to be a mix of two things. First, the shake is restricted to peripheral vision only. Second, it is restricted to an “in and out” motion, with no tilting or twisting. The result is a conveyance of concussion and impact that doesn’t rely on shaking the player’s view, at least not in a way that leads to motion sickness. It’s the product of some clever experimentation to solve a problem, and freely downloadable for use by anyone who may be interested.
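This isn’t [Zulubo]’s code, but the flavour of the effect can be sketched in a few lines: an impact drives a damped oscillation that pushes only the periphery radially in and out, leaving central vision untouched and never tilting or twisting anything. The radii, frequency, and damping below are made-up numbers.

```python
# Illustration only: a peripheral-only, purely radial "in and out" shake.
import math

def peripheral_shake(t, strength=0.05, freq=25.0, damping=8.0):
    """Radial scale offset at time t seconds after an impact."""
    return strength * math.exp(-damping * t) * math.sin(2 * math.pi * freq * t)

def displace(x, y, t, inner_radius=0.4):
    """x, y are view coordinates with (0, 0) at the view centre, radius ~1 at the edge."""
    r = math.hypot(x, y)
    if r < inner_radius:          # leave central (roughly foveal) vision alone
        return x, y
    blend = (r - inner_radius) / (1.0 - inner_radius)  # fade the effect in smoothly
    s = 1.0 + blend * peripheral_shake(t)
    return x * s, y * s           # purely radial: no tilt, no twist
```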
The Unity engine has been around since Apple started using Intel chips, and has made quite a splash in the gaming world. Unity allows developers to create 2D and 3D games, but there are some other interesting applications of this gaming engine as well. For example, [matthewhallberg] used it to build a robot that can map rooms in 3D.
The impetus for this project was a robotics company that uses a number of robots around its business. The robots navigate using computer vision but couldn’t map rooms from scratch, so the company hired [matthewhallberg] to tackle the problem, and this robot is a preliminary result. Using the Unity engine and an iPhone, the robot can operate in one of three modes: user-controlled, object following, or 3D mapping.
The robot seems fairly easy to construct and carries only an iPhone, a NodeMCU, some motors, and a battery. Most of the computational work is done remotely, with the robot simply receiving its movement commands from another computer. There’s a lot going on here software-wise, with plenty of toolkits and packages to install and get talking to one another, but the video below does a good job of showing what you’ll need and how it all works together. If that’s all too much, there are other robots with a form of computer vision that can get you started in the world of vision and mapping.
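The video doesn’t spell out the wire protocol, so the following is only a sketch of that split architecture: the computer does the heavy lifting while the NodeMCU just executes simple drive commands, here assumed to be single characters sent over UDP to a hypothetical address.

```python
# Sketch of the remote-control side; the NodeMCU's IP, port, and the
# single-character command set are all assumptions for illustration.
import socket
import time

ROBOT_ADDR = ("192.168.1.50", 4210)   # assumed NodeMCU address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def drive(command, duration=0.5):
    """Send 'F', 'B', 'L', 'R' or 'S' (stop) and hold it for a moment."""
    sock.sendto(command.encode(), ROBOT_ADDR)
    time.sleep(duration)
    sock.sendto(b"S", ROBOT_ADDR)

# Crude square path as a smoke test.
for _ in range(4):
    drive("F", 1.0)
    drive("R", 0.4)
```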
The first time I saw 3D modeling and 3D printing used practically was at a hack day event. We printed simple plastic struts to hold a couple of spring-loaded wires apart. Nothing revolutionary as far as parts go, but it was the moment I realized the value of a printer.
Since then, I have stuck with OpenSCAD because that is what I saw first, but the intuitiveness of other programs led me to develop the OpenVectorKB, which allows the ubiquitous vectors in OpenSCAD to be changed at will while keeping, and even leveraging, the program’s parametric qualities.
All three values in a vector, X, Y, and Z, are modified by twisting encoder knobs. The device acts as a keyboard to select the relevant value, replace it with an updated value, refresh the display, and move the cursor back to the starting point.
There is no software to install, and since it runs on a Teensy-LC, it can be reprogrammed for any other program where rotary encoders may be useful. Additional modes include a mouse, arrow keys, Audacity editing controls, and VLC time searching.
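On the actual device that edit cycle lives in the Teensy firmware, but it can be mimicked from a host machine with pynput purely to illustrate the keystroke sequence. The selection method and the use of F5 as OpenSCAD’s preview key are assumptions about one particular editor setup.

```python
# Host-side illustration only of the keystroke cycle the Teensy performs.
from pynput.keyboard import Controller, Key

kb = Controller()

def replace_value(new_value):
    # 1. select the relevant value (cursor assumed to sit just before it)
    with kb.pressed(Key.shift):
        kb.press(Key.end); kb.release(Key.end)
    # 2. replace it with an updated value
    kb.type(str(new_value))
    # 3. refresh the display (F5 previews in OpenSCAD)
    kb.press(Key.f5); kb.release(Key.f5)
    # 4. move the cursor back to the starting point
    kb.press(Key.home); kb.release(Key.home)
```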
In the process of making a homemade Mech Combat game featuring robot-like piloted tanks that can turn their cockpits independently of the direction of movement, [Florian] realized that while the concept was intuitive to humans, implementing it in a VR game had challenges. In short, when the body perceives movement but doesn’t feel the expected acceleration and momentum, motion sickness can result, and a cockpit view that changes independently of forward motion exacerbates the issue.
To address this, [Florian] wanted to use a swivel chair to represent turning the Mech’s “hips”. This would control direction of travel and help provide important physical feedback. He was considering a hardware encoder for the chair when he realized he already had one in his pocket: his iPhone.
By making an HTML page that accesses the smartphone’s Orientation API, [Florian] avoided the need for any app install: the phone’s orientation is sent over a WebSocket to his game in Unity. He physically swivels his chair to steer and is free to look around with the VR headset, independently of the direction of travel. Want to try it for yourself? Get it from [Florian]’s GitHub repository.
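If you want to see what such a page sends before wiring anything into Unity, a tiny WebSocket listener is enough to eavesdrop on the messages. The JSON field names below assume the page forwards the DeviceOrientation event’s alpha/beta/gamma values; they are not taken from [Florian]’s repository.

```python
# A tiny stand-in for the game-side listener, just for inspecting the
# orientation messages a phone's browser pushes over the WebSocket.
import asyncio
import json

import websockets  # pip install websockets

async def handler(ws):
    async for message in ws:
        o = json.loads(message)   # assumed shape: {"alpha": ..., "beta": ..., "gamma": ...}
        print(f"alpha {o['alpha']:.1f}  beta {o['beta']:.1f}  gamma {o['gamma']:.1f}")

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()    # run forever

asyncio.run(main())
```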