Toilet Paper Chase And Indoor Cycling Race With Unity And Arduino

While we’re still far away from returning to a pre-Corona everyday life, people seem to have accepted that toilet paper will neither magically cease to exist, nor become our new global currency. But back at the height of the madness, like most of us, [Jelle Vermandere] found himself in front of empty shelves, and the solution seemed obvious to him: creating a lifelike toilet paper chasing game in hopes of distracting the competition.

Using Unity, [Jelle] created a game world of an empty supermarket, with the goal of chasing after distribution tubes and collecting toilet paper packs in a virtual cart. Inspired by the Wii Wheel, his game controller imitates a shopping cart handle: built from what appears to be a sunshade pole, it houses an Arduino and an accelerometer in a 3D-printed case. For an even more realistic feel, he added a sound sensor to the controller, and competing carts to the game, which can be pushed out of the way by simply yelling loud enough. You can witness all of this delightful absurdity in his build video after the break.
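For flavor, here’s a minimal sketch of what the Arduino side of such a controller could look like. The parts (an analog accelerometer and a sound sensor module), pin assignments, and the one-line serial protocol are our own assumptions for illustration, not details from [Jelle]’s build:

```cpp
// Hypothetical cart-handle controller: tilting steers, yelling shoves
// rival carts. Assumes an analog accelerometer (e.g. ADXL335) on A0/A1
// and a sound sensor module on A2.
const int PIN_ACC_X = A0;
const int PIN_ACC_Y = A1;
const int PIN_MIC = A2;
const int YELL_THRESHOLD = 700;  // 0-1023, tune to your lung capacity

void setup() {
  Serial.begin(115200);
}

void loop() {
  int tiltX = analogRead(PIN_ACC_X);
  int tiltY = analogRead(PIN_ACC_Y);
  bool yelling = analogRead(PIN_MIC) > YELL_THRESHOLD;

  // One CSV line per update; the Unity side parses these off the port.
  Serial.print(tiltX);
  Serial.print(',');
  Serial.print(tiltY);
  Serial.print(',');
  Serial.println(yelling ? 1 : 0);
  delay(20);  // roughly 50 updates per second
}
```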

From racing shopping carts to racing bicycles

But that’s not all. With the toilet paper situation sorted out, [Jelle] found himself in a different dilemma: a cloud foiled his plans of going for a bicycle ride. In the same manner, he ended up building a bicycle racing game, once again with Unity and Arduino. From a 3D-scanned model of himself and his bicycle, to automatically generating tracks on the fly and teaching an AI to ride a bike, [Jelle] clearly doesn’t joke around while he’s joking around.

However, the best part about the game has to be the controller, which is his actual bicycle. Using a magnetic door sensor to detect the speed, and a potentiometer mounted to the handlebar with an obscure Lego construction, it’s at least on par with the shopping cart handle — but judge for yourself in another build video, also found after the break. The only thing missing now is to level up the difficulty by powering the Arduino with the bicycle itself.
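The reed-switch-and-potentiometer combination is simple enough to sketch out, too. Assuming one magnet pulse per wheel revolution and an analog steering pot (pins, wheel size, and protocol all illustrative, not from [Jelle]’s build):

```cpp
// Hypothetical bike controller: a magnetic door (reed) switch on the
// wheel gives speed, a potentiometer on the handlebar gives steering.
const int PIN_REED = 2;  // must be an external-interrupt-capable pin
const int PIN_POT = A0;
const float WHEEL_CIRCUMFERENCE_M = 2.1;

volatile unsigned long lastPulseUs = 0;
volatile unsigned long pulsePeriodUs = 0;

void onWheelPulse() {
  unsigned long now = micros();
  pulsePeriodUs = now - lastPulseUs;  // time for one revolution
  lastPulseUs = now;
}

void setup() {
  Serial.begin(115200);
  pinMode(PIN_REED, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(PIN_REED), onWheelPulse, FALLING);
}

void loop() {
  noInterrupts();  // copy the ISR's value atomically
  unsigned long periodUs = pulsePeriodUs;
  interrupts();

  // One wheel circumference per pulse period gives speed in m/s.
  float speed = (periodUs > 0) ? WHEEL_CIRCUMFERENCE_M * 1e6 / periodUs : 0.0;
  int steering = analogRead(PIN_POT);  // 0-1023, center around 512

  Serial.print(speed);
  Serial.print(',');
  Serial.println(steering);
  delay(50);
}
```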

Continue reading “Toilet Paper Chase And Indoor Cycling Race With Unity And Arduino”

Automate Your Xbox

First the robots took our jobs, then they came for our video games. This dystopian future is brought to you by [Little French Kev], who designed this adorable 3D-printed robot arm to interface with an Xbox One controller joystick. He shows it off in the video after the break, controlling a ball-balancing physics demonstration written in Unity.

Hats off to him on the quality of the design. There are two parts that nestle the knob of the thumbstick from either side. He mates those pieces with each other using screws, firmly hugging the stick. Bearings are used at the joints for smooth action of the two servo motors that control the arm. The base of the robotic appendage is zip-tied to the controller itself.

The build targets experimentation with machine learning. Since the computer can control the arm via an Arduino, and has access to metrics of what’s happening in the virtual environment, it’s a perfect setup for training a neural network. Are you thinking what we’re thinking? This is the beginning of hardware speed-running your favorite video games, like [SethBling] did for Super Mario World half a decade ago. It would be even more impressive, since this approach automates the mechanical part of the controller rather than operating purely in the software realm. You’ll just need to do your own hack to implement button control.
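On the hardware side, driving the arm is standard hobby-servo fare. As a hedged sketch (the two-integer serial protocol and pin choices here are our own, not [Little French Kev]’s), the Arduino end could be as simple as:

```cpp
#include <Servo.h>

// Hypothetical link between the training script and the thumbstick arm:
// the PC sends "x,y\n" angle pairs over serial and two servos follow.
Servo servoX;  // pushes the stick left/right
Servo servoY;  // pushes the stick up/down

void setup() {
  Serial.begin(115200);
  servoX.attach(9);
  servoY.attach(10);
}

void loop() {
  if (Serial.available()) {
    int x = Serial.parseInt();    // 0-180 degrees
    int y = Serial.parseInt();    // parseInt skips the comma
    if (Serial.read() == '\n') {  // only act on a complete pair
      servoX.write(constrain(x, 0, 180));
      servoY.write(constrain(y, 0, 180));
    }
  }
}
```

The learning loop then lives entirely on the PC: read the ball’s position out of Unity, pick an action, and print two numbers to the serial port.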

Continue reading “Automate Your Xbox”

Tricking The Brain Into Seeing Boosted Contrast In Stereo Imagery

Last year a team of researchers published a paper detailing a method of boosting visual contrast and image quality in stereoscopic displays. The method is called Dichoptic Contrast Enhancement (DiCE) and works by showing each eye a slightly different version of an image, tricking the brain into fusing the two views together in a way that boosts perceived image quality. This only works on stereoscopic displays like VR headsets, but it’s computationally simple and easily implemented. This trick could be used to offset some of the limitations of displays used in headsets, for example making them appear capable of deeper contrast levels than they can physically deliver. This is good, because higher contrast is generally perceived as more realistic and three-dimensional, both important factors for VR headsets and other stereoscopic displays.

Stereoscopic vision works by having the brain fuse together what both eyes see, a process called binocular fusion. The small differences between what each eye sees mostly convey a sense of depth to us, but DiCE uses some of the quirks of binocular fusion to trick the brain into perceiving enhanced contrast in the visuals. This perceived higher contrast in turn leads to a stronger sense of depth and overall image quality.

Example of DiCE-processed images, showing each eye a different dynamic contrast range. The result is greater perceived contrast and image quality when the brain fuses the two together.

To pull off this trick, DiCE displays a different contrast level to each eye in a way designed to encourage the brain to fuse them together in a positive way. In short, using a separate and different dynamic contrast range for each eye yields an overall greater perceived contrast range in the fused image. That’s simple in theory, but in practice there were a number of problems to solve. Chief among them was the fact that if the difference between what each eye sees is too great, the result is discomfort due to binocular rivalry. The hard scientific work behind DiCE came from experimentally determining the sweet spots and pre-computing filters, independent of viewer and content, so the method could be applied in real time for a consistent result.
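To make the core idea concrete, here’s a toy version in C++: it splits a single tone curve into two per-eye curves by pushing gamma in opposite directions. The real DiCE filters were experimentally pre-computed to stay clear of binocular rivalry, so treat this strictly as an illustration of the concept, not the published method:

```cpp
#include <cmath>

// Toy dichoptic tone mapping: each eye gets a different tone curve, and
// binocular fusion blends them into an image whose perceived contrast
// range is wider than either eye saw alone. The gamma values below are
// arbitrary; DiCE's curves were derived experimentally.
struct StereoPixel {
  float left;
  float right;
};

StereoPixel dichopticTone(float luminance) {  // luminance in 0..1
  StereoPixel out;
  out.left = std::pow(luminance, 1.4f);   // stretches contrast in the shadows
  out.right = std::pow(luminance, 0.7f);  // stretches contrast in the highlights
  return out;
}
```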

Things like this are reminders that we experience the world only through the filter of our senses, and our perception of reality has quirks that can be demonstrated by things like this project and other “sensory fusion” edge cases like the Thermal Grill Illusion, which we saw used as the basis for a replica of the Pain Box from Dune.

A short video overview of the method is embedded below, and a PDF of the publication can be downloaded for further reading. Want a more hands-on approach? The team even made a DiCE plugin freely available on the Unity Asset Store.

Continue reading “Tricking The Brain Into Seeing Boosted Contrast In Stereo Imagery”

Millennium Falcon HID: Get Unity To Talk To Teensy

Here’s one that proves a hardware project can go beyond blinking LEDs and dumping massive chunks of data onto a serial console. Those practices are fine for some, but [dimtass] has found a more elegant hack for a more civilized age. His 3D Millennium Falcon model gets orientation data from an IMU acting as an HID device.

The hardware involved is an MPU6050 6-axis sensor interfaced with a Teensy 3.2 board. [dimtass] documents his approach to calibrating the IMU, going a bit further by using a Python script to generate offsets. We’ve advocated using Jupyter notebooks in the past, and this is a good example of Jupyter plotting the data and visualizing the effect of the offsets in a second pass.
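The offset idea itself fits in a few lines: with the sensor at rest, average a batch of raw readings and subtract the expected gravity component. [dimtass] does this in Python; transplanted to C++ for illustration (sample layout and full-scale value assumed), the logic looks roughly like:

```cpp
#include <cstdint>

// Accelerometer calibration in miniature: average N resting samples per
// axis; the means become offsets, minus 1 g on the axis aligned with
// gravity (Z here). Assumes the MPU6050's default +/-2 g range, where
// 1 g reads as 16384 counts.
struct Offsets {
  float x, y, z;
};

Offsets computeOffsets(const int16_t *samples, int n) {  // x,y,z triplets
  long sx = 0, sy = 0, sz = 0;
  for (int i = 0; i < n; i++) {
    sx += samples[3 * i + 0];
    sy += samples[3 * i + 1];
    sz += samples[3 * i + 2];
  }
  const float ONE_G = 16384.0f;
  return { (float)sx / n, (float)sy / n, (float)sz / n - ONE_G };
}
```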

When in action, the Teensy reads IMU data and sends it over a USB RAW HID interface. For the uninitiated, HID transfers are more reliable than USB CDC transfers (virtual serial port) because they use smaller data chunks per event/transaction and usually don’t require special drivers. On the computer side, [dimtass] has written a small application that gets the IMU values over RAW HID and passes them on to the visualization application.
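Teensyduino’s Raw HID mode keeps the sensor-to-USB path short. The sketch below is a minimal illustration of that path, assuming the MPU6050’s standard register map and Teensy’s fixed 64-byte reports; [dimtass]’s actual packet layout may well differ:

```cpp
#include <Wire.h>

// Minimal IMU-to-RAW-HID bridge (build with USB Type set to "Raw HID").
const int MPU_ADDR = 0x68;
uint8_t packet[64];  // RAW HID reports are always 64 bytes

void setup() {
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);  // PWR_MGMT_1: clear sleep bit to wake the sensor
  Wire.write(0);
  Wire.endTransmission();
}

void loop() {
  // Burst-read accel, temperature, and gyro: 14 bytes from ACCEL_XOUT_H.
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);
  Wire.endTransmission(false);  // repeated start, keep the bus
  Wire.requestFrom(MPU_ADDR, 14);
  for (int i = 0; i < 14 && Wire.available(); i++) {
    packet[i] = Wire.read();
  }
  RawHID.send(packet, 10);  // send one report, 10 ms timeout
  delay(10);                // ~100 updates per second
}
```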

A 3D Millennium Falcon model is rendered in Unity, the popular game development engine. Even though Unity has its own APIs for getting external data in, this particular approach is more OS-specific, using a shared-memory technique: the HID application writes to a file (/tmp/hid-shared-buffer) which is then read by Unity to apply orientation changes to the rendered model.
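That handoff can be as humble as rewriting a small file that Unity polls every frame. A bare-bones sketch of the host side, with an assumed comma-separated layout rather than [dimtass]’s actual format, might look like:

```cpp
#include <cstdio>

// Publish the latest orientation for Unity to poll. On most Linux
// systems /tmp is tmpfs (RAM-backed), so rewriting the file every
// frame is cheap. The CSV layout here is illustrative.
void publishOrientation(float roll, float pitch, float yaw) {
  FILE *f = std::fopen("/tmp/hid-shared-buffer", "w");
  if (!f) {
    return;  // Unity will simply keep the last pose it read
  }
  std::fprintf(f, "%f,%f,%f\n", roll, pitch, yaw);
  std::fclose(f);
}
```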

[dimtass] provides lots of detail on the tools used to bring his project to life, and it can be a great starting point for other projects that need to interface sensors with a visualization system. We have seen ways to turn a person’s head into a joystick, and if you need a deeper dive into Unity, look no further.

Continue reading “Millennium Falcon HID: Get Unity To Talk To Teensy”

Screen Shake In VR, Minus The Throwing Up

In first-person games, an effective way to heighten immersion is to give the player a sense of impact and force by figuratively shaking the camera. That’s a tried and true practice for FPS games played on a monitor, but to [Zulubo]’s knowledge, no one has implemented traditional screen shake in a VR title because it would be a sure way to trigger motion sickness. Unsatisfied with that limitation, some clever experimentation led [Zulubo] to a method of doing screen shake in VR that doesn’t cause any of the usual problems.

Screen shake doesn’t translate well to VR because the traditional method is to shake the player’s entire view. This works fine when viewed on a monitor, but in VR the brain interprets the visual cue as evidence that one’s head and eyeballs are physically shaking while the vestibular system is reporting nothing of the sort. This kind of sensory mismatch leads to motion sickness in most people.

The key to getting the essence of a screen shake without any of the motion sickness baggage turned out to be a mix of two things. First, the shake is restricted to peripheral vision only. Second, it is restricted to an “in and out” motion, with no tilting or twisting. The result is a conveyance of concussion and impact that doesn’t rely on shaking the player’s view, at least not in a way that leads to motion sickness. It’s the product of some clever experimentation to solve a problem, and freely downloadable for use by anyone who may be interested.
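In code, the trick boils down to a radial displacement that fades to zero toward the center of vision. Here’s a small self-contained illustration of that idea; the falloff shape and constants are our guesses, not values from [Zulubo]’s implementation:

```cpp
#include <cmath>

// Peripheral-only, purely radial ("in and out") shake: points near the
// gaze center stay put, the periphery breathes in and out. No tilt or
// twist, so the vestibular system has nothing to disagree with.
struct Vec2 {
  float x, y;
};

Vec2 peripheralShake(Vec2 p, float time, float amplitude) {
  // p is in view space, gaze center at the origin, radius ~1 at the
  // edge of the field of view.
  float r = std::sqrt(p.x * p.x + p.y * p.y);
  const float innerRadius = 0.4f;  // foveal region left untouched
  float falloff = (r < innerRadius)
                      ? 0.0f
                      : (r - innerRadius) / (1.0f - innerRadius);
  float scale = 1.0f + amplitude * falloff * std::sin(time * 40.0f);
  return { p.x * scale, p.y * scale };
}
```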

Speaking of fooling one’s senses in VR environments, here is a fascinating method of simulating zero gravity: waterproof the VR headset and go underwater.

[via Reddit]

Robot Maps Rooms With Help From iPhone

The Unity engine has been around since Apple started using Intel chips, and has made quite a splash in the gaming world. Unity allows developers to create 2D and 3D games, but there are some other interesting applications of this gaming engine as well. For example, [matthewhallberg] used it to build a robot that can map rooms in 3D.

The impetus for this project was a robotics company that uses a series of robots around its business. The robots navigate using computer vision, but couldn’t map the rooms from scratch. They hired [matthewhallberg] to tackle this problem, and this robot is a preliminary result. Using the Unity engine and an iPhone, the robot can operate in one of three modes: user control, object following, and 3D mapping.

The robot seems fairly easy to construct and only carries an iPhone, a NodeMCU, some motors, and a battery. Most of the computational work is done remotely, with the robot simply receiving its movement commands from another computer. There’s a lot going on here software-wise, with plenty of toolkits and packages to install and get talking to one another, but the video below does a good job of showing what you’ll need and how it all works together. If that’s all too much, there are other robots that can get you started in the world of computer vision and mapping.
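Since the heavy lifting happens off-board, the NodeMCU’s job is small: listen for commands and switch motors. A minimal sketch of that link, with placeholder credentials, port, and pin mapping (none taken from [matthewhallberg]’s build), could look like:

```cpp
#include <ESP8266WiFi.h>
#include <WiFiUdp.h>

// Hypothetical command link: the computer sends single-character UDP
// packets (F/B/L/R/S) and the NodeMCU switches the motor drivers.
WiFiUDP udp;
const int MOTOR_LEFT = D1;   // left motor driver enable
const int MOTOR_RIGHT = D2;  // right motor driver enable

void setup() {
  pinMode(MOTOR_LEFT, OUTPUT);
  pinMode(MOTOR_RIGHT, OUTPUT);
  WiFi.begin("your-ssid", "your-password");  // placeholders
  while (WiFi.status() != WL_CONNECTED) {
    delay(100);
  }
  udp.begin(4210);  // arbitrary port, matched on the computer side
}

void loop() {
  if (udp.parsePacket()) {
    char cmd = udp.read();
    // Forward drives both motors, L/R drive one side to turn, S stops.
    digitalWrite(MOTOR_LEFT, (cmd == 'F' || cmd == 'R') ? HIGH : LOW);
    digitalWrite(MOTOR_RIGHT, (cmd == 'F' || cmd == 'L') ? HIGH : LOW);
  }
}
```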

Continue reading “Robot Maps Rooms With Help From iPhone”

Add Intuitiveness To OpenSCAD With Encoders

The first time I saw 3D modeling and 3D printing used practically was at a hack day event. We printed simple plastic struts to hold a couple of spring-loaded wires apart. Nothing revolutionary as far as parts go, but it was the moment I realized the value of a printer.

Since then, I have used OpenSCAD because that is what I saw that first time, but the intuitiveness of other programs led me to develop the OpenVectorKB, which allows the ubiquitous vectors in OpenSCAD to be changed at will while keeping the parametric qualities of the program, and even leveraging them.

All three values in a vector, X, Y, and Z, are modified by twisting encoder knobs. The device acts as a keyboard to:

  1. select the relevant value
  2. replace it with an updated value
  3. refresh the display
  4. move the cursor back to the starting point

There is no software to install, and since it runs off a Teensy-LC, it can be reprogrammed for any other program where rotary encoders may be useful. Additional modes include a mouse, arrow keys, Audacity editing controls, and VLC time searching.
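To give a flavor of the keyboard trick, here is a simplified single-knob sketch for a Teensy-LC in USB keyboard mode, walking through the four steps above. Consider it an illustration of the approach rather than the exact OpenVectorKB firmware; in particular, it assumes the value sits at the end of its line:

```cpp
#include <Encoder.h>

// One encoder editing one vector component, emitted as keystrokes
// (build with USB Type set to "Keyboard"). Pins 0/1 are arbitrary.
Encoder knob(0, 1);
long lastDetent = 0;
int value = 10;  // assumed current value under the cursor

void setup() {}

void loop() {
  long detent = knob.read() / 4;  // four counts per detent
  if (detent != lastDetent) {
    value += (detent > lastDetent) ? 1 : -1;
    lastDetent = detent;

    Keyboard.press(KEY_LEFT_SHIFT);  // 1. select the relevant value
    Keyboard.press(KEY_END);         //    (simplified: select to line end)
    Keyboard.releaseAll();
    Keyboard.print(value);           // 2. type the replacement
    Keyboard.press(KEY_F5);          // 3. refresh the OpenSCAD preview
    Keyboard.releaseAll();
    Keyboard.press(KEY_HOME);        // 4. put the cursor back
    Keyboard.releaseAll();
  }
}
```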

Here’s an article in favor of OpenSCAD and here’s one against it. This article does a good job of explaining OpenSCAD.

Continue reading “Add Intuitiveness To OpenSCAD With Encoders”