In a world where ninjas no longer rule the social hierarchy, where can a ninja-wannabe practice their sword-fighting skills? In the popular Introduction to Embedded Systems class at the Massachusetts Institute of Technology, a team of students made their own version of the mobile game Fruit Ninja with a twist: you're fighting your true nemesis, vegetables.
An onboard ESP32 microcontroller and IMU track the sword's movements, and the game begins by calibrating them within the play area. Orientation is computed with the Madgwick filter, a 9-degrees-of-freedom sensor-fusion algorithm that combines 3-axis data from the sword's gyroscope, accelerometer, and magnetometer to output the sword's absolute orientation.
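The full Madgwick filter integrates the gyroscope rates into a quaternion, then nudges the estimate with a gradient-descent step that aligns it with the accelerometer and magnetometer readings. To give a flavor of the fusion idea without reproducing the whole algorithm, here is a much simpler complementary filter in JavaScript; this is an illustrative sketch, not the project's actual code:

```js
// Simplified sensor fusion for pitch only: a complementary filter, a simpler
// cousin of Madgwick's algorithm (the real filter works on quaternions and
// also folds in the magnetometer for absolute heading).
function complementaryPitch(prevPitch, gyroRateX, ay, az, dt, alpha = 0.98) {
  // Tilt estimate from the accelerometer alone (noisy but drift-free)
  const accelPitch = Math.atan2(ay, az);
  // Integrated gyro estimate (smooth but drifts over time)
  const gyroPitch = prevPitch + gyroRateX * dt;
  // Blend: trust the gyro short-term, the accelerometer long-term
  return alpha * gyroPitch + (1 - alpha) * accelPitch;
}

// Example: 100 Hz samples, gyro rate in rad/s, accel in g
let pitch = 0;
pitch = complementaryPitch(pitch, 0.01, 0.0, 1.0, 1 / 100);
```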
The sword and browser both connect to the same channel on the server through a WebSocket connection, identified by a session ID, much like a web chat room. A statistics server allocates session IDs and stores persistent game data such as high scores.
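On the browser side, that pairing might look something like the sketch below; the endpoint URL, message shapes, and `updateSwordOrientation()` hook are our own assumptions for illustration, not the project's actual protocol:

```js
// Join the same session channel as the sword (endpoint and message format
// are placeholders, not the project's real ones).
const sessionId = new URLSearchParams(location.search).get('session') || 'demo-1234';
const ws = new WebSocket(`wss://example.com/play?session=${sessionId}`);

ws.addEventListener('open', () => {
  ws.send(JSON.stringify({ type: 'join', role: 'browser', session: sessionId }));
});

ws.addEventListener('message', (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === 'orientation') {
    // Orientation streamed from the sword's ESP32 on the same channel
    updateSwordOrientation(msg.q); // hypothetical game function
  }
});
```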
As for the graphics, the Three.js WebGL library creates the scene and camera, and hooks the game loop into the browser's animation frame. Other scripts load the 3D models for the fruits and vegetables, update their positions using the Cannon.js physics engine, and render the in-game UI elements.
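The basic wiring of that pattern, a Three.js scene synced to Cannon.js bodies on each animation frame, looks roughly like this minimal sketch, with a placeholder sphere standing in for the fruit models:

```js
import * as THREE from 'three';
// CANNON is assumed to come in as a global from the cannon.js script tag

// Scene, camera, renderer
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
camera.position.set(0, 2, 8);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Physics world with gravity pulling the produce back down
const world = new CANNON.World();
world.gravity.set(0, -9.82, 0);

// A stand-in "fruit": a sphere mesh paired with a sphere body
const mesh = new THREE.Mesh(
  new THREE.SphereGeometry(0.5, 16, 16),
  new THREE.MeshNormalMaterial()
);
scene.add(mesh);
const body = new CANNON.Body({ mass: 1, shape: new CANNON.Sphere(0.5) });
body.velocity.set(0, 8, 0); // tossed upward, ready to be sliced
world.addBody(body);

// Game loop on the browser's animation frame
function animate() {
  requestAnimationFrame(animate);
  world.step(1 / 60);                    // advance the physics simulation
  mesh.position.copy(body.position);     // sync graphics to physics
  mesh.quaternion.copy(body.quaternion);
  renderer.render(scene, camera);
}
animate();
```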
Curious? The project site has the microcontroller code to build your own sword that you can use to play the demo. If you don't have an ESP32 and accelerometer handy, you can play Vegetable Assassin in your browser instead.
With an ever-growing range of smart-home products available, all with their own hubs, protocols, and APIs, we see a lot of DIY projects (and commercial offerings too) that aim to provide a "single universal interface" to different devices and services. Usually, these projects let you control your home using a list of devices, or sometimes a 2D floor plan. [Wassim]'s project aims to take the first steps toward a 3D interface by creating an interactive smart-home controller in the browser.
Note: this isn't just a static rendered image of a 3D scene; it's an interactive 3D model that can be orbited and inspected, showing information on lights, heaters, and windows. The project is well documented, and the code can be found on GitHub. The tech works by taking 3D models and animations made in Blender, exporting them in the .glTF format, then visualising them in the browser using three.js. This can then talk to Hue bulbs, power meters, or whatever other devices are required. The technical notes on this project may well be useful for others wanting to use the Blender-to-three.js/browser workflow, and include a number of interesting demos that isolate key concepts of the project.
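At its simplest, that workflow boils down to loading the Blender export with three.js's GLTFLoader and issuing REST calls to the devices. The sketch below shows the idea; the file name, object name, bridge address, and app key are all placeholders, not values from the project:

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();

// Load the Blender-exported .glTF scene (file name is a placeholder)
new GLTFLoader().load('house.gltf', (gltf) => {
  scene.add(gltf.scene);
  // Object names assigned in Blender survive the export, so a mesh can be
  // looked up by name and wired to a real device ('LivingRoomBulb' is hypothetical)
  const bulbMesh = gltf.scene.getObjectByName('LivingRoomBulb');
  console.log('Found bulb mesh:', bulbMesh);
});

// Toggling a Hue bulb goes through the bridge's REST API; the bridge IP,
// app key, and light number below are placeholders
function setHueLight(on) {
  return fetch('http://192.168.1.2/api/app-key/lights/1/state', {
    method: 'PUT',
    body: JSON.stringify({ on }),
  });
}
```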
We notice that all the meshes created in Blender are very low-poly; is it possible to easily add subdivision surface modifiers, or is the vertex count deliberately kept low for performance reasons?
This isn't our first unique home automation interface; we've previously written about shAIdes, a pair of AI-enabled glasses that allow you to control your devices just by looking at them. And if you want to roll your own home automation setup, we have plenty of resources: the Hack My House series contains valuable information on using Raspberry Pis in this context, we've got information on picking the right sensors, and we've even covered enlisting old routers for the cause.
Touch screens are nice; we still can't live without a keyboard, but they suffice when on the go. It is becoming obvious, though, that the end goal of user interface design is to remove the need to touch a piece of hardware in order to interact with it at all. One avenue toward this goal is voice commands via software like Siri; another is 3D-sensing hardware like the Kinect or Leap Motion. This project uses the latter to control the image shown on the 3D display.
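In the browser, reading hand positions from a Leap Motion takes only a few lines with the LeapJS library; the height-to-angle mapping below, and the `display.setAngle()` hook it drives, are our own illustrative assumptions, not the project's code:

```js
// Minimal LeapJS sketch: poll hand frames and map palm height to a view angle.
// Assumes the LeapJS script is loaded (global `Leap`) and the Leap Motion
// service is running; `display` is a hypothetical handle to the 3D display.
Leap.loop((frame) => {
  if (frame.hands.length === 0) return;
  const y = frame.hands[0].palmPosition[1]; // millimeters above the sensor
  const angle = ((y - 100) / 200) * Math.PI; // crude height-to-radians map
  display.setAngle(angle);                   // hypothetical display control
});
```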