
Online Tool Turns STLs Into 3D ASCII Art

If you look hard enough, most of the projects we feature on these pages have some practical value. They may seem frivolous, but there’s usually something that compelled the hacker to commit time and effort to them. That doesn’t mean we don’t get our share of just-for-funsies projects, of course, which certainly describes this online 3D ASCII art generator.

But wait — maybe that’s not quite right. After all, [Andrew Sink] put a lot of time into the code for this, and for its predecessor, his automatic 3D low-poly generator. That project led to the current work, which, like its predecessor, takes an STL model as input, this time turning it into an ASCII art render. The character set used to shade the model is customizable, though even with the default set the shading is surprisingly good. You can also swap to a black-on-white theme, navigate around the model with the mouse, and even export the ASCII art as either a PNG or a raw text file, no doubt suitable for sending to your tractor-feed printer.
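
We haven’t dug through [Andrew]’s exact implementation, but stock three.js ships an AsciiEffect addon that does this kind of brightness-to-character mapping, so a minimal version of the idea fits in a screenful of code. The character ramp and model file name below are placeholders:

```javascript
import * as THREE from 'three';
import { STLLoader } from 'three/addons/loaders/STLLoader.js';
import { AsciiEffect } from 'three/addons/effects/AsciiEffect.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 100);
camera.position.set(0, 0, 5);
scene.add(new THREE.DirectionalLight(0xffffff, 1));

const renderer = new THREE.WebGLRenderer();
// Characters ordered dark-to-light; this ramp is a placeholder, not [Andrew]'s set.
const effect = new AsciiEffect(renderer, ' .:-=+*#%@', { invert: true });
effect.setSize(innerWidth, innerHeight);
document.body.appendChild(effect.domElement);

new STLLoader().load('model.stl', (geometry) => {  // hypothetical file name
  const mesh = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial());
  scene.add(mesh);
  renderer.setAnimationLoop(() => {
    mesh.rotation.y += 0.01;           // slow turntable spin
    effect.render(scene, camera);      // rasterize the frame into characters
  });
});
```

Notably, AsciiEffect’s output element is real text in the DOM, which is presumably why exporting the render as a raw text file falls out almost for free.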

[Andrew]’s code, which is all up on GitHub, makes liberal use of the three.js library, so maybe stretching his 3D JavaScript skills is really the hidden practical aspect of this one. Not that it needs one — we think it’s cool just for the gee-whiz factor.


Tired Of Fruit Ninja? Try Vegetable Assassin Using An ESP32 Sword

In a world where ninjas no longer rule the social hierarchy, where can a ninja-wannabe practice their sword-fighting skills? In the popular Introduction to Embedded Systems class at the Massachusetts Institute of Technology, a team of students made their own version of the popular mobile game Fruit Ninja with a twist – you’re fighting your true nemesis, vegetables.

Vegetable Assassin supports single- and multi-player modes, with players slicing on-screen vegetables using fake swords instrumented with sensors that detect the players’ motion. Each sword reports its orientation to the game session over a WebSocket connection to a server, while the game itself is generated and rendered in the browser by a 3D JavaScript library. The team considered MQTT, which also runs over a persistent TCP connection and carries lower overhead, but chose WebSockets for their maximum browser support.
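
The write-up doesn’t spell out the wire format, but the browser side of the architecture described (sword and game client joining the same session channel) boils down to a few lines; the endpoint and message shape here are invented for illustration:

```javascript
// Hypothetical endpoint and session ID; the real protocol isn't documented here.
const ws = new WebSocket('wss://example.com/session/ABC123');

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === 'orientation') {
    // Quaternion broadcast by the sword on the same channel.
    updateSwordPose(msg.quaternion);  // placeholder game-side handler
  }
};
```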

An onboard ESP32 microcontroller and IMU track the sword’s movements. The game begins by calibrating the sword within the play area. Orientation data comes from the Madgwick filter, a 9-degrees-of-freedom sensor-fusion algorithm that combines 3-axis readings from the sword’s gyroscope, accelerometer, and magnetometer into an absolute orientation.
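
For a flavour of what that filter actually computes, here is the 6-axis (gyro plus accelerometer) variant of Madgwick’s update, transliterated to JavaScript from his published reference code; the full 9-DoF version used in the game adds a magnetometer correction so yaw doesn’t drift. This is a generic sketch, not the team’s firmware:

```javascript
// One Madgwick IMU update step (6-axis variant, after MadgwickAHRS.c).
// q = [w, x, y, z] unit quaternion, gyro in rad/s, beta = filter gain,
// dt = timestep in seconds. Returns the updated quaternion.
function madgwickUpdateIMU(q, gx, gy, gz, ax, ay, az, beta, dt) {
  let [q0, q1, q2, q3] = q;

  // Rate of change of the quaternion from the gyroscope alone.
  let qDot1 = 0.5 * (-q1 * gx - q2 * gy - q3 * gz);
  let qDot2 = 0.5 * ( q0 * gx + q2 * gz - q3 * gy);
  let qDot3 = 0.5 * ( q0 * gy - q1 * gz + q3 * gx);
  let qDot4 = 0.5 * ( q0 * gz + q1 * gy - q2 * gx);

  const norm = Math.hypot(ax, ay, az);
  if (norm > 0) {
    // Normalise the accelerometer and apply the gradient-descent
    // corrective step that pulls "down" toward measured gravity.
    ax /= norm; ay /= norm; az /= norm;
    const _2q0 = 2 * q0, _2q1 = 2 * q1, _2q2 = 2 * q2, _2q3 = 2 * q3;
    const _4q0 = 4 * q0, _4q1 = 4 * q1, _4q2 = 4 * q2;
    const _8q1 = 8 * q1, _8q2 = 8 * q2;
    const q0q0 = q0 * q0, q1q1 = q1 * q1, q2q2 = q2 * q2, q3q3 = q3 * q3;

    let s0 = _4q0 * q2q2 + _2q2 * ax + _4q0 * q1q1 - _2q1 * ay;
    let s1 = _4q1 * q3q3 - _2q3 * ax + 4 * q0q0 * q1 - _2q0 * ay - _4q1 + _8q1 * q1q1 + _8q1 * q2q2 + _4q1 * az;
    let s2 = 4 * q0q0 * q2 + _2q0 * ax + _4q2 * q3q3 - _2q3 * ay - _4q2 + _8q2 * q1q1 + _8q2 * q2q2 + _4q2 * az;
    let s3 = 4 * q1q1 * q3 - _2q1 * ax + 4 * q2q2 * q3 - _2q2 * ay;
    const sNorm = Math.hypot(s0, s1, s2, s3);
    qDot1 -= beta * (s0 / sNorm);
    qDot2 -= beta * (s1 / sNorm);
    qDot3 -= beta * (s2 / sNorm);
    qDot4 -= beta * (s3 / sNorm);
  }

  // Integrate the rate of change and renormalise.
  q0 += qDot1 * dt; q1 += qDot2 * dt; q2 += qDot3 * dt; q3 += qDot4 * dt;
  const qNorm = Math.hypot(q0, q1, q2, q3);
  return [q0 / qNorm, q1 / qNorm, q2 / qNorm, q3 / qNorm];
}
```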

The sword and browser both connect to the same channel on the server through a WebSocket connection, identified by a session ID, much like a web chat room. A separate statistics server manages session ID allocation and other persistent game data, such as high scores.
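
The server side of that channel scheme can be sketched in a few lines with Node’s ws package; the URL scheme and relay logic here are our assumptions, not the team’s actual implementation:

```javascript
import { WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 8080 });
const sessions = new Map();  // session ID -> set of connected sockets

wss.on('connection', (socket, req) => {
  // e.g. ws://host:8080/?session=ABC123 (URL scheme is hypothetical)
  const id = new URL(req.url, 'http://localhost').searchParams.get('session');
  if (!sessions.has(id)) sessions.set(id, new Set());
  sessions.get(id).add(socket);

  // Relay every message to the other members of the same session,
  // so the sword's orientation reaches the browser (and vice versa).
  socket.on('message', (data) => {
    for (const peer of sessions.get(id)) {
      if (peer !== socket) peer.send(data.toString());
    }
  });
  socket.on('close', () => sessions.get(id).delete(socket));
});
```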

As for the graphics, the three.js WebGL library creates the scene and camera, with rendering driven by the browser’s animation-frame loop. Other scripts load the 3D models for the fruits and vegetables in the game, update their positions based on the physics engine provided by Cannon.js, and render the UI elements within the game.
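
The standard pattern for pairing the two libraries is to step the physics world once per animation frame and copy each body’s transform onto its mesh. A stripped-down sketch of that idea (using cannon-es, the maintained fork of Cannon.js, and not the project’s actual code):

```javascript
import * as THREE from 'three';
import * as CANNON from 'cannon-es';  // maintained fork of Cannon.js

// Minimal three.js scene.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 10;
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Physics world with gravity, plus one tossed "vegetable".
const world = new CANNON.World({ gravity: new CANNON.Vec3(0, -9.82, 0) });
const body = new CANNON.Body({ mass: 1, shape: new CANNON.Sphere(0.5) });
body.position.set(0, -4, 0);
body.velocity.set(0, 9, 0);  // launch it upward, Fruit Ninja style
world.addBody(body);

const mesh = new THREE.Mesh(new THREE.SphereGeometry(0.5), new THREE.MeshNormalMaterial());
scene.add(mesh);

renderer.setAnimationLoop(() => {
  world.fixedStep();                     // advance physics at a fixed 60 Hz
  mesh.position.copy(body.position);     // mirror the body onto the mesh
  mesh.quaternion.copy(body.quaternion);
  renderer.render(scene, camera);
});
```

Stepping physics at a fixed rate while rendering at whatever rate the browser manages keeps trajectories deterministic even when the frame rate dips.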

Curious? The project site has the microcontroller code to build your own sword that you can use to play the demo. If you don’t have an ESP32 and accelerometer handy you can play Vegetable Assassin in your browser instead.

The Smart Home Gains An Extra Dimension

With an ever-growing range of smart-home products available, all with their own hubs, protocols, and APIs, we see a lot of DIY projects (and commercial offerings too) which aim to provide a “single universal interface” to different devices and services. Usually, these projects allow you to control your home using a list of devices, or sometimes a 2D floor plan. [Wassim]’s project aims to take the first steps in providing a 3D interface, by creating an interactive smart-home controller in the browser.

Note: this isn’t just a static render of a 3D scene; it’s an interactive 3D model that can be orbited and inspected, showing information on lights, heaters, and windows. The project is well documented, and the code can be found on GitHub. The tech works by taking 3D models and animations made in Blender, exporting them in glTF format, then visualising them in the browser using three.js. From there it can talk to Hue bulbs, power meters, or whatever other devices are required. The technical notes on this project may well be useful for others wanting to adopt the Blender-to-three.js browser workflow, and they include a number of small demos isolating key concepts of the project.
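
For anyone curious what that workflow looks like in code, loading a Blender glTF export takes a couple of lines of three.js, and the Hue side is a plain REST call to the bridge. The file name, bridge IP, API username, and light ID below are all placeholders:

```javascript
import * as THREE from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const scene = new THREE.Scene();

// Load the Blender-exported scene (file name is hypothetical).
new GLTFLoader().load('house.gltf', (gltf) => {
  scene.add(gltf.scene);
});

// Toggle a Hue bulb through the bridge's REST API
// (bridge IP, username, and light ID are placeholders).
async function setLight(on) {
  await fetch('http://192.168.1.2/api/<username>/lights/1/state', {
    method: 'PUT',
    body: JSON.stringify({ on }),
  });
}
```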

We notice that all the meshes created in Blender are very low-poly; could subdivision surface modifiers easily be added, or is the vertex count deliberately kept low for performance reasons?

This isn’t our first unique home automation interface; we’ve previously written about shAIdes, a pair of AI-enabled glasses that allow you to control your devices just by looking at them. And if you want to roll your own home automation setup, we have plenty of resources. The Hack My House series contains valuable information on using Raspberry Pis in this context, and we’ve got information on picking the right sensors and even enlisting old routers for the cause.

3D Display Controlled With The Leap Motion


Touch screens are nice; we still can’t live without a keyboard, but they suffice when on the go. Still, it’s becoming obvious that the end goal of user interface design is to remove the need to touch a piece of hardware at all in order to interact with it. One avenue toward this goal is voice commands via software like Siri; another is 3D sensing hardware like the Kinect or Leap Motion. This project uses the latter to control the image shown on a 3D display.

[Robbie Tilton] generated a 3D image using three.js, a JavaScript 3D library. The images appear to float in air thanks to a pyramid of acrylic which reflects the light toward the viewer’s eyes without blocking out ambient light in the room. In the past we’ve referred to this as a volumetric display, but [Robbie] points out that it actually relies on the illusion called Pepper’s Ghost. It’s not really volumetric because the depth is merely an illusion: moving your point of view won’t change your perspective unless you go around the corner to the next face of the acrylic pyramid. But it’s still a nice effect. See for yourself in the demo after the jump.
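
On the input side, the LeapJS client library streams tracking frames from the Leap Motion service into the browser, so steering a three.js object with your hand takes very little glue. A rough sketch, assuming a `mesh` variable already in the scene (the scaling constants are arbitrary):

```javascript
// Requires the leapjs client library and the Leap Motion service running locally.
Leap.loop((frame) => {
  if (frame.hands.length > 0) {
    // palmPosition is [x, y, z] in millimetres above the controller.
    const [x, y] = frame.hands[0].palmPosition;
    mesh.rotation.y = x * 0.01;          // pan with left/right hand motion
    mesh.rotation.x = (y - 200) * 0.01;  // tilt with hand height
  }
});
```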
