Hang Ten With Help From The Surf Window

Unless you live somewhere like Hawaii or Costa Rica, it’s unlikely you’ll be able to surf every day. It’s not easy to plan surf sessions or even surf trips to most locations, because the weather conditions need to be just right. Not only the wave height (swell), but also the wind speed and direction, the tide, the water and air temperature, and even the amount and type of marine life present can all impact your surf session. You’ll want something that can tell you at a glance whether conditions are good.

This project from [luke], called the Surf Window, shows the surf conditions at the local beach at a glance. Made out of various pieces of wood, each part represents one of the weather conditions at the beach: a rotating seagull gives the wind direction, for example, and the wave height is represented by moving 3D waves. All of the parts are driven by motors and linkages connected to an Arduino Mega +WiFi R3, which grabs all of its information from Magicseaweed, a surf forecasting site.
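
For a sense of the data driving all those motors, here’s a minimal Python sketch that polls Magicseaweed’s forecast REST API for a single spot. The API key, spot ID, and exact field names are assumptions based on the service’s documented interface, not [luke]’s actual firmware, which runs on the Arduino itself:

```python
# A minimal sketch polling Magicseaweed for current conditions at one spot.
# The endpoint shape, field names, API key, and spot ID are assumptions
# based on Magicseaweed's public REST API; adjust them for your account.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical key issued by Magicseaweed
SPOT_ID = 396             # hypothetical spot ID for a local break

url = f"http://magicseaweed.com/api/{API_KEY}/forecast/?spot_id={SPOT_ID}"
forecast = requests.get(url, timeout=10).json()

now = forecast[0]  # the first entry is the nearest forecast window
swell, wind = now["swell"], now["wind"]
print(f"Breaking height: {swell['minBreakingHeight']}-{swell['maxBreakingHeight']} ft")
print(f"Wind: {wind['speed']} mph at {wind['direction']} degrees")
```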

The Surf Window can show the current conditions at virtually any surfable beach in the world, so if you really want to know how Jaws, Mavericks, or even Reef Road is breaking right now, it can give you a more nuanced look than a raw forecast. Don’t forget to take the correct board for the conditions!

[Image: a 3D mesh of a rabbit, and a knit version of the same]

Knitting Software Automatically Converts 3D Models Into Machine-knit Stuffies

We’ve seen our fair share of interesting knitting hacks here at Hackaday. There has been a lot of creative space explored while mashing computers into knitting machines and vice versa, but for the most part the resulting knit goods all tend to be a bit… two-dimensional. The mechanical reality of knitting and hobbyist-level knitting machines just tends to lend itself to working with a simple grid of pixels in a flat plane.

However, a team at the [Carnegie Mellon Textiles Lab] has been taking the world of computer-controlled knitting from two dimensions to three, with software that can create knitting patterns for almost any 3D model you feed it. Think of it like your standard 3D printing slicer software, except that instead of simple layers of thermoplastic, it generates complex multi-dimensional chains of knits and purls with yarn and 100% stuffing infill.
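
Their actual pipeline traces stitches along a smooth knitting-direction field over the surface, but the slicer analogy is easy to see in toy form: cut the model into horizontal rows, turn each cross-section’s perimeter into a stitch count, and let the row-to-row difference tell you where the increases and decreases go. A rough sketch of that intuition (emphatically not the paper’s algorithm), assuming a watertight mesh and the trimesh library:

```python
# A toy illustration of the slicer analogy, NOT the paper's algorithm.
# Slice a watertight mesh into horizontal rows and estimate stitches per
# row from each cross-section's perimeter. Assumes the trimesh library;
# the gauge numbers are hypothetical.
import numpy as np
import trimesh

STITCH_WIDTH = 2.5  # mm per stitch, hypothetical gauge
ROW_HEIGHT = 2.0    # mm per knitted row, hypothetical gauge

mesh = trimesh.load("bunny.stl")
z_min, z_max = mesh.bounds[:, 2]

prev = None
for z in np.arange(z_min + ROW_HEIGHT / 2, z_max, ROW_HEIGHT):
    # Intersect the mesh with a horizontal plane: an (n, 2, 3) array
    # of line segments tracing the cross-section.
    segments = trimesh.intersections.mesh_plane(
        mesh, plane_normal=[0, 0, 1], plane_origin=[0, 0, z])
    if len(segments) == 0:
        continue
    perimeter = np.linalg.norm(segments[:, 0] - segments[:, 1], axis=1).sum()
    stitches = max(3, round(perimeter / STITCH_WIDTH))
    delta = 0 if prev is None else stitches - prev
    print(f"z={z:6.1f}mm  {stitches:4d} stitches  ({delta:+d} inc/dec)")
    prev = stitches
```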

The details are discussed and very well illustrated in their paper entitled Automatic Machine Knitting of 3D Meshes and a video (unfortunately not embeddable) shows the software interface in action, along with some of the stuffing process and the final adorable (ok they’re a little creepy too) stuffed shapes.

Since the publication of their paper, [the Textiles Lab] has also released an open-source version of their autoknit software on GitHub. Although the compilation and installation steps look non-trivial, the actual interface seems approachable for a dedicated hobbyist. Anyone comfortable with 3D slicer software should be able to load a model, define the two seams necessary to close the shape (these have to be sewn by hand after stuffing), and output the knitting machine code.

Previous knits: the Knit Universe, Bike-driven Scarf Knitter, Knitted Circuit Board.

Get Great 3D Scans With Open Photogrammetry

Not long ago, photogrammetry — the process of stitching multiple photographs taken from different angles into a 3D whole — was hard stuff. Nowadays, it’s easy. [Mikolas Zuza] over at Prusa Printers has a guide showing off cutting-edge open-source software that’s not only more powerful than the older tools, but also easier to use. They’ve also produced a video, which we’ve embedded below.

Basically, this is a guide to using Meshroom, which is based on the AliceVision photogrammetry framework. AliceVision is a research platform, so it’s got tremendous capability but doesn’t necessarily focus on the user experience. Enter Meshroom, which makes that power accessible.

Meshroom does all sorts of cool tricks, like showing you how the 3D reconstruction looks as you add more images to the dataset, so that you’ll know where to take the next photo to fill in incomplete patches. It can also reconstruct from video, say if you just walked around the object with a camera running.
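
If you go the video route, the preprocessing boils down to pulling a spread of stills out of the footage to feed the pipeline. Here’s a quick sketch of that step, assuming the opencv-python package; the filename and sampling interval are placeholders:

```python
# Pull every Nth frame from a walk-around video so the stills can be
# fed to a photogrammetry pipeline like Meshroom. Assumes opencv-python;
# the filename and sampling interval are arbitrary.
import os
import cv2

EVERY_N = 15  # keep one frame in fifteen; tune for camera speed

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("walkaround.mp4")
index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % EVERY_N == 0:
        cv2.imwrite(f"frames/img_{saved:04d}.jpg", frame)
        saved += 1
    index += 1
cap.release()
print(f"Saved {saved} frames for reconstruction")
```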

The final render is computationally intensive, but AliceVision makes good use of CUDA on Nvidia graphics cards, so you can cut your overnight renders down to a few hours if you’ve got the right hardware. But even if you have to wait for the results, they’re truly impressive. And best of all, you can get started building up your 3D model library using nothing more than that phone in your pocket.

If you want to know how to use the models that come out of photogrammetry, check out [Eric Strebel]’s video. And if all of this high-tech software foolery is too much for you, try a milk-based 3D scanner.

Three Dimensions: What Does That Really Mean?

The holy grail of display technology is to replicate what you see in the real world. This means video playback in 3D — but when it comes to displays, what is 3D anyway?

You don’t need me to tell you how far away we are from replicating real life on a video display. Despite all the hype, there are only a couple of different approaches to faking those three dimensions. Let’s take a look at what they are, and why they can call it 3D, but they’re not fooling us into believing we’re seeing real life… yet.

Use Nodes To Code Loads Of G-code For 3D CNC Carving

Most CNC workflows start with a 3D model, which is then passed to CAM software to be converted into the G-code language that CNC machines love and understand. G-code, however, is simple enough that rudimentary coding skills are all you need to start writing your very own programmatic CNC tool paths. Any language that can output plain text can directly drive powerful motors and rapidly spinning blades.
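
To make that concrete, here’s a minimal Python sketch that writes a single-pass sine-wave toolpath as plain-text G-code. The feed rate, cut depth, and safe height are arbitrary assumptions for illustration, not values from [siemenc]’s project, so sanity-check everything against your own machine:

```python
# A minimal sketch: emit plain-text G-code for a single-pass sine-wave
# groove in the XY plane. The feed rate, depths, and safe height below
# are arbitrary assumptions; verify against your own machine first.
import math

SAFE_Z = 5.0    # mm, rapid-move clearance height
CUT_Z = -1.5    # mm, cutting depth
FEED = 600      # mm/min feed rate

lines = [
    "G21 ; units are millimeters",
    "G90 ; absolute positioning",
    f"G0 Z{SAFE_Z}",
    "G0 X0 Y0",
    f"G1 Z{CUT_Z} F{FEED}",  # plunge to cutting depth
]
for i in range(201):
    x = i * 0.5                   # 100 mm long path
    y = 10.0 * math.sin(x / 8.0)  # gentle 20 mm peak-to-peak wiggle
    lines.append(f"G1 X{x:.3f} Y{y:.3f} F{FEED}")
lines.append(f"G0 Z{SAFE_Z}")     # retract when done

with open("wiggle.gcode", "w") as f:
    f.write("\n".join(lines) + "\n")
```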

[siemenc] shows us how to use Grasshopper – a visual node-based programming system for Rhino 3D – to output G-code that makes some interesting patterns and shapes in wood when fed to a ShopBot. Though Rhino is a bit expensive and thus not too widely available, [siemenc] walks through some background, theory, and procedures that could be useful and inspirational no matter what software or programming language you’re using to create your bespoke G-code.

For links to code and related blog posts, plus more lovely pictures of intricately carved plywood, check out [siemenc]’s personal site as well.

[via Bantam Tools]

Robot Maps Rooms With Help From iPhone

The Unity engine has been around since Apple started using Intel chips, and has made quite a splash in the gaming world. Unity allows developers to create 2D and 3D games, but there are some other interesting applications of this gaming engine as well. For example, [matthewhallberg] used it to build a robot that can map rooms in 3D.

The impetus for this project was a robotics company that used a series of robots around their business. The robots navigate using computer vision, but couldn’t map rooms from scratch. They hired [matthewhallberg] to tackle this problem, and this robot is a preliminary result. Using the Unity engine and an iPhone, the robot can operate in one of three modes: user control, object following, and 3D mapping.

The robot seems fairly easy to construct and carries only an iPhone, a NodeMCU, some motors, and a battery. Most of the computational work is done remotely, with the robot simply receiving its movement commands from another computer. There’s a lot going on here software-wise, with a number of toolkits and software packages to install and get talking to one another, but the video below does a good job of showing what you’ll need and how it all works together. If that’s all too much, there are other robots that can get you started in the world of computer vision and mapping.
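
The write-up doesn’t spell out the wire protocol, but the offboard-brain architecture is easy to picture: the computer does the vision work and fires short drive commands at the NodeMCU over WiFi. Here’s a hypothetical sketch of that link in Python, where the IP address, port, and command strings are all invented for illustration:

```python
# A hypothetical sketch of the offboard-control link: a desktop does the
# vision work and sends short drive commands to the robot's NodeMCU over
# UDP. The address, port, and command vocabulary are all invented here.
import socket
import time

ROBOT_ADDR = ("192.168.1.50", 4210)  # made-up NodeMCU IP and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def drive(command: str, duration: float) -> None:
    """Send a drive command, hold it for a moment, then stop."""
    sock.sendto(command.encode(), ROBOT_ADDR)
    time.sleep(duration)
    sock.sendto(b"stop", ROBOT_ADDR)

drive("forward", 1.0)  # e.g. nudge ahead for one second
drive("left", 0.5)
```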

3D Printed Tourniquets Are Not A Cinch

Saying that something is a cinch is a way of saying that it is easy. Modeling a thin handle with a hole through the middle seems like a simple task, accomplishable in a single afternoon, including the time to print a copy or two. We are here to tell you that is only the first step when making tourniquets for gunshot victims. Content warning: there are real pictures of severe trauma. Below is a video of a training session with the tourniquets at the Hayat Center in Gaza, using a simulated wound on a mannequin.

On the first pass, many things are done correctly: the handle is the correct length and diameter, the strap hole fits the strap, and the part is well oriented on the platen. As with many first iterations, it looks good on a screen, but in the real world, we all live under Murphy’s law. In practice, some of the strap holes had sharp edges that cut into the strap, and one of the printed buckles broke unexpectedly.

On the whole, the low cost and availability of the open-source tourniquets outweigh their flaws when the alternative is operating without one at all. Open-source medical devices are not just for use in the field; they can help with training too. This tourniquet is saving people and proving that modeling skills can be a big help in the real world.