Lost In Space Gets 3D Printing Right

At a time when movies and television so often hyper-sensationalize engineering, or just plain get things wrong, here’s a breath of fresh air. There’s a Sci-Fi show out right now that wove 3D printing into the story line in a way that is correct, unforced, and a fitting complement to that fictional world.

With the amount of original content Netflix is pumping out these days, you may have missed the fact that they’ve recently released a reboot of the classic Lost in Space series from the 1960s. Sorry LeBlanc fans, this new take on the space-traveling Robinson family pretends the 1998 movie never happened, as have most people. It follows the family from their days on Earth until they get properly lost in space, as the title would indicate, and is probably most notable for the exceptional art direction and special effects work that’s closer to Interstellar than the campy effects of yesteryear.

But fear not, Dear Reader. This is not a review of the show. To that end, I’ll come right out and say that Lost in Space is overall a rather mediocre show. It’s certainly gorgeous, but the story lines and dialog are like something out of a fan film. It’s overly drawn out, and in the end doesn’t progress the overarching story nearly as much as you’d expect. The robot is pretty sick, though.

No, this article is not about the show as a whole. It’s about one very specific element of the show that was so well done I’m still thinking about it a month later: its use of 3D printing. In Lost in Space, the 3D printer aboard the Jupiter 2 is almost a character itself. Nearly every member of the main cast has some kind of interaction with it, and it’s directly involved in several major plot developments during the season’s rather brisk ten-episode run.

I’ve never seen a show or movie that not only featured 3D printing as such a major theme, but that also did it so well. It’s perhaps the most realistic portrayal of 3D printing to date, but it’s also a plausible depiction of what 3D printing could look like in the relatively near future. It’s not perfect by any means, but I’d be exceptionally interested to hear if anyone can point out anything better.


SiFive Releases Smaller, Lower Power RISC-V Cores

Today, SiFive has released two new cores designed for the lower end of computing, adding to the company’s existing portfolio of microcontrollers and SoCs based on the RISC-V ISA. Over the last two years, SiFive has introduced a number of cores based on RISC-V, an open architecture that allows anyone to design and develop their own microcontroller or microprocessor platform. These two new cores fill out the low-power end of SiFive’s portfolio.

The two new cores in the announcement are the SiFive E20 and E21, both meant for low-power applications; according to SiFive presentations, they’re along the lines of an ARM Cortex-M0+ and a Cortex-M4, respectively. These are cores, not chips yet, but since the introduction of SiFive’s first microcontrollers, many companies have jumped on the RISC-V bandwagon. Western Digital, for example, has committed to using the RISC-V architecture in SoCs and as controllers for hard drives, SSDs, and NASes.

The first chip from SiFive was the HiFive 1, which was based on the SiFive E31 CPU. We got our hands on the HiFive 1 early last year, and it is a beast. Running the standard complement of benchmarks, it’s approximately twice as fast in raw power as the Teensy 3.6, which is built around the Kinetis K66, a 180 MHz ARM Cortex-M4F; on a pure calculations-per-clock basis, the E31 is about 1.5 times as fast. This is remarkable because the Teensy 3.6 is our go-to standard when you want to toggle pins really, really fast with a cheap, readily available microcontroller platform.

But sometimes you don’t need the fastest or best microcontroller. To that end, SiFive is looking toward lower-power microcontrollers based on the RISC-V ISA. The new offerings are built on the E2 Core IP series, with two standard cores. The E21 core provides mainstream performance for microcontrollers, and the E20 is the most power-efficient core SiFive offers. In effect, the E21 is a replacement for the ARM Cortex-M3 and Cortex-M4, while the E20 is a replacement for the ARM Cortex-M0+.

Just a few months ago, SiFive released a gigantic, multicore, Linux-capable processor called the HiFive Unleashed. With support for DDR4 and Gigabit Ethernet, this chip would be more at home in a desktop than an Internet of Things thing. The most popular engine ever produced isn’t a seven-liter turbo diesel, it’s whatever goes into a Honda econobox; likewise, many more low-power microcontrollers like the Cortex-M0 and -M3 are sold than the newer, more powerful, and more expensive chips. Even though it’s not as exciting as a new workstation CPU, the world needs microcontrollers, and the more Open, the better.

The Electric Vehicles Of EMF Camp

There is joy in the hearts of British and European hardware and software hackers and makers, for this is an EMF Camp year. Every couple of years, our community comes together for three summer days in a field somewhere, and thanks to a huge amount of work from its organizers and a ton of volunteers, enjoys an entertaining, stimulating, and engrossing hacker camp.

One of the features of a really good hacker camp is the electric vehicles. Not full-on electric cars, but personal camp transport. Because only the technically inept walk, right? From Hitchin’s Big Hak to TOG’s duck, with an assortment of motorized armchairs and beer crates thrown in, these give the full creativity of the hardware community free rein through the medium of overdriven motors and cheap Chinese motor controllers.

This year at EMF Camp there will be an added dimension that should bring out a new wave of vehicles: a Hacky Racers event. Novelty electric vehicles will compete for on-track glory, will parade around the camp, and will no doubt also sometimes release magic smoke. There is still plenty of time to enter, so if you’re going to EMF, get building!

We have an interest in these little electric vehicles, not least because there may well be a Hackaday-branded machine on the tarmac. We’d like to feature some of them over the weeks running up to the event, so if you are building one and have a write-up handy, please tell us about it in the comments. Charge your batteries, and we’ll see you there!

Header image: [Mark Mellors], with permission.

Nvidia Transforms Standard Video Into Slow Motion Using AI

Nvidia is back at it again with another awesome demo of applied machine learning: artificially transforming standard video into slow motion – they’re so good at showing off what AI can do that anyone would think they were trying to sell hardware for it.

Though most modern phones and cameras have an option to record in slow motion, it often comes at the expense of resolution, and always at the expense of storage space. For really high frame rates you’ll need a specialist camera, and you often don’t know that you should be filming in slow motion until after an event has occurred. Wouldn’t it be nice if we could just convert standard video to slow motion after it was recorded?

That’s just what Nvidia has done, all nicely documented in a paper. At its heart, the algorithm takes two frames and artificially creates one or more frames in between. This is not a hand-coded algorithm that interpolates frames; it’s a fully fledged deep-learning system. The Convolutional Neural Network (CNN) was trained on over a thousand videos – roughly 300k individual frames.

Since none of the parameters of the CNN are time-dependent, it’s possible to generate as many intermediate frames as required, which sets this solution apart from previous approaches. In some of the shots in their demo video, 30 fps video is converted to 240 fps; this requires the creation of seven additional frames for every pair of consecutive frames.
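To make that concrete, here’s a minimal sketch of a time-parameterized interpolation loop in Python. To be clear, this is not Nvidia’s code: synthesize() and slow_motion() are hypothetical names, and the linear blend inside synthesize() merely stands in for the trained CNN, which produces far more convincing intermediate frames. What the sketch does show is why a continuous time parameter t lets you place as many synthetic frames between a pair of real ones as you like.

```python
# A minimal sketch (assumed names, not Nvidia's code): a time-parameterized
# interpolator that can place any number of synthetic frames between two
# real frames. The trained CNN is replaced by a simple linear blend so the
# example stays self-contained and runnable.
import numpy as np

def synthesize(frame0: np.ndarray, frame1: np.ndarray, t: float) -> np.ndarray:
    """Stand-in for the CNN: return the frame at time t, with 0 < t < 1."""
    return (1.0 - t) * frame0 + t * frame1

def slow_motion(frames: list, factor: int) -> list:
    """Insert (factor - 1) synthesized frames between each consecutive pair.
    factor=8 turns 30 fps footage into 240 fps."""
    out = []
    for f0, f1 in zip(frames, frames[1:]):
        out.append(f0)
        for k in range(1, factor):        # t is continuous, so any factor works
            out.append(synthesize(f0, f1, k / factor))
    out.append(frames[-1])
    return out

# Two dummy 720p frames interpolated 8x: seven new frames appear between them.
clip = [np.zeros((720, 1280, 3), np.float32), np.ones((720, 1280, 3), np.float32)]
print(len(slow_motion(clip, factor=8)))  # 9
```

Swapping the real network in for the blend doesn’t change the outer loop: pick a t between 0 and 1, ask for the frame at that instant, and repeat for as many instants as the target frame rate demands.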

The video after the break is seriously impressive, though if you look carefully you can see the odd imperfection, like the hockey player’s skate or the dancer’s arm. Deep learning is as much an art as a science, and if you understood all of the research paper then you’re doing pretty darn well. For the rest of us, get up to speed by wrapping your head around neural networks and trying out the simplest TensorFlow example.
