Inceptionism: Mind Blown by What Neural Nets Think They See

Dr. Robert Hecht-Nielsen, inventor of one of the first neurocomputers, defines a neural network as:

“…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”

These ‘processing elements’ are generally arranged in layers – where you have an input layer, an output layer and a bunch of layers in between. Google has been doing a lot of research with neural networks for image processing. They start with a network 10 to 30 layers thick. One at a time, millions of training images are fed into the network. After a little tweaking, the output layer spits out what they want – an identification of what’s in a picture.

The layers have a hierarchical structure. The input layer will recognize simple line segments. The next layer might recognize basic shapes. The one after that might recognize simple objects, such as a wheel. The final layer will recognize whole structures, like a car for instance. As you climb the hierarchy, you transition from fast-changing, low-level patterns to slow-changing, high-level patterns. If this sounds familiar, we’ve talked about it before.
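To make the layer idea concrete, here is the hierarchy in miniature as a toy forward pass in Python (numpy only; the sizes, weights, and layer roles are made up for illustration – a real image network uses convolutions and learned weights):

import numpy as np

def layer(x, w):
    # Each "processing element" sums its inputs and fires if positive (ReLU)
    return np.maximum(0, w @ x)

rng = np.random.default_rng(0)
x = rng.random(64)                    # input layer: raw pixel-ish values
w1 = rng.standard_normal((32, 64))    # would learn line segments
w2 = rng.standard_normal((16, 32))    # would learn basic shapes
w3 = rng.standard_normal((8, 16))     # would learn whole objects

h1 = layer(x, w1)
h2 = layer(h1, w2)
output = layer(h2, w3)                # rough "car" / "not car" scores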

Now, none of this is new and exciting. We all know what neural networks are and do. What is going to blow your mind, however, is a simple question Google asked, and the resulting answer. To better understand the process, they wanted to know what was going on in the inner layers. They would feed the network a picture of a truck, and out would come the word “truck”. But they didn’t know exactly how the network came to its conclusion. To answer this question, they showed the network an image, and then extracted what the network was seeing at different layers in the hierarchy. Sort of like putting a Serial.print in your code to see what it’s doing.

They then took the results and had the network enhance what it thought it detected. Lower layers would enhance low-level features, such as lines and basic shapes. The higher layers would enhance actual structures, such as faces and trees. This technique reveals the level of abstraction at different layers in the hierarchy, along with the network’s primitive understanding of the image. They call this process inceptionism.
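For the curious, the enhancement step boils down to gradient ascent on the input image: pick a layer, then nudge the pixels until that layer’s activations get stronger. Here’s a minimal sketch, assuming PyTorch with a pretrained GoogLeNet – the layer choice, step size, and iteration count are our guesses, not Google’s exact recipe:

import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()

# Tap an intermediate layer; lower layers enhance edges, higher ones objects
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(target=out))

# Start from noise; in practice you would load and normalize a real photo
image = torch.rand(1, 3, 224, 224, requires_grad=True)

for _ in range(20):
    model(image)
    loss = activations["target"].norm()   # "whatever you see, show more of it"
    loss.backward()
    with torch.no_grad():
        # Normalized gradient ascent: push the pixels toward stronger activations
        image += 0.05 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()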


Be sure to check out the gallery of images produced by the process. Some have called the images dreamlike, hallucinogenic, and even disturbing. Does this process reveal the inner workings of our mind? After all, our brains are indeed neural networks. Has Google unlocked the mind’s creative process? Or is this just a neat way to make computer-generated abstract art?

So here comes the big question: is it the computer choosing these end-product photos, or a Google engineer pawing through thousands (or orders of magnitude more) to find the ones we will all drool over?

Ask Hackaday (And Adafruit): The New CEO Of MakerBot

Just a few years ago, MakerBot was the darling of the Open Hardware community. Somehow, in the middle of a garage in Brooklyn, a trio of engineers and entrepreneurs became a modern-day Prometheus, capturing a burgeoning technology into a compact, easy to use, and intoxicating product. A media darling was created, a disruptive technology was popularized, and an episode of the Colbert Report was taped.

The phrase ‘meteoric rise’ doesn’t make sense (meteors fall, after all), and since then the reputation of MakerBot has fallen through the floor, crashed through the basement, and is now lodged in one of the higher circles of hell. It’s not surprising; MakerBot took creations from their 3D object hosting site, Thingiverse, and patented them. The once-Open Source line of 3D printers was locked up behind a closed license. The new MakerBot extruder – the Smart Extruder – is so failure-prone that MakerBot offers a three pack, just so you’ll always have a replacement on hand. False comparisons to Apple abound; Apple contributes to Open Source projects. The only other way for a company to lose the support of the community built around it so quickly would be a name change to Puppy Kickers, LLC.

In the last few months, figurehead CEO of MakerBot [Bre Pettis] was released from contractual obligations, and MakerBot’s parent company, Stratasys, has filled the executive ranks with more traditional business types. It appears PR and marketing managers have noticed the bile slung at their doorstep, and now MakerBot is reaching out to the community. Their new CEO, [Jonathan Jaglom], specifically requested a hot seat be built at Adafruit for an open discussion and listening meeting. Yes, this means MakerBot is trying to get back on track, winning the hearts and minds of potential customers, and addressing issues Internet forums repeat ad nauseam.

If you’ve ever wanted to ask a CEO how they plan to stop screwing things up, this is your chance. Adafruit is looking for some direction for their interview/listening meeting, and they’re asking the community for the most pressing issues facing the 3D printing community, the Open Source community, and MakerBot the company.

Already on the docket are questions about MakerBot and Open Source, MakerBot’s desire to put DRM in filament, the horrors of the Smart Extruder and the 5th generation MakerBots, problems with Thingiverse, and the general shitty way MakerBot treats its resellers.

This isn’t all Adafruit wants to ask; the gloves are off, nothing is off the table, and they’re looking for questions from the community. What would you like to ask the MakerBot CEO?

Personally, I think the best interview questions are the ones that turn the interviewee’s own words back on them. By [Jonathan Jaglom]’s own admission, the barrier to entry for 3D design work has been substantially lowered in the last three years, ostensibly because of incredible advances in Open Source projects. Following this, do MakerBot and Stratasys owe a debt to Open Source projects, and should Stratasys contribute to the rising tide of Open Source development?

That’s just one question. There will, of course, be many more. Leave them down in the comments. “You are not [Tim Cook],” while a valid statement in many respects, is not a question.

Ask Hackaday: Long Endurance Quadcopter

Quadcopters are useful little flying machines. They can be used in all sorts of applications, from mapping to pipeline inspection to border surveillance, or simply for fun. They all have one thing in common, however – a relatively short battery life. Because quadcopters use brute force to churn through the air, they require a lot of energy. More energy for longer flights means more batteries. More batteries means more weight to carry, which requires even more energy. If you want longer flight times, something has to change. Or does it?

A small start-up company called Horizon Unmanned Systems, based in Singapore, claims their quadcopter can fly for up to four hours on a single charge, or up to two and a half hours while carrying a 2.2-pound payload. They claim to be able to pull this off with a novel approach. First, they fill the hollow frame of the quadcopter with hydrogen gas. They use that gas to power a cute little miniaturized fuel-cell/LiPo-battery hybrid gizmo. And that’s about it. The rest is just standard quadcopter stuff.

The secret to all of this is the miniaturized fuel cell, and how it works. Unfortunately, this is as close as we’re going to get (PDF) to a datasheet. Fuel cells are nifty devices that take hydrogen and oxygen and convert them into water, along with electricity. While that sounds simple, making one is not. And making a miniature one light enough for a quadcopter is downright hard.
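Some back-of-the-envelope math shows why hydrogen is worth the trouble. The constants below are textbook ballpark figures (our assumptions, not numbers from the HUS datasheet), and the comparison ignores the mass of the tank and fuel cell stack, which eats into the advantage:

H2_WH_PER_KG = 33_000    # chemical energy density of hydrogen
FUEL_CELL_EFF = 0.5      # plausible efficiency for a small PEM fuel cell
LIPO_WH_PER_KG = 180     # a decent LiPo pack

hover_power_w = 250      # assumed hover draw for a small quad
mass_budget_kg = 0.05    # 50 g of hydrogen vs. 50 g of LiPo

h2_hours = H2_WH_PER_KG * FUEL_CELL_EFF * mass_budget_kg / hover_power_w
lipo_hours = LIPO_WH_PER_KG * mass_budget_kg / hover_power_w
print(f"hydrogen: {h2_hours:.1f} h, LiPo: {lipo_hours:.2f} h")  # ~3.3 h vs ~0.04 h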

How would you increase the flight time of quadcopters? Fuel cells are a great idea, but is this technology within the reach of the modern hacker? We’ve seen people make them from scraps out of a junkyard, but how would you miniaturize it and make it light enough to be used as a practical power supply for a quadcopter?

Thanks to [Joseph Rautenbach] for the tip!

Ask Hackaday: The Internet of Things and the Coming Age of Big Data

Samsung has thrown its hat into the Internet of Things ring with its ARTIK platform. It consists of three boards, each with a capability proportional to its size. The smallest comes in at just 12x12mm, but still packs a dual-core processor running at 250MHz on top of 5 MB of flash, with Bluetooth. The largest is 29x39mm and sports a 1.3GHz ARM, 18 gigs of memory, and an array of connectivity options. The ARTIK platform is advertised to be completely compatible with the Arduino platform.

Each of these little IoT boards is also equipped with Samsung’s Secure Element. Worthy of an article on its own, this crypto hardware appears to be built into the processor, and supports several standards. If you dig deep enough, you’ll find the preliminary datasheet (PDF) to each of these boards. It is this Secure Element thing that separates the ARTIK platform from the numerous other IoT devices that have crossed our memory banks, and brings forth an interesting question. With the age of the Internet of Things upon us, how do we manage all of that data while keeping it secure and private?
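Samsung hasn’t published much about how the Secure Element works, but the job description of any secure element is the same: keep the key inside the chip and only let signatures out. Here’s that idea faked in software as a minimal sketch – pure illustration, not ARTIK code:

import hashlib
import hmac
import os

class FakeSecureElement:
    """Software stand-in: in real hardware the key never leaves the chip."""
    def __init__(self):
        self._key = os.urandom(32)

    def sign(self, message: bytes) -> bytes:
        # The host hands data in and gets an authentication tag back
        return hmac.new(self._key, message, hashlib.sha256).digest()

element = FakeSecureElement()
reading = b'{"sensor": "toaster", "state": "done"}'
tag = element.sign(reading)   # ship reading + tag; the key stays put
print(tag.hex())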

What is The Internet of Things?

These kinds of terms get thrown around too much. It was just the other day I was watching television and heard someone talk about ‘hacking’ their dinner. Really? Wikipedia defines the IoT as:

“a network of physical objects or “things” embedded with electronics, software, sensors and connectivity to enable it to achieve greater value and service by exchanging data with the manufacturer, operator and/or other connected devices.”

Let’s paint a realistic picture of this. Imagine your toaster, shower head, car, and TV were equipped with little IoT boards, each of which connects to your personal network. You walk downstairs, put the toast in the toaster, and turn on the TV to catch the morning traffic. A little window pops up and tells you the temperature outside, and asks if you want it to start your car and turn on the air conditioning. You select “yes”, but not before you get a text message saying your toast is ready. Meanwhile, your daughter complains that the shower stopped working, and you have to remind her that you’ve programmed it to use only so much water per shower, and that there is a current clean water crisis in the country.
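Under the hood, that whole morning scene is just small messages hopping around a local network. Here’s how the toaster’s announcement might look over MQTT, a popular IoT protocol – this assumes the paho-mqtt package and a broker on your LAN, and the address and topic name are made up:

import paho.mqtt.client as mqtt   # paho-mqtt 1.x style API

client = mqtt.Client()
client.connect("192.168.1.10")    # your home broker (assumed address)
client.publish("home/kitchen/toaster", "toast ready")
client.disconnect()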

This is the future we all have to look forward to. A future that we will make. Why? Because we can. But this future with its technical advancements does not come without problems. We’ve already seen how malicious hackers can interfere with these IoT devices in not-so-friendly ways.

Is it possible for our neighbor’s teenage kid to hack into our shower head? Could she turn our toaster on when we’re not home? Or even start our car? Let’s take this even further – could the government monitor the amount of time you spend in the shower? The amount of energy your toaster uses? The amount of time you let your car idle?

Clearly, the coming age of the Internet of Things doesn’t look as nice when we lose the rose-colored glasses. The question is how do we shape our future connected lives in a way that is secure and private? If closed source companies like Samsung get their IoT technology into our everyday household items, would you bet a pallet of Raspberry Pis that the government will mine them for data?

This, however, does not have to happen. This future is ours to build. We know how it works – down to the ones and zeros. There is no fate, except that which we make. Can we make the coming IoT revolution open source? Because if we can, our community will be able to help ensure safety and privacy and keep our personal data out of the government’s hands. If we cannot, and the closed source side of things wins, we’ll have no choice but to dig in and weed out the vulnerabilities the hard way. So keep your soldering irons sharp and your Bus Pirates calibrated. There’s a war brewing.

Ask Hackaday: Fixing Your Tractor Could Land You Behind Bars

It’s 9AM on any given Sunday. You can be found in your usual spot – knee-deep in wires and circuit boards. The neighbor’s barking dog doesn’t grab your attention as you pry the cover off of a cell phone, but the rustling of leaves by the back door does. Seconds later, several heavily armed SWAT officers bust in and storm your garage. You don’t have time to think as they throw you down on the cold, hard concrete floor. You’re gripped by a sharp stinging pain as one of the officers puts his knee in the small of your back. Seconds later, you’re back on your feet, being led to the back of an awaiting police cruiser. You catch the gaze of one of your neighbors and wonder what they might be thinking as your inner voice squeaks: “What did I do wrong?”

The answer to this question would come soon enough. Your crime – hacking your dad’s tractor.

“That’s like saying locking up books will inspire kids to be innovative writers, because they won’t be tempted to copy passages from a Hemingway novel.”

-Kyle Wiens

John Deere is trying to convince the Copyright Office that farmers don’t really own the tractors they buy from them. They argue that the computer code that runs the systems is not for sale, and that purchasers of the hardware are simply receiving “an implied license for the life of the vehicle to operate the vehicle.”

In order to modify or “hack” any type of software, you have to copy it first. Companies don’t like the copying thing, so many put locks in place to prevent this. But because hackers are hackers, we can easily get past their childish attempts to keep code and information out of our hands. So now they want to make it illegal. John Deere is arguing that if it were legal for hackers to copy and modify their software, it could lead to farmers listening to pirated music while plowing a corn field. No, I am not making this up: dig into this 25-page facepalm-fest (PDF) written by John Deere and you’ll be just as outraged.

Trying to keep hackers out using the DMCA is not new. Many companies argue that locking hackers out helps to spur innovation, when in fact the opposite is true: 3D printers, drones, VR headsets…all of these came from us! The Copyright Office, after holding a hearing and reading comments, will make a decision in July on whether John Deere’s argument has any merit.

Let us know what you think about all this. Can hackers and the free market learn to live in harmony? We just want to fix our tractor!

Thanks to [Malachi] for the tip!

Ask Hackaday: Is Amazon Echo the Future of Home Automation?

Unless you’ve been living under a case of 1 farad capacitors, you’ve heard of the Amazon Echo. Roughly the size of two cans of beans, the Echo packs quite a punch for such a small package. It’s powered by a Texas Instruments DM3725 processor riding on 256 megs of RAM and 4 gigs of SanDisk iNAND ultra flash memory. Qualcomm Atheros takes care of the WiFi and Bluetooth, and various TI chips take care of the audio codecs and amplifiers.

What’s unique about the Echo is its amazing voice recognition. While the “brains” of the Echo exist somewhere on the Internets, the hardware for this circuitry is straightforward. Seven, yes, seven microphones are positioned around the top of the device. They feed into four Texas Instruments 92dB SNR low-power stereo ADCs. The hardware and software make for very capable voice recognition that works from anywhere in the room. For the output sound, two speakers are utilized – a woofer and a tweeter – both driven by a 15-watt TI Class D amplifier. Check out this full teardown for more details of the hardware.
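Amazon hasn’t published its audio pipeline, but delay-and-sum beamforming is the textbook reason to ring a device with seven microphones: delay each mic’s signal so that sound from a chosen direction lines up, then average. A minimal numpy sketch, with placeholder delays for a single steering angle:

import numpy as np

fs = 16_000                        # sample rate, Hz
mics = np.random.randn(7, fs)      # stand-in for 7 one-second mic captures
delays = [0, 3, 5, 6, 6, 5, 3]     # per-mic delays in samples (illustrative)

aligned = [np.roll(sig, -d) for sig, d in zip(mics, delays)]
beam = np.mean(aligned, axis=0)    # coherent sum favors the steered direction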


Now that we have a good idea of the hardware, we have to accept the bad news that this is a closed source device. While we’ve seen other hacks where people poll the to-do list through the unofficial API, it still leaves a lot to be desired. For instance, the wake word, or the word which signals the Echo to start listening for commands, is either “Alexa” or “Amazon”, and there is no way to change it, even though that should be easily doable in software. It should be obvious that people will want to call it “Computer” or “Jarvis”. But do not fret, my hacker friends, for I have good news!

It appears that Amazon sees (or had seen all along) that home automation is the future of the Echo. They now officially support Philips Hue and Belkin WeMo gadgets. The Belkin WeMo, which is no stranger to the hacker’s workbench, already has a good handle on home automation, making the ability to control things in your house with the Echo tantalizingly close. See the video below where I test it out. Now, if you’re not excited yet, you haven’t heard of the WeMo Maker, a device which they claim will let you “control nearly any low-voltage electronics device”. While the WeMo Maker is not supported as of yet, it surely will be in the near future.
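The WeMo side is already scriptable today, which is part of why the Echo tie-in feels so close. For instance, with the community pywemo library (assuming it’s installed and a WeMo device is on your network):

import pywemo

devices = pywemo.discover_devices()   # UPnP scan of the local network
print([d.name for d in devices])
devices[0].on()                       # flip the first switch found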

We know it sucks that all of this is closed source. But it sure is cool! So here’s the question: is the Echo the future of home automation? Sure, it has its obvious flaws, and one would think home automation is not exactly Amazon’s most direct business model (they just want you to buy stuff). However, it works very well as a home automation core. Possibly better than anything else out there right now – both closed and open source.

Do you think Amazon would ever open the door to letting the Echo run open source modules which allow the community to add control of just about any wireless devices? Do you think that doing so would crown Amazon the king of home automation in the years to come?


Ask Hackaday: Quadcopter in Near Space?

Your mission, should you choose to accept it, is to send a quadcopter to near space and return it safely to the Earth. Getting it there is not that difficult. In fact, you can get pretty much anything you want to near space with a high altitude weather balloon. Getting it back on the ground in one piece is a whole other ballgame.

Why does someone need to do this? Well, it appears the ESA’s StarTiger team is taking a page out of NASA’s book and wants to use a Sky Crane to soft land a rover on Mars. But instead of using rockets to hold the crane steady in the Martian sky, they want to use…you guessed it, a quadcopter. They’re calling it the Dropter.


At first glance, there seems to be a lot wrong with this approach. The atmosphere on Mars is about 100 times less dense than the Earth’s atmosphere at sea level. How do props operate in these conditions? Testing would need to be done, of course, and the Earth’s upper atmosphere is the perfect place to carry it out. At 100,000 feet, the density of the stratosphere is about the same as that of the atmosphere at the Martian surface. And 100,000 feet is prime high-altitude balloon territory. Not to mention the gravity on Mars is about 38% of Earth’s gravity, meaning a 5.5-pound model on Earth could accurately represent a 15-pound model on Mars.
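That scaling claim is easy to sanity-check (the gravity figure is a standard reference value):

MARS_G_FRACTION = 0.38   # Mars surface gravity as a fraction of Earth's

mars_model_lb = 15.0
earth_stand_in_lb = mars_model_lb * MARS_G_FRACTION
print(f"{mars_model_lb} lb on Mars weighs what "
      f"{earth_stand_in_lb:.1f} lb does on Earth")   # ~5.7 lb, near the 5.5 quoted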

With all of these facts taken into consideration, one can conclude that realistic testing of a scale model Martian quadcopter is within the grasp of the hacker community. We’ve seen some work on high altitude drones before, but never a quadcopter.

Now it’s your turn to do something no one has ever done before. Think you’ve got what it takes to pull such a project off? Let us know what your approach to the challenge would be in the comments.
