Ask Hackaday: Google Beat Go; Bellwether or Hype?

We wake up this morning to the news that Google’s deep neural network project called AlphaGo has beaten the second-ranked world Go master (who happens to be a human being). This is the first of five matches between the two adversaries that will play out this week.

On one hand, this is a sign of maturing technology. It has been almost twenty years since Deep Blue beat Garry Kasparov, the reigning chess world champion at the time. Although there are still four games to play against Lee Sedol, it was recently reported that AlphaGo beat European Go champion Fan Hui in five straight games. Go is generally considered a more difficult game for machine minds to play than chess, because Go has a much larger pool of possible moves at any given time.

Does This Matter?

Okay, the news part of this event has been covered: machine beats man. Does it matter? Will this affect your life and how? We want to hear what you think in the comments below. But I’m going to keep going with some of my thoughts on the topic.

You’re still better at Ms. Pacman [Source: DeepMind paper in Nature]
Let’s look first at what AlphaGo did to win. At its core, the game of Go is won by figuring out where your opponent will likely make a low-percentage move and then capitalizing on that choice. Know Your Enemy has been a tenet of strategy for a few millennia now and it holds true in the digital age. In addition to the rules of the game, AlphaGo was fed a healthy diet of 30 million positions from expert games. This builds behavior recognition into the system. Not just what moves can be made, but what moves are most likely to be made.
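The idea of “not just what moves can be made, but what moves are most likely to be made” can be sketched in a few lines. This toy Python snippet is a simple frequency count, not AlphaGo’s actual deep policy network, and the positions and moves are invented for illustration – but it shows the principle of ranking candidate moves by how often experts chose them:

```python
from collections import Counter, defaultdict

# Invented (position, expert move) pairs standing in for the
# 30 million expert positions AlphaGo was trained on.
expert_games = [
    ("position_A", "D4"), ("position_A", "Q16"), ("position_A", "D4"),
    ("position_B", "C3"), ("position_B", "C3"),
]

# Count what experts actually played from each position.
move_counts = defaultdict(Counter)
for position, move in expert_games:
    move_counts[position][move] += 1

def likely_moves(position):
    """Return moves ranked by how often experts chose them here."""
    counts = move_counts[position]
    total = sum(counts.values())
    return [(move, n / total) for move, n in counts.most_common()]
```

A real policy network generalizes to positions it has never seen; a lookup table like this cannot, which is exactly why deep learning was needed.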

DeepMind, the company behind AlphaGo which was acquired by Google in 2014, has published a paper in Nature about their approach. They were even nice enough to let us read without dealing with a paywall. The secret sauce is the learning process, which at its core tries to mimic how living entities learn: observe repetitively while assigning values to outcomes. This is key, as it leads past “intellect” to “intelligence” (the “I” in AI that everyone seems to be waiting for). But this is a bastardized version of “intelligence”. AlphaGo is able to recognize and predict behavior, then make choices that lead to a desired outcome. This is more than intellect, as it does weigh the purpose of an opponent’s decisions. But it falls short of intelligence, as AlphaGo doesn’t consciously understand the purpose it has detected. In my mind this is exactly what we need. Truly successful machine learning will be able to make sense out of sometimes irrational input.

The paper in Nature doesn’t go into detail about Go, but it explains the learning system as applied to Atari 2600 games. The algorithm was given 210×160 color video at 60 Hz as input and told it could use a joystick with one button. From there it taught itself to play 49 games. It was not told the purpose or the rules of the games, but it was given examples of scores from human performance and rewarded for its own quality performances. The chart above shows that it learned to play 29 of them at or above human skill level.
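The “observe repeatedly, assign values to outcomes, get rewarded” loop can be illustrated with tabular Q-learning – a drastic simplification of DeepMind’s deep Q-network, run here on a made-up five-state toy task rather than Atari video:

```python
import random

# Toy task (invented for illustration): walk along states 0..4,
# stepping right (+1) or left (-1), and earn a reward of 1 for
# reaching state 4. The agent is never told this goal; it only
# sees rewards, exactly as DeepMind's agent only saw scores.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration
ACTIONS = (1, -1)
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

random.seed(0)
for episode in range(500):
    s = 0
    while s != 4:
        # Mostly exploit the best-known action, occasionally explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(4, max(0, s + a))
        reward = 1.0 if s2 == 4 else 0.0
        # Nudge the value of (s, a) toward reward + discounted future value.
        future = 0.0 if s2 == 4 else max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * future - Q[(s, a)])
        s = s2

# After training, the greedy policy heads straight for the reward.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(4)}
```

DeepMind replaced the lookup table with a deep network so the same value-learning idea could scale to raw video frames, but the update rule above is the conceptual core.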


Ask Hackaday: Is PLA Biodegradable?

The most popular plastic for 3D printers is PLA – polylactic acid – a plastic that’s derived from corn starch, inedible plant detritus, or sugar cane, depending on where in the world it was manufactured. Being derived from natural materials, PLA is marketed as biodegradable. You don’t need to worry about low-poly Pokemon and other plastic trinkets filling landfills when you’re printing with PLA; all these plastic baubles will return to the Earth from whence they came.

3D printers have been around for a few years now, and objects printed in PLA have now been around the sun a few times. A few of these objects have been completely forgotten. How’s that claim of biodegradability holding up? The results are mixed, and as always, more data is needed.

A few weeks ago, [LazyGecko] found one of his first experiments in 3D printing. In 2012, he was experimenting with tie-dyeing PLA prints by putting them in a jar filled with water and blue dye. The jar was then placed in the back of his cupboard and quickly forgotten. 3.5 years later, [LazyGecko] remembered his experiment. Absolutely nothing had happened, save for a little blue dye turning the print a pastel baby blue. The print looks and feels exactly like it did the day it came off the printer.

[LazyGecko]’s blog post was noticed by [Bill Waters], who has one datum that points to PLA being biodegradable. In 2015, [Bill] printed a filter basket for his fish tank. The first filter basket worked well, but he made a small design change a week later, printed out another, and put the first print in storage. He now has two nearly identical prints: one in constant use in a biologically interesting environment, the other sitting on a shelf for a year.

[Bill]’s inadvertent experiment is very close to the best possible experimental design to make the case for PLA biodegradability. The 3D printed filter basket in constant use for a year suffered significant breakdown, and the honeycomb walls are starting to crumble. The ‘inert’ printed filter basket looks like it just came off the build plate.

If that’s not confusing enough, [Bill] also has another print that has spent a year in a fish tank. This end cap for a filter spray bar didn’t see any degradation, despite being underwater in a biologically active environment. The environment is a little different from a filter basket, though; an aquarium filter is designed to break down organics.

To the question ‘is PLA biodegradable?’ the most accurate answer is ‘maybe’. Three data points from uncontrolled environments aren’t enough to draw any conclusions. There are, undoubtedly, more forgotten 3D prints out there, and more data to test the claim that PLA is biodegradable.

This is where you come in. Do you have some forgotten prints out there? Your input is needed: the fruits of your labors are evidence. Your prints might be decaying, and we want to hear about it in the comments below.

Ask Hackaday: Selling CAD Prints That Are Not Yours

[Louise] tried out her new E3D Cyclops dual extrusion system by printing a superb model dragon. The piece was sculpted in Blender, stands 13cm tall and can be made without supports. It’s an impressive piece of artwork that reflects the maker’s skill, dedication and hard work. She shared her creation on the popular Thingiverse website which allows others to download the file for use on their own 3D printer. You can imagine her surprise when she stumbled upon her work being sold on eBay.

It turns out that the owner of the eBay store is not just selling [Louise]’s work, he’s selling thousands of other models taken from the Thingiverse site. This sketchy and highly unethical business model has not gone unnoticed, and several people have lodged complaints with both Thingiverse and eBay. Now, there are lots of things to talk about here, but the 800-pound high-voltage transformer in the room is the legality of the whole thing. What he’s doing might be unethical, but is it illegal?

When [Louise] politely asked the eBay store owner to remove her work, he responded with:

“When you uploaded your items onto Thingiverse for mass distribution, you lost all rights to them whatsoever. They entered what is known in the legal world as “public domain”. The single exception to public domain rules are original works of art. No court in the USA has yet ruled a CAD model an original work or art.”

Most of the CAD models uploaded to Thingiverse are shared under a Creative Commons license, the basic form of which is pretty clear in its assertion that anyone can profit from the work. This would seem to put the eBay store owner in the clear for selling the work, though it should be noted that he’s not properly attributing the work to the original creators. There are other variants of the license, some of which prohibit commercial use of the work. In those cases, the eBay store owner would seem to be in obvious violation of the license.

There are also questions swirling around his use of images. He’s not taking the CAD models and making his own prints to photograph; he’s lifting the images of the prints from the Thingiverse site along with the CAD files. It’s a literal copy/paste business model.

With that said, the eBay store owner makes a fairly solid argument in the comments section of the post that broke the news. Look for the giant brick of text from the poster named “JPL” to read it. He argues that the Thingiverse non-commercial license is just lip service with no legal authority. One example: Thingiverse often provides links to companies that will print a CAD design on the very page of a design that’s marked non-commercial. He sums up one of many good points with the quote below:

“While we could list several other ways Thingiverse makes (money), any creator should get the picture by now-Thingiverse exists to make Stratasys (money) off of creators’ designs in direct violation of its very own “non-commercial” license. If a creator is OK with a billion-dollar Israeli company monetizing his/her designs, but hates on a Philly startup trying to make ends-meet, then they have a very strange position indeed.”

OK Hackaday readers, you have heard both sides of the issue. Here are the questions:

1. Is the eBay seller involved in illegal activity?

2. Can he change his approach to stay within the limits of the license? For instance, what if he credits the original maker on the sale page?

3. How would you feel if you found your CAD file for sale on his eBay store?

Ask Hackaday: I Love The Smell Of Burnt Hair In The Morning

At the end of the 19th century, [King Camp Gillette] had the idea of creating a disposable razor blade that didn’t need sharpening. There was one problem with this idea: metallurgy was not yet advanced enough to produce paper-thin carbon steel blades and sharpen them for a close shave. In 1901, [William Nickerson] solved this problem, and the age of disposable razors began.

The Skarp laser razor

This Kickstarter would have you believe a new era of beard technology is dawning. It’s a laser razor called Skarp, and it’s on track to become one of the most funded Kickstarters of all time. The only problem? Even with relatively good documentation on the Kickstarter campaign, a demo video, a patent, and an expert in the field of cosmetic lasers, no one but the creators can figure out how it works.

Instead of using technology that has been tried and tested for thousands of years, the Skarp uses a laser to shave hairs off, right at the surface of the skin. You need only look at a billboard for laser hair removal to realize this is possible, but building a laser razor is something that has eluded us for decades. This patent from 1986 at the very least demonstrates the beginnings of the idea – put a laser beam in a handheld package and plunge it into a beard. This patent from 2005 uses fiber optics to send a laser beam to a handheld razor. Like anything out of the sci-fi genre, the laser razor is a well-trodden idea in the world of invention.

But Skarp thinks it has solved all of the problems that previously kept lasers out of your medicine cabinet.


Ask Hackaday: Arduino in Consumer Products

Speak with those who consider themselves hardcore engineers and you might hear “Arduinos are for noobs” or similar nonsense. These naysayers see the platform as a simplified, overpriced, and over-hyped tool that lets you blink a few LEDs or maybe even read a sensor or two. They might say that Arduino is great for high school projects and EE wannabes tinkering in their garage, but REAL engineering is done with ARM, x86, or PICs. Guess what? There are Arduino-compatible boards built around all three of those architectures. Below you can see but three examples in the Due, Galileo, and Fubarino SD boards.

This attitude towards Arduino exists mainly out of ignorance. So let’s break down a few myths and preconceived biases that might still be lurking amongst some EEs and then talk about Arduino’s ability to move past the makers.

Arduino is NOT the Uno

When some people hear “Arduino”, they think of that little blue board that you can plug a 9 V battery into and start making stuff. While this is technically true, there’s a lot more to it than that.

  1. An Arduino Uno is just an AVR development board. AVRs are similar to PICs. When someone says “I used a PIC as the main processor”, does that mean they stuck the entire PIC development board into their project? Of course not. It’s the same with Arduino (in most cases), and design is done the same way as with any other microcontroller –
    • Use the development board to make, create and debug.
    • When ready, move the processor to your dedicated board.
  2. What makes an Arduino an “Arduino” and not just an AVR is the bootloader. Thus:
    • An Atmega328P is an AVR processor.
    • An Atmega328P with the Arduino bootloader is an Arduino.
  3. The bootloader allows you to program the AVR with the Arduino IDE. If you remove the bootloader from the AVR, you now have an AVR development board that can be programmed with AVR Studio using your preferred language.

There Is No Special Arduino Language

An Arduino “blink” sketch should run on any Arduino-compatible board.

Yes, I know they call them sketches, which is silly. But the fact is, it’s just C++ – the same C++ you’d use to program your PIC. The Arduino core wraps the hardware in simple functions, making it easy to code and giving Arduino its reputation for being easy to work with. But don’t let the “easy” fool you: they’re real C/C++ functions that get passed to a real C/C++ compiler, and any C/C++ construct will work in the Arduino IDE. With that said, if there is any negative attribute to Arduino, it is the IDE: it’s simplistic and has no debugger.

The strength comes from the standardization of the platform. You can adapt the Arduino standard to a board you have made, and that adaptation should allow the myriad libraries written for Arduino to work with your new piece of hardware. This is a powerful benefit of the ecosystem. At the same time, this ease of getting things up and running has produced a lot of the negative associations discussed previously.

So there you have it. Arduino is no different from any other microcontroller, and it is fully capable of being used in consumer products alongside PICs, ARMs, etc. To say otherwise is foolish.

What is the Virtue of Arduino in Consumer Products?

This is Ask Hackaday so you know there’s a question in the works. What is the virtue of Arduino in consumer products? Most electronics these days have a Device Firmware Upgrade (DFU) mode that allows the end user to upgrade the code, so Arduino doesn’t have a leg up there. One might argue that using Arduino means the code is Open Source and therefore ripe for community improvements but closed-source binaries can still be distributed for the platform. Yet there are many products out there that have managed to unlock the “community multiplier” that comes from releasing the code and inviting improvements.

What do you think the benefits of building consumer goods around Arduino are, what will the future look like, and how will we get there? Leave your thoughts below!

Inceptionism: Mind Blown by What Neural Nets Think They See

Dr. Robert Hecht-Nielsen, inventor of one of the first neurocomputers, defines a neural network as:

“…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”

These ‘processing elements’ are generally arranged in layers – where you have an input layer, an output layer and a bunch of layers in between. Google has been doing a lot of research with neural networks for image processing. They start with a network 10 to 30 layers thick. One at a time, millions of training images are fed into the network. After a little tweaking, the output layer spits out what they want – an identification of what’s in a picture.
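Those stacked layers amount to little more than repeated matrix multiplies with a nonlinearity in between. Here’s a toy sketch – the layer sizes and random weights are invented for illustration, and Google’s real networks are 10 to 30 trained layers deep, not three random ones:

```python
import numpy as np

# A minimal layered network: each layer is "weights times input,
# then a nonlinearity". The sizes below are made up.
rng = np.random.default_rng(42)
layer_sizes = [8, 16, 16, 4]   # input -> two hidden layers -> output
weights = [rng.standard_normal((m, n)) * 0.1
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input through every layer, keeping each layer's output."""
    activations = [x]
    for W in weights:
        x = np.maximum(0.0, W @ x)   # ReLU 'processing elements'
        activations.append(x)
    return activations

# One activation vector per layer: the input plus three transformed layers.
acts = forward(rng.standard_normal(8))
```

Keeping every layer’s output around, rather than just the final answer, is exactly what lets you peek at the intermediate representations – which is where the story below picks up.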

The layers have a hierarchical structure. The input layer will recognize simple line segments. The next layer might recognize basic shapes. The one after that might recognize simple objects, such as a wheel. The final layer will recognize whole structures, like a car for instance. As you climb the hierarchy, you transition from fast-changing low-level patterns to slow-changing high-level patterns. If this sounds familiar, we’ve talked about it before.

Now, none of this is new and exciting. We all know what neural networks are and do. What is going to blow your mind, however, is a simple question Google asked, and the resulting answer. To better understand the process, they wanted to know what was going on in the inner layers. They fed the network a picture of a truck, and out came the word “truck”. But they didn’t know exactly how the network came to its conclusion. To answer this question, they showed the network an image and then extracted what the network was seeing at different layers in the hierarchy – sort of like putting a Serial.print in your code to see what it’s doing.

They then took the results and had the network enhance what it thought it detected. Lower levels would enhance low-level features, such as lines and basic shapes. The higher levels would enhance actual structures, such as faces and trees. This technique gives them the level of abstraction for different layers in the hierarchy and reveals the network’s primitive understanding of the image. They call this process inceptionism.
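The enhancement step can be mimicked in miniature: pick one unit in a network and nudge the *input* by gradient ascent until that unit fires strongly. This is a toy analogue of inceptionism, not Google’s actual pipeline – the tiny random network and all sizes here are invented for illustration:

```python
import numpy as np

# A made-up two-layer network; real inceptionism runs on deep,
# trained image networks.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((6, 10)) * 0.3
W2 = rng.standard_normal((3, 6)) * 0.3

def unit_activation(x, unit=0):
    """How strongly one chosen output unit responds to input x."""
    h = np.tanh(W1 @ x)
    return (W2 @ h)[unit]

x = rng.standard_normal(10) * 0.01   # start from a near-noise "image"
before = unit_activation(x)
for _ in range(200):
    # Gradient of unit 0's activation w.r.t. the input, by the chain
    # rule through tanh: W1^T ((1 - h^2) * W2[0]).
    h = np.tanh(W1 @ x)
    grad = W1.T @ ((1 - h**2) * W2[0])
    x += 0.1 * grad                  # ascend: exaggerate what unit 0 'sees'
after = unit_activation(x)
# 'after' exceeds 'before': the input now amplifies whatever
# pattern unit 0 responds to, which is the essence of inceptionism.
```

On a trained image network the same loop, run at different layers, produces the line-and-shape enhancements at low levels and the faces-and-trees hallucinations at high levels.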


Be sure to check out the gallery of images produced by the process. Some have called the images dream-like, hallucinogenic, and even disturbing. Does this process reveal the inner workings of our mind? After all, our brains are indeed neural networks. Has Google unlocked the mind’s creative process? Or is this just a neat way to make computer-generated abstract art?

So here comes the big question: is it the computer choosing these end-product photos, or a Google engineer pawing through thousands (or orders of magnitude more) of them to find the ones we will all drool over?

Ask Hackaday (And Adafruit): The New CEO Of MakerBot

Just a few years ago, MakerBot was the darling of the Open Hardware community. Somehow, in the middle of a garage in Brooklyn, a trio of engineers and entrepreneurs became a modern-day Prometheus, capturing a burgeoning technology into a compact, easy to use, and intoxicating product. A media darling was created, a disruptive technology was popularized, and an episode of the Colbert Report was taped.

The phrase ‘meteoric rise’ doesn’t make sense, and since then the reputation of MakerBot has fallen through the floor, crashed through the basement, and is now lodged in one of the higher circles of hell. It’s not surprising; MakerBot took creations from their 3D object hosting site, Thingiverse, and patented them. The once-Open Source line of 3D printers was locked up behind a closed license. The new MakerBot extruder – the Smart Extruder – is so failure prone MakerBot offers a three pack, just so you’ll always have a replacement on hand. False comparisons to Apple abound; Apple contributes to Open Source projects. The only other way for a company to lose the support of the community built around it so quickly would be a name change to Puppy Kickers, LLC.

In the last few months, figurehead CEO of MakerBot [Bre Pettis] was released from contractual obligations, and MakerBot’s parent company, Stratasys, has filled the executive ranks with more traditional business types. It appears PR and Marketing managers have noticed the bile slung at their doorstep, and now MakerBot is reaching out to the community. Their new CEO, [Jonathan Jaglom], specifically requested that a hot seat be built at Adafruit for an open discussion and listening meeting. Yes, this means MakerBot is trying to get back on track, winning the hearts and minds of potential customers, and addressing issues Internet forums repeat ad nauseam.

If you’ve ever wanted to ask a CEO how they plan to stop screwing things up, this is your chance. Adafruit is looking for some direction for their interview/listening meeting, and they’re asking the community for the most pressing issues facing the 3D printing community, the Open Source community, and MakerBot the company.

Already on the docket are questions about MakerBot and Open Source, MakerBot’s desire to put DRM in filament, the horrors of the Smart Extruder and the 5th generation MakerBots, problems with Thingiverse, and the general shitty way MakerBot treats its resellers.

This isn’t all Adafruit wants to ask; the gloves are off, nothing is off the table, and they’re looking for questions from the community. What would you like to ask the MakerBot CEO?

Personally, the best interview questions are when the interviewee’s own words are turned around on them. By [Jonathan Jaglom]’s own admission, the barrier to entry for 3D design work has been substantially lowered in the last three years, ostensibly because of incredible advances in Open Source projects. Following this, do MakerBot and Stratasys owe a debt to Open Source projects, and should Stratasys contribute to the rising tide of Open Source development?

That’s just one question. There will, of course, be many more. Leave them down in the comments. “You are not [Tim Cook],” while a valid statement in many respects, is not a question.