Wing Opens The Skies For Drones With UTM

Yesterday Alphabet (formerly known as Google) announced that its Wing project is launching a drone delivery service in Finland, specifically in a part of Helsinki. This comes more than a month after a similar pilot program started in North Canberra, Australia. Rather than the traditional quadcopter layout, Wing has opted for a hybrid plane/helicopter design: two big propellers for forward motion, plus a dozen small propellers on top of the dual-body airframe, presumably to maximize range while still allowing the craft to hover.

With a weight of 5 kg and a wingspan of about a meter, Wing’s drones are capable of lifting and carrying a payload of about 1.5 kg. This puts them in a category far beyond what hobbyists tend to fly on a regular basis, and worse, it involves Beyond Visual Line Of Sight (BVLOS) flying, which is frowned upon by the FAA and similar regulatory bodies. What Alphabet figures can make this kind of service a commercial reality is called Unmanned aircraft system Traffic Management (UTM).

UTM is essentially complementary to existing air traffic control systems, allowing drones to integrate into the flows of manned aircraft without endangering either. Over the past few years, it has been part of NASA’s duty to develop the systems and infrastructure required to make UTM a reality. Working together with the FAA and companies such as Amazon and Alphabet, the hope is that before long it’ll be as normal to send a drone into the skies for deliveries and more as it is today to have passenger and cargo planes with human pilots take to the skies.

Make Cars Safer By Making Them Softer

Would making autonomous vehicles softer make them safer?

Alphabet’s self-driving car offshoot, Waymo, feels that may be the case as they were recently granted a patent for vehicles that soften on impact. Sensors would identify an impending collision and adjust ‘tension members’ on the vehicle’s exterior to cushion the blow. These ‘members’ would be corrugated sections or moving panels that absorb the impact alongside the crumpling effect of the vehicle, making adjustments based on the type of obstacle the vehicle is about to strike.

Continue reading “Make Cars Safer By Making Them Softer”

Red Bricks: Alphabet To Turn Off Revolv’s Lights

Revolv, the bright red smart home hub famous for its abundance of radio modules, has finally been declared dead by its founders. After a series of acquisitions, Google’s parent company Alphabet has gained control over Revolv’s cloud service – and they are shutting it down.

Customers who bought into Revolv’s vision of a truly connected and automated smart home hub featuring 7 different physical radio modules to connect all their devices will soon become owners of significantly less useful red bricks due to the complete shutdown of the service on May 15, 2016.
Continue reading “Red Bricks: Alphabet To Turn Off Revolv’s Lights”

Ask Hackaday: Google Beat Go; Bellwether Or Hype?

We wake up this morning to the news that Google’s deep-learning Go project, AlphaGo, has beaten the second-ranked Go player in the world (who happens to be a human being). This is the first of five matches between the two adversaries that will play out this week.

On one hand, this is a sign of maturing technology. It has been almost twenty years since Deep Blue beat Garry Kasparov, the reigning chess world champion at the time. Although there are still four games to play against Lee Sedol, it was recently reported that AlphaGo beat European Go champion Fan Hui in five games straight. Go is generally considered a more difficult game for machine minds to play than chess, because Go has a much larger pool of possible moves at any given time.

Does This Matter?

Okay, the news part of this event has been covered: machine beats man. Does it matter? Will this affect your life and how? We want to hear what you think in the comments below. But I’m going to keep going with some of my thoughts on the topic.

You’re still better at Ms. Pacman [Source: DeepMind paper in Nature]
Let’s look first at what AlphaGo did to win. At its core, the game of Go is won by figuring out where your opponent will likely make a low-percentage move and then capitalizing on that choice. Know Your Enemy has been a tenet of strategy for a few millennia now and it holds true in the digital age. In addition to the rules of the game, AlphaGo was fed a healthy diet of 30 million positions from expert games. This builds behavior recognition into the system. Not just what moves can be made, but what moves are most likely to be made.
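The idea of learning “what moves are most likely to be made” from expert games can be illustrated with a toy sketch. This is emphatically not AlphaGo’s actual policy network (which is a deep neural net trained on board images); it just shows the shape of the task — observe (position, move) pairs from experts, then predict the most probable expert move. The position labels here are invented for the example.

```python
from collections import Counter, defaultdict

class MovePredictor:
    """Toy expert-move predictor: counts which moves experts
    actually played in each position, then predicts the most
    frequent one. A stand-in for a learned policy."""

    def __init__(self):
        # position -> Counter of moves seen in that position
        self.counts = defaultdict(Counter)

    def observe(self, position, move):
        self.counts[position][move] += 1

    def most_likely_move(self, position):
        moves = self.counts.get(position)
        if not moves:
            return None  # never seen this position
        return moves.most_common(1)[0][0]

# Hypothetical "expert games" reduced to (position, move) pairs
predictor = MovePredictor()
expert_games = [("corner-open", "D4"), ("corner-open", "D4"),
                ("corner-open", "Q16"), ("mid-fight", "F3")]
for pos, mv in expert_games:
    predictor.observe(pos, mv)

print(predictor.most_likely_move("corner-open"))
```

AlphaGo’s training set of 30 million positions plays the role of `expert_games` here, with a neural network generalizing across positions instead of a lookup table.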

DeepMind, the company behind AlphaGo which was acquired by Google in 2014, has published a paper in Nature about their approach. They were even nice enough to let us read without dealing with a paywall. The secret sauce is the learning process, which at its core tries to mimic how living entities learn: observe repetitively while assigning values to outcomes. This is key as it leads past “intellect” to “intelligence” (the “I” in AI that everyone seems to be waiting for). But this is a bastardized version of “intelligence”. AlphaGo is able to recognize and predict behavior, then make choices that lead to a desired outcome. This is more than intellect, as it does weigh the purpose behind an opponent’s decisions. But it falls short of intelligence, as AlphaGo doesn’t consciously understand the purpose it has detected. In my mind this is exactly what we need. Truly successful machine learning will be able to make sense out of sometimes irrational input.
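“Observe repetitively while assigning values to outcomes” can be boiled down to a classic temporal-difference-style value update. The sketch below is a deliberately minimal illustration of that idea, not DeepMind’s actual algorithm: each repeated observation nudges a stored value estimate toward the observed outcome.

```python
values = {}   # state -> estimated value of that state
alpha = 0.5   # learning rate: how far each observation moves the estimate

def update(state, reward):
    """Nudge the value estimate for `state` toward the observed reward."""
    v = values.get(state, 0.0)
    values[state] = v + alpha * (reward - v)

# Repetition is what does the work: twenty observations pull the
# estimates very close to the true outcomes (+1 and -1 here).
for _ in range(20):
    update("good-position", 1.0)
    update("bad-position", -1.0)

print(round(values["good-position"], 3), round(values["bad-position"], 3))
```

After n observations the estimate sits at 1 − 0.5ⁿ of the way from zero to the true value, which is why repetitive observation, not any single example, is what produces the learned behavior.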

The paper in Nature doesn’t go into details about Go, but it explains the approach of the learning system as applied to Atari 2600 games. The algorithm was given 210×160 color video at 60 Hz as an input and told it could use a joystick with one button. From there it taught itself to play 49 games. It was not told the purpose or the rules of the games, but it was given examples of scores from human performance and rewarded for its own quality performances. The chart above shows that it learned to play 29 of them at or above human skill levels.
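The remarkable part is learning from the score signal alone, with no rules given. Here’s a massively simplified sketch of that setup (a tiny epsilon-greedy reward learner, not DeepMind’s deep Q-network): the agent sees only a reward after each action, explores occasionally, and converges on whatever the hidden “game” pays for. The `hidden_score` function stands in for the emulator’s score and is invented for this example.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
ACTIONS = ["left", "right", "fire"]
value = {a: 0.0 for a in ACTIONS}  # running average reward per action
count = {a: 0 for a in ACTIONS}

def hidden_score(action):
    # Stands in for the game's score signal; the agent never
    # sees this rule, only the rewards it returns.
    return 1.0 if action == "fire" else 0.0

for step in range(300):
    if random.random() < 0.1:            # explore 10% of the time
        a = random.choice(ACTIONS)
    else:                                # otherwise exploit the best so far
        a = max(ACTIONS, key=lambda x: value[x])
    r = hidden_score(a)
    count[a] += 1
    value[a] += (r - value[a]) / count[a]  # update running average

best = max(ACTIONS, key=lambda x: value[x])
print(best, {a: round(v, 2) for a, v in value.items()})
```

The Atari system replaces this three-entry table with a deep network over raw pixels, but the loop is the same in spirit: act, observe the score, and shift value estimates toward what actually paid off.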

Continue reading “Ask Hackaday: Google Beat Go; Bellwether Or Hype?”