Meet Blue Jay, The Flying Drone Pet Butler

Twenty students at the Eindhoven University of Technology (TU/e) in the Netherlands share one vision of the future: the fully domesticated drone pet – a flying friend that helps you whenever you need it and, in general, is very, very cute. Their drone “Blue Jay” is packed with sensors, has a strong claw for grabbing and carrying cargo, navigates autonomously indoors, and interacts with humans at eye level.

Continue reading “Meet Blue Jay, The Flying Drone Pet Butler”

Impressive StarCraft 2 AI More Fair To Fleshy Opponents

There was a discussion in the comments when the AlphaGo results were released. Some commenters postulated that AI researchers are discounting more fluid games such as the RTS StarCraft.

The comments then devolved into a discussion of what would make an AI fair to pit against a human player. Many times, AIs in RTS games win because they have direct access to the variables in the game. Rather than physically looking at the small area of the screen where a unit is located and then moving their eyes to take in strategic information like exact location, health, and unit level, the AI just knows, instantly, that the unit is at 120x, 2000y, 76% health, lvl5, and so on. The AI also has no click lag, since it gets direct access to the game’s API; it simply changes the variables and action queue of a unit directly.

So we were interested to see [Matt]’s StarCraft AI, which requires the computer to actually look at the game board and click. [Matt]’s AI doesn’t see using OpenCV, which in its own way would force the computer to look in a manner that’s unnatural to it. Instead, he wrote some code to intercept the behind-the-scenes calls to the DirectX library.

The computer is then able to make determinations about what it is looking at using the texture information and other data sent to the library. Unlike AIs that get a direct look at the variables, it has to translate all of this and keep its own mental picture of the map and the situation. If a building is destroyed, for example, it has to go over and look at that part of the map, test what it’s seeing against a control, and then remove the building from its list.
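To make that concrete, here’s a minimal sketch of the “mental map” idea in Python. Everything in it – the texture IDs, the DrawCall structure, the class names – is invented for illustration and is not from [Matt]’s code:

```python
# Hypothetical sketch: the bot keeps its own model of the world and only
# updates a region's beliefs when it actually looks at that region.
from collections import namedtuple

DrawCall = namedtuple("DrawCall", "texture_id x y")  # one intercepted draw call

# Pretend texture handles the bot has learned to recognize as buildings.
KNOWN_BUILDINGS = {0x3A2F: "barracks", 0x51C0: "supply depot"}

class WorldModel:
    def __init__(self):
        self.buildings = {}               # (x, y) -> building type (beliefs)
        self.viewport = (0, 0, 640, 480)  # map region currently on screen

    def in_view(self, pos):
        x0, y0, x1, y1 = self.viewport
        return x0 <= pos[0] < x1 and y0 <= pos[1] < y1

    def observe(self, frame_calls):
        """Reconcile beliefs with the textures drawn this frame."""
        seen = {(c.x, c.y): KNOWN_BUILDINGS[c.texture_id]
                for c in frame_calls if c.texture_id in KNOWN_BUILDINGS}
        # A building we believed was in view but can no longer see is gone.
        for pos in [p for p in self.buildings if self.in_view(p) and p not in seen]:
            del self.buildings[pos]
        self.buildings.update(seen)

model = WorldModel()
model.observe([DrawCall(0x3A2F, 100, 200)])  # bot looks, sees a barracks
model.observe([])                            # looks again: it has been destroyed
print(model.buildings)                       # {}
```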

The AI’s one big advantage is its robot fingers. Even though this AI has to click on the interface, it doesn’t do it with a weak, articulated, fleshy nub like the rest of us. This allows the AI to reach crazy Actions Per Minute (APM) figures in the range of 500 to 2000.

The AI has only been tested against StarCraft’s built-in cheater bots. So far it can win most games against the hard-level bots. If you want to see a video of what the AI is looking at, check after the break.

Continue reading “Impressive StarCraft 2 AI More Fair To Fleshy Opponents”

A Short History Of AI, And Why It’s Heading In The Wrong Direction

Sir Winston Churchill often spoke of World War 2 as the “Wizard War”. Both the Allies and Axis powers were in a race to gain the electronic advantage over each other on the battlefield. Many technologies were born during this time – one of them being the ability to decipher coded messages. The devices that were able to achieve this feat were the precursors to the modern computer. In 1946, the US military unveiled ENIAC, the Electronic Numerical Integrator And Computer. Using over 17,000 vacuum tubes, ENIAC was a few orders of magnitude faster than all previous electro-mechanical computers. The part that excited many scientists, however, was that it was programmable. It was the notion of a programmable computer that would give rise to the idea of artificial intelligence (AI).

As time marched forward, computers became smaller and faster. The invention of the transistor gave rise to the microprocessor, which accelerated the development of computer programming. AI began to pick up steam, and pundits began to make grand claims that computer intelligence would soon surpass our own. Programs like ELIZA and Blocks World fascinated the public, and certainly gave the perception that when computers became faster, as they surely would, they would be able to think like humans do.

But it soon became clear that this would not be the case. While these and many other AI programs were good at what they did, neither they nor their algorithms were adaptable. They were ‘smart’ at their particular task, and could even be considered intelligent judging from their behavior, but they had no understanding of the task, and didn’t hold a candle to the intellectual capabilities of even a typical lab rat, let alone a human.

Continue reading “A Short History Of AI, And Why It’s Heading In The Wrong Direction”

Who Is Responsible When Machines Kill?

This morning I want you to join me in thinking a few paces into the future. This exercise lets us discuss some hard questions about automation technology. I’m not talking about thermostats, porch lights, and coffee makers. The things that we really need to think about are the machines that can cause harm, like self-driving cars. Recently we looked at the ethics behind decisions made by those cars, but that is really just the tip of the iceberg.

A large chunk of technology is driven by military research (the Internet, the space race, bipedal robotics, even autonomous vehicles through the DARPA Grand Challenge). It’s easy to imagine that some of the first sticky ethical questions will come from military autonomy and unfortunate accidents.

Continue reading “Who Is Responsible When Machines Kill?”

Gmail One Step Closer To Human Enslavement

Apply some lessons learned in Sci-Fi literature and you’ll come to the same realization I have: Google is going to unknowingly enslave humanity to an artificial intelligence.

I read a lot of science fiction. Generally, the future of technology can be found in great novels if you read between the lines. One of my favorites in this regard is, of course, [Neal Stephenson] who writes cripplingly long books that are totally worth the read due to his brand of fact-backed forward thinking. Look back on my posts here at Hackaday and you’ll see that I frequently apply concepts from his book The Diamond Age to what we see in emerging technology.

Last year my friend [Nils] suggested I give [William Hertling] a try, specifically his Singularity series, which starts with the novel Avogadro Corp. The fictional company is the world leader in free email and data storage. Sound like someone we know? One of the research projects within the company is an email plugin called ELOPe that will parse all past communications and choose topics and phrases that have the highest probability of eliciting a positive response from the recipient. When funding for the project is threatened, the system is turned on. I’d like to avoid spoilers, but let’s just say this puts the system on a path toward enslaving society.
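For a taste of the premise, here’s a back-of-the-napkin Python sketch of the ELOPe idea: score canned replies by how often similar phrasing historically drew a positive response, and suggest the top scorers. The corpus and numbers below are invented, and the fictional system (to say nothing of Google’s real one) is vastly more sophisticated:

```python
import re

# Invented history: canned reply -> fraction of past uses that drew a
# positive response. A real system would learn this from the whole corpus.
POSITIVE_RATE = {
    "Sounds great, let's do it!": 0.81,
    "Can we push this to next week?": 0.44,
    "I'll take a look and get back to you.": 0.63,
}

def tokens(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def suggest_replies(incoming, k=2):
    """Rank canned replies by historical positive-response rate, nudged
    toward replies that share vocabulary with the incoming email."""
    words = tokens(incoming)
    def score(item):
        phrase, rate = item
        return rate + 0.05 * len(words & tokens(phrase))
    ranked = sorted(POSITIVE_RATE.items(), key=score, reverse=True)
    return [phrase for phrase, _ in ranked[:k]]

print(suggest_replies("Want to grab lunch Thursday? Let me know."))
# -> ["Sounds great, let's do it!", "I'll take a look and get back to you."]
```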

Google is now boasting “Machine Intelligence for You”. It’s a research project based around Gmail called Inbox. Inbox has been around for a while, but the newly announced feature is an algorithm that reads your email for you and suggests a set of responses. Compared to Avogadro Corp, this is only missing two things: the ability to respond automatically, and the directive to protect itself at all costs.

One of the things I liked best about [William Hertling’s] take on an artificial intelligence was the low-key nature of the entity. It wasn’t a super-high-level thinker that interacted just like a human would. It was a poor choice by one programmer that led to horrible and far-reaching unintended consequences. No, I don’t really think Google’s Inbox will enslave us. But I appreciate the irony of life imitating art.

[via PopSci]

The Machine That Japed: Microsoft’s Humor-Emulating AI

Ten years ago, highbrow culture magazine The New Yorker started a contest. Each week, a cartoon with no caption is published in the back of the magazine. Readers are encouraged to submit an apt and hilarious caption that captures the magazine’s infamous wit. Editors select the top three entries to vie for reader votes and the prestige of having captioned a New Yorker cartoon.

The magazine receives about 5,000 submissions each week, which are scrutinized by cartoon editor [Bob Mankoff] and a parade of assistants who burn out after a year or two. But soon, [Mankoff]’s assistants may have their own assistant, thanks to Microsoft researcher [Dafna Shahaf].

[Dafna Shahaf] heard [Mankoff] give a speech about the New Yorker cartoon archive a year or so ago, and it got her thinking about what that vast collection could offer artificial intelligence research. The intricate nuances of humor and wordplay have long presented a special challenge to AI creators. [Shahaf] wondered: could computers begin to learn what makes a caption funny, given a big enough canon?

[Shahaf] threw ninety years’ worth of wry, one-panel humor at the system. Given this knowledge base, she trained it to choose funny captions for cartoons based on the jokes of similar cartoons. But in order to help [Mankoff] and his assistants choose among the entries, the AI must be able to rank the comedic value of jokes. And since computer vision software is made to decipher photos rather than drawings, [Shahaf] and her team faced another task: assigning keywords to each cartoon. The team described each one in terms of its contextual anchors and, subsequently, its situational anomalies. For example, in the image above, the context keywords could be car dealership, car, customer, and salesman. Anomalies might include claws, fangs, and zoomorphic automobile.
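As a rough illustration of that approach – our toy reconstruction, not [Shahaf]’s actual model, with an invented archive and scoring – a candidate caption could be scored by its overlap with the winning jokes of cartoons whose context and anomaly keywords are similar:

```python
import re

# Invented mini-archive: keywords plus the caption that won for each cartoon.
ARCHIVE = [
    {"context": {"car dealership", "car", "customer", "salesman"},
     "anomaly": {"claws", "fangs"},
     "winner": "It runs on raw meat."},
    {"context": {"office", "boss", "desk"},
     "anomaly": {"wolf"},
     "winner": "He's a real animal in negotiations."},
]

def jaccard(a, b):
    """Overlap between two keyword sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def words(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def score_caption(caption, context, anomaly):
    """Weight a candidate by its word overlap with the winning captions of
    cartoons whose context/anomaly keywords resemble this one's."""
    total = 0.0
    for cartoon in ARCHIVE:
        sim = jaccard(context, cartoon["context"]) + jaccard(anomaly, cartoon["anomaly"])
        total += sim * len(words(caption) & words(cartoon["winner"]))
    return total

ctx = {"car dealership", "car", "salesman"}
anom = {"claws", "fangs"}
candidates = ["It runs on raw meat and regret.", "Nice weather we're having."]
print(max(candidates, key=lambda c: score_caption(c, ctx, anom)))
# -> "It runs on raw meat and regret."
```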

The result is about the best that could be hoped for, if one is being realistic. All of the cartoon editors’ chosen winners showed up in the AI’s top 55.8%, which means the AI could safely weed out the bottom 44.2% of submissions – just under half of the truly bad entries – without ever discarding a winner. While [Mankoff] sees the study’s results as a positive thing, he’ll continue to hire assistants for the foreseeable future.

Humor-enabled AI may still be in its infancy, but the implications of the advancement are already great. To give personal assistants like Siri and Cortana a funny bone is to make them that much more human. But is that necessarily a good thing?

[via /.]

Ask Hackaday: Not Your Mother’s Feedback

Imagine you were walking down a beach, and you came across some driftwood resting against a pile of stones. You see it in the distance, and your brain has no trouble figuring out what you’re looking at. You see driftwood and rocks – you can clearly distinguish between the two objects without a second thought.

Think about the raw data entering the brain. The textures of the rocks and the driftwood are similar. The colors are similar. The irregular shapes are similar. Thus the raw data entering the brain’s V1 area for both objects must be similar as well. Now think about the borders that separate the pieces of driftwood from the edges of the rocks. From a raw-data perspective, there is no border, and likewise no separation, because the two objects are so similar. Yet your brain can clearly see a rock and a piece of driftwood – two distinctly different objects. So how does the brain do this? How does it so easily differentiate between the two? If the raw data on either side of the border separating the wood and the rocks is the same, then there must be an outside influence determining where that border is. [Jeff Hawkins] believes this outside influence is a very special and most interesting type of feedback. Read on as we explain and attempt to implement this form of feedback in our hierarchical structure of invariant representations.
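As a teaser of where this goes, here’s a deliberately tiny Python sketch of the flavor of such feedback – invented numbers and a toy “higher layer”, not [Jeff Hawkins]’ actual HTM algorithms. Bottom-up evidence alone can’t place the wood/rock border, but a higher layer’s invariant expectation (one wood region, then one rock region) feeds back down and settles the ambiguous patches:

```python
# P(wood) for each image patch from texture/color alone. Values near 0.5
# are the ambiguous patches at the border. All numbers are invented.
evidence = [0.9, 0.8, 0.55, 0.48, 0.2, 0.1]

def predict_border(evidence):
    """Higher layer: best-fit a single wood-then-rock split to the evidence."""
    best_split, best_fit = 0, float("-inf")
    for s in range(len(evidence) + 1):
        fit = sum(evidence[:s]) + sum(1 - e for e in evidence[s:])
        if fit > best_fit:
            best_split, best_fit = s, fit
    return best_split

def label_with_feedback(evidence):
    split = predict_border(evidence)          # top-down prediction...
    return ["wood" if i < split else "rock"   # ...settles ambiguous patches
            for i in range(len(evidence))]

print(label_with_feedback(evidence))
# -> ['wood', 'wood', 'wood', 'rock', 'rock', 'rock']
```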

Continue reading “Ask Hackaday: Not Your Mother’s Feedback”