The Machine that Japed: Microsoft’s Humor-Emulating AI

Ten years ago, highbrow culture magazine The New Yorker started a contest. Each week, a cartoon with no caption is published in the back of the magazine. Readers are encouraged to submit an apt and hilarious caption that captures the magazine’s famous wit. Editors select the top three entries to vie for reader votes and the prestige of having captioned a New Yorker cartoon.

The magazine receives about 5,000 submissions each week, which are scrutinized by cartoon editor [Bob Mankoff] and a parade of assistants who burn out after a year or two. But soon, [Mankoff]’s assistants may have an assistant of their own, thanks to Microsoft researcher [Dafna Shahaf].

[Dafna Shahaf] heard [Mankoff] give a speech about the New Yorker cartoon archive a year or so ago, and it got her thinking about what artificial intelligence might do with such a vast collection. The intricate nuances of humor and wordplay have long presented a special challenge to AI researchers. [Shahaf] wondered: given a big enough canon, could computers begin to learn what makes a caption funny?

[Shahaf] threw ninety years’ worth of wry, one-panel humor at her system. Given this knowledge base, she trained it to choose funny captions for cartoons based on the jokes of similar cartoons. But in order to help [Mankoff] and his assistants choose among the entries, the AI must be able to rank the comedic value of jokes. And since computer vision software is made to decipher photos rather than drawings, [Shahaf] and her team faced another task: assigning keywords to each cartoon. The team described each one in terms of its contextual anchors and then its situational anomalies. For one cartoon of a dealership selling a car with claws and fangs, the context keywords could be car dealership, car, customer, and salesman, while the anomalies might include claws, fangs, and zoomorphic automobile.
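To make that concrete, here’s a minimal sketch of how keyword overlap could rank candidate captions. The keywords and captions are made up, and this is our own toy illustration, not [Shahaf]’s actual model:

```python
# Toy caption ranker: score captions against a cartoon's keywords.
# Heuristic: good jokes reference the anomaly (the weird part) while
# staying grounded in the context.

def score_caption(caption, context_words, anomaly_words):
    words = set(caption.lower().split())
    context_hits = len(words & context_words)
    anomaly_hits = len(words & anomaly_words)
    # Weight anomaly references more heavily than context references.
    return 2 * anomaly_hits + context_hits

context = {"car", "dealership", "customer", "salesman"}
anomaly = {"claws", "fangs", "bites"}

captions = [
    "Low mileage, one owner, and it only bites strangers.",
    "The sedan is on sale this week.",
]
for c in sorted(captions, key=lambda c: -score_caption(c, context, anomaly)):
    print(score_caption(c, context, anomaly), c)
```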

The result is about the best that could be hoped for, if one were being realistic. All of the cartoon editors’ chosen winners showed up in the AI’s top 55.8%, which means the AI could safely weed out the bottom 44.2% of entries for [Mankoff and Co.] without discarding a winner. While [Mankoff] sees the study’s results as a positive thing, he’ll continue to hire assistants for the foreseeable future.

Humor-enabled AI may still be in its infancy, but the implications of the advancement are already great. To give personal assistants like Siri and Cortana a funny bone is to make them that much more human. But is that necessarily a good thing?

[via /.]

Inceptionism: Mind Blown by What Neural Nets Think They See

Dr. Robert Hecht-Nielsen, inventor of one of the first neurocomputers, defines a neural network as:

“…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”

These ‘processing elements’ are generally arranged in layers – where you have an input layer, an output layer and a bunch of layers in between. Google has been doing a lot of research with neural networks for image processing. They start with a network 10 to 30 layers thick. One at a time, millions of training images are fed into the network. After a little tweaking, the output layer spits out what they want – an identification of what’s in a picture.
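If the layer cake is hard to picture, here’s a minimal sketch of a forward pass through a stack of layers. The sizes are arbitrary and the random weights stand in for what training would actually learn:

```python
# Minimal forward pass through a layered network (toy sizes).
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [784, 128, 64, 10]  # input layer, two hidden layers, output layer

# Random weight matrices stand in for trained parameters.
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Push an input through each layer in turn, keeping every activation."""
    activations = [x]
    for w in weights:
        x = np.maximum(0, x @ w)  # ReLU nonlinearity between layers
        activations.append(x)
    return activations

image = rng.standard_normal(784)  # stand-in for a flattened input image
acts = forward(image)
print([a.shape for a in acts])    # [(784,), (128,), (64,), (10,)]
```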

The layers have a hierarchical structure. The input layer will recognize simple line segments. The next layer might recognize basic shapes. The one after that might recognize simple objects, such as a wheel. The final layer will recognize whole structures, like a car, for instance. As you climb the hierarchy, you transition from fast-changing low-level patterns to slow-changing high-level patterns. If this sounds familiar, it’s because we’ve talked about it before.

Now, none of this is new and exciting. We all know what neural networks are and do. What is going to blow your mind, however, is a simple question Google asked, and the resulting answer. To better understand the process, they wanted to know what was going on in the inner layers. Feed the network a picture of a truck, and out comes the word “truck” – but they didn’t know exactly how the network came to its conclusion. To answer this question, they showed the network an image and then extracted what the network was seeing at different layers in the hierarchy. Sort of like putting a Serial.print in your code to see what it’s doing.
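In code, that Serial.print trick can be as simple as a forward hook that stashes a hidden layer’s output as the image passes through. Here’s a toy sketch using PyTorch hooks and a tiny stand-in model, not Google’s actual network:

```python
# Peek at a hidden layer with a PyTorch forward hook.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 10),
)

captured = {}

def grab(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()  # stash what this layer "saw"
    return hook

# Tap the second convolutional layer (index 2 in the Sequential).
model[2].register_forward_hook(grab("conv2"))

image = torch.randn(1, 3, 32, 32)  # stand-in for a photo of a truck
logits = model(image)
print(captured["conv2"].shape)     # torch.Size([1, 16, 32, 32])
```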

They then took the results and had the network enhance what it thought it detected. Lower layers would enhance low-level features, such as lines and basic shapes. The higher layers would enhance actual structures, such as faces and trees. This technique shows the level of abstraction at work in different layers of the hierarchy and reveals the network’s primitive understanding of the image. They call this process inceptionism.
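The enhancement step boils down to gradient ascent on the input image: nudge the pixels in whatever direction makes a chosen layer fire harder. A bare-bones sketch follows; the tiny untrained model here is a placeholder, since the striking inceptionism images come from large trained networks:

```python
# Gradient ascent on the input image to amplify a layer's activations.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the image gets updated

image = torch.rand(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    activations = model(image)
    loss = -activations.norm()  # maximize activation = minimize its negative
    loss.backward()
    optimizer.step()
# "image" has now been nudged toward whatever this layer responds to.
```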


Be sure to check out the gallery of images produced by the process. Some have called the images dreamlike, hallucinogenic, and even disturbing. Does this process reveal the inner workings of our mind? After all, our brains are indeed neural networks. Has Google unlocked the mind’s creative process? Or is this just a neat way to make computer-generated abstract art?

So here comes the big question: is it the computer choosing these end-product photos, or a Google engineer pawing through thousands (or orders of magnitude more) to find the ones we will all drool over?

Hackaday Prize Entry: A Medical Tricorder

We have PADDs, fusion power plants are less than 50 years away, and we’re working on impulse drives. We’re all working very hard to make the Star Trek galaxy a reality, but there’s one thing missing: medical tricorders. [M. Bindhammer] is working on such a device for his Hackaday Prize entry, and he’s doing it in a way that isn’t just a bunch of pulse oximeters and gas sensors. He’s putting intelligence in his medical tricorder so it can diagnose patients.

In addition to syringes, sensors, and electronics, a lot of [M. Bindhammer]’s work revolves around diagnosing illness according to symptoms. As cool as sensors and electronics are, the diagnostic capability of the Medical Tricorder is really the most interesting application of technology here. Back in the ’60s and ’70s, a lot of artificial intelligence work went into expert systems and the medical applications of this very rudimentary form of AI. There’s a reason ER docs don’t use expert systems to diagnose illness: the computers were too good at it and MDs have egos. Dozens of studies have shown a well-designed expert system is more accurate at making a diagnosis than a doctor.
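If you’ve never met one, an expert system is little more than a pile of if-then rules and an inference loop. Here’s a toy sketch in that spirit; the rules are invented for illustration and are emphatically not medical advice, nor [M. Bindhammer]’s actual diagnostic engine:

```python
# Toy rule-based expert system: fire every rule whose conditions hold.
RULES = [
    ({"fever", "cough", "body aches"}, "influenza (possible)"),
    ({"fever", "stiff neck", "headache"}, "meningitis (urgent referral)"),
    ({"sneezing", "runny nose"}, "common cold (likely)"),
]

def diagnose(symptoms):
    symptoms = set(symptoms)
    return [conclusion for conditions, conclusion in RULES
            if conditions <= symptoms]  # all conditions present?

print(diagnose({"fever", "cough", "body aches", "headache"}))
# ['influenza (possible)']
```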

While the bulk of the diagnostic capability relies on math, stats, and other extraordinarily non-visual stuff, he’s also doing a lot of work on hardware. There’s a spectrophotometer and an impeccably well-designed micro reaction chamber. This is hardcore stuff, and we can’t wait to see the finished product.

As an aside, see how [M. Bindhammer]’s project has a lot of neat LaTeX equations? You’re welcome.


Ask Hackaday: A Robot’s Black Market Shopping Spree

It was bad enough when kids first started running up cell phone bills with excessive text messaging. Now we’re living in an age where our robots can go off and binge shop on the Silk Road with our hard-earned bitcoins. What’s this world coming to? (/sarcasm)

For their project ‘Random Darknet Shopper’, Swiss artists [Carmen Weisskopf] and [Domagoj Smoljo] developed a computer program that was given 100 dollars in bitcoins and granted permission to lurk on the dark inter-ether and make purchases at its own discretion. Once a week, the AI would carry out a transaction and have the spoils sent back home to its parents in Switzerland. As the random items trickled in, they were photographed and put on display as part of the exhibition ‘The Darknet. From Memes to Onionland’ at Kunst Halle St. Gallen.

The random purchases they received aren’t all illegal, but they will all most definitely get you thinking… which is the point, of course. They include everything from a benign Lord of the Rings audio book collection to a knock-off Hungarian passport, as well as the things you’d expect from the black market, like baggies of ecstasy and a stolen Visa credit card. The project is meant to question current sanctions on trade and investigate the world’s reaction to those limitations. In spite of dabbling in a world of questionable ethics and hazy legitimacy, the artists note that of all the purchases made, not a single one turned out to be a scam.
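As far as we know the artists haven’t published their bot’s source, so here’s a purely speculative sketch of what the weekly routine might boil down to. The catalog and the buy() call are hypothetical stand-ins:

```python
# Speculative sketch: once a week, buy a random affordable listing.
import random
import time

WEEK = 7 * 24 * 3600
budget = 100.0  # dollars' worth of bitcoin

listings = [  # stand-in marketplace catalog
    {"item": "audio book set", "price": 12.0},
    {"item": "novelty passport", "price": 45.0},
    {"item": "mystery baggie", "price": 15.0},
]

def buy(listing):  # hypothetical purchase call
    print("ordered:", listing["item"], "for", listing["price"])

while budget > 0:
    affordable = [l for l in listings if l["price"] <= budget]
    if not affordable:
        break
    choice = random.choice(affordable)
    buy(choice)
    budget -= choice["price"]
    time.sleep(WEEK)  # nap until next week's shopping trip
```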

Though [Weisskopf] and [Smoljo] aren’t worried about being prosecuted for illegal activity, as Swiss law protects their right to freely express ideas publicly through art, the implications behind their exhibition did raise some questions along those lines. If your robot goes out and buys a bounty of crack of its own accord and then gives it to its owner, who is liable for having purchased the crack?

If a collection of code (we’ll loosely use the term AI here) is autonomous, acting independently of its creator’s control, should the creator still be held accountable for their creation’s intent? If the answer is ‘no’ and the AI is responsible for the repercussions, then we’re entering a time when it’s necessary to address AIs as separate, liable entities. However, if you can blame something on an AI, this suggests that it in some way has rights…

Before I get ahead of myself, though, this whole notion revolves around the idea of intent. Can we assign an artificial form of life the capacity to have intent?

Echo, the First Useful Home Computer Intelligence?

We’re familiar with features like Siri or Microsoft’s Cortana, which grope at a familiar concept from science fiction yet leave us doing silly things like standing in public yowling at our phones. Amazon took a new approach to the idea of an artificial steward by cutting the AI free from our peripherals and making it an independent unit that acts in the household like any other appliance. Instead of steering your starship, however, it can integrate with your devices via Bluetooth to aid in tasks like writing shopping lists, or simply help you remember how many quarts are in a liter. Whatever you ask for, Echo will oblige.

The device is little more than the internet and a speaker stuffed into a minimal black cylinder the size of a vase. Oh, and six far-field microphones aimed in every direction, which listen to every word you say… always. As you’d expect, Echo only processes what you say after you call it to attention by speaking its given name. If you happen to be too far away for the directional microphones to hear you, you can alternatively seek assistance from the Echo app on another device. Not bad for the freakishly low price Amazon’s asking: $100 for Prime subscribers. Even if you’re salivating over the idea of this chatting obelisk, or intrigued enough to buy one just to check it out (and pop its little seams), they’re only available for purchase by invite at the moment… the likes of which are said to go out in a few weeks.
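That wake-word gating, by the way, is conceptually simple: run a cheap check on everything locally, and only hand audio to the heavyweight recognizer once the name is heard. Here’s a toy sketch of the idea, where the transcript stream and the cloud call are stand-ins rather than Amazon’s actual pipeline:

```python
# Toy wake-word gate: ignore everything until the name is spoken.
WAKE_WORD = "alexa"

def transcripts():
    """Stand-in for a local, low-power speech recognizer."""
    yield from ["so anyway", "alexa", "how many quarts in a liter"]

def handle_in_cloud(utterance):  # hypothetical cloud round-trip
    print("sent to cloud:", utterance)

awake = False
for phrase in transcripts():
    if awake:
        handle_in_cloud(phrase)  # only now does audio leave the room
        awake = False
    elif WAKE_WORD in phrase.lower():
        awake = True  # name heard; process whatever comes next
```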

The notion of the internet at large acting as an invisible, ever-present Swiss Army knife of knowledge for the home is admittedly pretty sweet. It pulls on our wishful heartstrings for futuristic technology. The success of Echo as a first of its kind, however, relies on how seamlessly (and quickly) the artificial intelligence within it performs. If it can hold up, or proves to hold up in further iterations, it’s exciting to think what larger systems the technology could be integrated with in the near future… We might have our command-center consciousness sooner than we thought.

With that said, inviting a little WiFi probe into your intimate living space to listen in on everything you do will take some getting used to… your thoughts?


Next Week in NYC: How the Age of Machine Consciousness is Transforming Our Lives

I’ve developed, or have been involved with, a number of imaging technologies: everything from DIY synthetic aperture radar and the MIT through-wall radar to the next generation of ultrasound imaging devices. Imagery is cool, but what the end user often wants is a way to get an answer rather than to view a reconstruction. So let’s figure that out.

We’re kicking off a discussion on how to apply deep learning to more than just beating Jeopardy champions at their own game. We’d like to apply deep learning to hard data, to imagery. Is it possible to get the computer to accurately provide the diagnosis?
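In its smallest possible form, the “image in, answer out” idea is a convolutional network mapping pixels to a label. Here’s a toy, untrained sketch with hypothetical labels; a real diagnostic model would need mountains of labeled scans and careful clinical validation:

```python
# Toy "image in, diagnosis out" network (untrained, for illustration).
import torch
import torch.nn as nn

classes = ["normal", "abnormal"]  # hypothetical labels

net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, len(classes)),
)

scan = torch.randn(1, 1, 64, 64)  # stand-in for a grayscale ultrasound frame
pred = net(scan).argmax(dim=1)
print("diagnosis:", classes[pred.item()])
```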

I helped organize a seminar series/discussion panel in New York City on November 13th (you know, for those readers who are closer to New York than to Munich). The panel includes David Ferrucci (the guy who led the IBM Watson program), MIT astrophysicist Max Tegmark, and the person who created genetic sequencing on a chip, Jonathan Rothberg. As the vanguard of creativity and enthusiasm in everything technical, we’d like the Hackaday community to join the conversation.


Hacking the Sci-Fi Contest Team Requirement


We saw that some readers were not entirely happy with the team requirement for our Sci-Fi contest, which is running right now. We figured that those who do not work well with others might commit a bit of fraud to get around the requirement. But we’re delighted that someone found a much more creative solution. Why not enlist an AI to collaborate on your project?

[Colabot] is a hacker profile over on Hackaday.io driven by ELIZA, a computer program that achieves limited interaction through natural language. Supposedly you add [Colabot] to your project and ask it questions. We asked one on the profile page and are still awaiting the response. We think this itself could be a qualifying entry for the Sci-Fi contest if someone can find the right thematic spin to put on it.
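Under the hood, ELIZA-style chatter is shockingly simple: pattern-and-response pairs plus a default deflection. Here’s a pocket version with rules of our own devising, not whatever is actually running behind [Colabot]:

```python
# Pocket ELIZA: match a pattern, echo part of it back.
import re

RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bmy project\b", "Tell me more about your project."),
    (r"\bhelp\b", "What kind of help are you looking for?"),
]
DEFAULT = "Interesting. Please go on."

def reply(text):
    for pattern, template in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(reply("I need a teammate for the Sci-Fi contest"))
# Why do you need a teammate for the Sci-Fi contest?
```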

As far as contest entries go, there are only seven so far. Since everyone who submits an entry gets a T-shirt, and there are 15 total prize packages, we encourage you to post your entry as soon as possible. We want to see teams from hackerspaces, and we can cryptically tell you that good things come to teams who post their project with the “sci-fi-contest” tag early!