AI And Art Appreciation

In 2019, using AI to evaluate artwork is finally more productive than foolish. We all hope that someday soon our Roomba will judge our living habits and give unsolicited advice on how we could spruce things up with a few pictures and some natural light. There is already an extensive body of deep learning work dedicated to photo recognition, and a team in Croatia is adapting those models for use on fine art. It makes sense that everything is geared toward cameras, since most of us have a vast photographic portfolio while fine art takes longer to render. Even so, the collection on Wikiart.org is vast and already a hotbed for computer classification work, so they set to work there.

As they modify existing convolutional neural networks, they check themselves by comparing results with human ratings to keep what works and discard what flops. Fortunately, fine art has a lot of existing studies and commentary, whereas the majority of photographs in the public domain have nothing more than a file name and maybe some EXIF data. The difference here is that photograph-parsing AI can say, “That is a STOP sign,” while the fine art AI can say, “That is a memorable painting of a sign.”
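The general recipe here, transfer learning from photo-trained networks to paintings, is easy to sketch. Below is a minimal, hypothetical PyTorch example, not the Croatian team’s actual pipeline: it assumes a folder of WikiArt-style images sorted into one subdirectory per class and bolts a new classification head onto a photo-trained ResNet.

```python
# A minimal sketch (not the team's actual code): fine-tune a photo-trained CNN
# on fine-art images. Assumes a hypothetical "wikiart/" folder with one
# subdirectory per class (e.g. style or genre labels).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),          # ImageNet-sized inputs
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("wikiart", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)     # start from photo-recognition weights
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))  # new art-specific head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                # a single pass is enough for a sketch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The interesting part of the research is not this training loop but the checking step: holding the network’s judgments up against human ratings and commentary to see which architectural tweaks actually track taste.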

AI-Enabled Teletype Live Streams Nearly Coherent Conversations

If you’ve got a working Model 33 Teletype, every project starts to look like an excuse to use it. While the hammering, whirring symphony of a teleprinter going full tilt brings to mind a simpler time of room-sized computers and 300 baud connections, it turns out that a Teletype makes a decent AI conversationalist, within the limits of AI, of course.

The Teletype machine that [Hugh Pyle] used for this interesting project, a Model 33 ASR with the paper tape reader, is a nostalgia piece that figures prominently in many of his projects. As such, [Hugh] has access to tons of Teletype documentation, so when OpenAI released their GPT-2 text generation language model, he decided to use the docs as a training set for the model, and then use the Teletype to print out text generated by the model. Initial results were about as weird as you’d expect for something trained on technical docs from the 1960s. The next step was obvious: make a chat-bot out of it and stream the results live. The Teletype can be seen clattering away in the recorded stream below, using the chat history as a prompt for generating text responses, sometimes coherent, sometimes disturbing, and sometimes just plain weird.
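For the curious, the generation side of a chat-bot like this is simple to sketch with the Hugging Face transformers library. This is not [Hugh]’s actual code, and the stock “gpt2” checkpoint below stands in for his Teletype-manual fine-tune; the idea is just to feed the accumulated chat history in as the prompt and sample a reply.

```python
# A rough sketch of the generation step, assuming a GPT-2 checkpoint fine-tuned
# on Teletype manuals (the stock "gpt2" weights are used here as a stand-in).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")   # swap in the fine-tuned weights here

chat_history = "USER: how do I thread the paper tape?\n"
inputs = tokenizer(chat_history, return_tensors="pt")

output = model.generate(
    **inputs,
    max_length=120,
    do_sample=True,          # sampling is what gives the "sometimes disturbing" variety
    top_k=50,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
reply = tokenizer.decode(output[0], skip_special_tokens=True)
print(reply)                 # this is the text that would be hammered out on the Model 33
```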

Alas, the chat-bot and stream are only active a couple of times a week, so you’ll have to wait a bit to try it out. But it looks like a fun project, and we appreciate the mash-up of retro tech and AI. We’ve seen teleprinters revived for modern use before, both for texting and Tweeting, but this one almost has a mind of its own.


AI At The Edge Hack Chat

Join us Wednesday at noon Pacific time for the AI at the Edge Hack Chat with John Welsh from NVIDIA!

Machine learning was once the business of big iron like IBM’s Watson or the nearly limitless computing power of the cloud. But the power in AI is moving away from data centers to the edge, where IoT devices are doing things once unheard of. Embedded systems capable of running modern AI workloads are now cheap enough for almost any hacker to afford, opening the door to applications and capabilities that were once only science fiction dreams.

John Welsh is a Developer Technology Engineer with NVIDIA, a leading company in the Edge computing space. He’ll be dropping by the Hack Chat to discuss NVIDIA’s Edge offerings, like the Jetson Nano we recently reviewed. Join us as we discuss NVIDIA’s complete Jetson embedded AI product lineup, getting started with Edge AI, and where Edge AI is headed.


Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, May 1 at noon Pacific time. If time zones have got you down, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.

But Can Your AI Recognize Slugs?

The common garden slug is a mystery. Observing these creatures as they slowly emerge from their slimy lairs each evening, it’s hard to imagine how much damage they can do. With paradoxical speed, they can mow down row after row of tender seedlings, leaving nothing but misery in their mucusy wake.

To combat this slug menace, [Tegwyn☠Twmffat] (the [☠] is silent) is developing this AI-powered slug-busting system. The squeamish or those challenged by the ethics of slug eradication can relax: no slugs have been harmed yet. So far [Tegwyn] has concentrated on the detection of slugs, a decidedly non-trivial problem, since few AI models come pre-trained to recognize slugs.

So far, [Tegwyn] has acquired 5,712 images of slugs in their natural environment – no mean feat as they only come out at night, they blend into their background, and their slimy surface makes for challenging reflections. The video below shows moderate success of the trained model using a static image of a slug; it also gives a glimpse at the hardware used, which includes an Nvidia Jetson TX2. [Tegwyn] plans to capture even more images to refine the model and boost it up from the 50 to 60% confidence level to something that will allow for the remediation phase of the project, which apparently involves lasers. He’s willing to entertain other methods of disposal, though; perhaps a salt-shooting turret gun?
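The remediation phase hinges on that confidence number: the detector’s guesses only become targets once they clear a threshold. Here’s a toy sketch of that filtering step; the detection record format is made up for illustration and isn’t pulled from [Tegwyn]’s code.

```python
# A toy sketch of the thresholding step: the detection format here is
# hypothetical, but the idea is to ignore anything below a chosen confidence
# before handing coordinates off to whatever does the zapping.
CONFIDENCE_THRESHOLD = 0.6   # roughly where the current model tops out

detections = [
    {"label": "slug", "confidence": 0.72, "box": (120, 80, 60, 40)},
    {"label": "slug", "confidence": 0.41, "box": (300, 210, 55, 35)},   # too uncertain
    {"label": "leaf", "confidence": 0.88, "box": (50, 50, 200, 150)},   # wrong class
]

targets = [
    d for d in detections
    if d["label"] == "slug" and d["confidence"] >= CONFIDENCE_THRESHOLD
]

for t in targets:
    x, y, w, h = t["box"]
    print(f"Slug at ({x + w // 2}, {y + h // 2}) with confidence {t['confidence']:.0%}")
```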

This isn’t the first garden-tending project [Tegwyn] has tackled. You may recall The Weedinator, his 2018 Hackaday Prize entry. This slug buster is one of his entries for the 2019 Hackaday Prize, which was just announced. We’re looking forward to seeing the onslaught of cool new projects everyone will be coming up with.



Hackaday Links: March 17, 2019

There’s now an official Raspberry Pi keyboard and mouse. The mouse is a mouse clad in pink and white plastic, but the Pi keyboard has some stuff going for it. It’s small, which is what you want for a Pi keyboard, and it has a built-in USB hub. Even Apple got that idea right with the first iMac keyboard. The keyboard and mouse combo is available for £22.00.

A new Raspberry Pi keyboard and a commemorative 50p coin from the Royal Mint featuring the works of Stephen Hawking? Wow, Britain is tearing up the headlines recently.

Just because, here’s a Power Wheels Barbie Jeep with a 55 HP motor. The interesting thing to note here is how simple this build actually is. If you look at some of the Power Wheels Racing cars, they have actual diffs on the rear axle. This build gets a ton of points for the suspension, though. Somewhere out there on the Internet, there is the concept of the perfect Power Wheels conversion. There might be a drive shaft instead of a drive chain, there might be an electrical system, and someone might have figured out how someone over the age of 12 can fit comfortably in a Power Wheels Jeep. No one has done it yet.

AI is taking away our free speech! Free speech, as you’re all aware, applies to all speech in all forms, in all venues. Except you specifically can’t yell fire in a movie theater, that’s the one exception. Now AI researchers are treading on your right to free speech, an affront to the Gadsden flag flying over our compound and the ‘no step on snek’ patch on our tactical balaclava, with a Chrome plugin. This plugin filters ‘toxic’ comments with AI, but there’s an unintended consequence: people want, no, need to read what I have to say, and this will filter it out! The good news is that it doesn’t work on Hackaday because our commenting system is terrible.

This week was the 30th anniversary of the World Wide Web, first proposed on March 11, 1989 by Tim Berners-Lee. The web, and to a greater extent the Internet, is the single most impactful invention of the last five hundred years; your overly simplistic view of world history can trace modern Western hegemony and the Renaissance to Gutenberg’s invention of the printing press, and so it will be true with the Internet. Tim’s NeXT cube, in a case behind glass at CERN, will be viewed with the same reverence as Gutenberg’s first printing press (if it had survived, but you get where I’m going with this). Five hundred years from now, the major historical artifact from the 20th century will be a NeXT cube that was, coincidentally, made by Steve Jobs. If you want to get your hands on a NeXT cube, be prepared to pony up, but Adafruit has a great tutorial for running OpenStep on a virtual machine. If you want the real experience, you can pick up a NeXT keyboard and mouse relatively cheaply.

Sometimes you need an RCL box, so here’s one on Kickstarter. Yeah, it’s kind of expensive. Have you ever bought every value of inductor?

A Game Boy Supercomputer For AI Research

Reinforcement learning has been a hot topic in artificial intelligence research. This is a method where software agents make decisions and refine them over time based on analyzing resulting outcomes. [Kamil Rocki] had been exploring this field, but needed some more powerful tools. As it turned out, a cluster of emulated Game Boys running at a billion FPS was just the ticket.

The trick to efficient development of reinforcement learning systems is to be able to run things quickly. If it takes an AI one thousand attempts to clear level 1 of Super Mario Bros., you’d better hope you’re not running that in real time. [Kamil] started by coding a Game Boy emulator in C. By then implementing it in Verilog, [Kamil] was able to create a cluster of emulated Game Boys that enabled games to be run at breakneck speed, greatly speeding the training and development process.
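To see why raw frame rate matters, consider the shape of a basic reinforcement-learning loop. The sketch below is not [Kamil]’s code; “GameBoyEnv” is a hypothetical stand-in for an interface to one emulated console, and the agent is plain tabular Q-learning. Every line of it sits inside a loop that has to run millions of times, so the faster each step, the faster the training.

```python
# Skeletal RL loop, for illustration only. GameBoyEnv is a hypothetical
# Gym-style wrapper; a real one would return screen states from the emulator.
import random
from collections import defaultdict

class GameBoyEnv:
    """Stand-in environment: the real work happens in the emulator cluster."""
    def reset(self):
        return 0                                  # initial (hashed) screen state
    def step(self, action):
        state = random.randrange(1000)            # next state
        reward = random.choice([0.0, 0.0, 1.0])   # sparse reward, Mario-style
        done = random.random() < 0.01
        return state, reward, done

env = GameBoyEnv()
actions = list(range(8))                          # simplified button combinations
q = defaultdict(float)                            # Q(state, action) table
alpha, gamma, epsilon = 0.1, 0.99, 0.1

for episode in range(1000):                       # each episode is one "attempt"
    state, done = env.reset(), False
    while not done:
        if random.random() < epsilon:             # explore occasionally
            action = random.choice(actions)
        else:                                     # otherwise exploit the best known move
            action = max(actions, key=lambda a: q[(state, a)])
        next_state, reward, done = env.step(action)
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
```

When step() is backed by a Verilog cluster pushing a billion frames per second instead of a 4 MHz handheld, those millions of iterations stop being the bottleneck.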

[Kamil] goes into detail about how the work came to revolve around the Game Boy platform. After initial work with the Atari 2600, which is somewhat of a de facto standard in RL circles, [Kamil] began to explore further. He wanted an environment with a well-documented CPU, a simple display to cut down on the preprocessing required, and a wide selection of games.

The goal of the project is to allow [Kamil] to explore the transfer of knowledge from one game to another in RL systems. The aim is to determine whether for an AI, skills at Metroid can help in Prince of Persia, for example. This is arguably true for human players, but it remains to be seen if this can be carried over for RL systems.

It’s rather advanced work, on both a hardware emulation level and in terms of AI research. Similar work has been done, training a computer to play Super Mario through monitoring score and world values. We can’t wait to see where this research leads in years to come.