Hackaday Links: March 17, 2019

There’s now an official Raspberry Pi keyboard and mouse. The mouse is just a mouse, clad in pink and white plastic, but the Pi keyboard has some stuff going for it. It’s small, which is what you want for a Pi keyboard, and it has a built-in USB hub. Even Apple got that idea right with the first iMac keyboard. The keyboard and mouse combo are available for £22.00.

A new Raspberry Pi keyboard and a commemorative 50p coin from the Royal Mint featuring the works of Stephen Hawking? Wow, Britain is tearing up the headlines recently.

Just because, here’s a Power Wheels Barbie Jeep with a 55 HP motor. The interesting thing to note here is how simple this build actually is. If you look at some of the Power Wheels Racing cars, they have actual diffs on the rear axle. This build gets a ton of points for the suspension, though. Somewhere out there on the Internet, there is the concept of the perfect Power Wheels conversion. There might be a drive shaft instead of a drive chain, there might be an electrical system, and someone might have figured out how someone over the age of 12 can fit comfortably in a Power Wheels Jeep. No one has done it yet.

AI is taking away our free speech! Free speech, as you’re all aware, applies to all speech in all forms, in all venues; the one exception is that you specifically can’t yell fire in a movie theater. Now AI researchers are treading on your right to free speech, an affront to the Gadsden flag flying over our compound and the ‘no step on snek’ patch on our tactical balaclava, with a Chrome plugin. This plugin filters ‘toxic’ comments with AI, but there’s an unintended consequence: people need to read what I have to say, and this will filter it out! The good news is that it doesn’t work on Hackaday because our commenting system is terrible.

This week was the 30th anniversary of the World Wide Web, first proposed on March 11, 1989 by Tim Berners-Lee. The web, and to a greater extent the Internet, is the single most impactful invention of the last five hundred years; your overly simplistic view of world history can trace modern western hegemony and the Renaissance to Gutenberg’s invention of the printing press, and so it will be true with the Internet. Tim’s NeXT cube, in a case behind glass at CERN, will be viewed with the same reverence as Gutenberg’s first printing press (if it had survived, but you get where I’m going with this). Five hundred years from now, the major historical artifact from the 20th century will be a NeXT cube, which was, coincidentally, made by Steve Jobs. If you want to get your hands on a NeXT cube, be prepared to pony up, but Adafruit has a great tutorial for running OpenStep on a virtual machine. If you want the real experience, you can pick up a NeXT keyboard and mouse relatively cheaply.

Sometimes you need an RCL box, so here’s one on Kickstarter. Yeah, it’s kind of expensive. Have you ever bought every value of inductor?

A Game Boy Supercomputer for AI Research

Reinforcement learning has been a hot area of artificial intelligence research. It is a method where software agents make decisions and refine them over time based on analyzing the resulting outcomes. [Kamil Rocki] had been exploring this field, but needed some more powerful tools. As it turned out, a cluster of emulated Game Boys running at a billion FPS was just the ticket.

The trick to efficient development of reinforcement learning systems is to be able to run things quickly. If it takes an AI one thousand attempts to clear level 1 of Super Mario Bros., you’d better hope you’re not running that in real time. [Kamil] started by coding a Game Boy emulator in C. By then implementing it in Verilog, [Kamil] was able to create a cluster of emulated Game Boys that enabled games to be run at breakneck speed, greatly accelerating the training and development process.
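
To see why raw frame rate matters so much, consider the basic agent-environment loop at the heart of reinforcement learning: every single training step costs at least one emulated frame. Here’s a minimal sketch in Python; the GameBoyEnv class is a hypothetical toy stand-in for a gym-style wrapper around an emulator, not code from [Kamil]’s project.

```python
import random

class GameBoyEnv:
    """Toy stand-in: a real wrapper would step the emulator one frame per call."""
    BUTTONS = ["up", "down", "left", "right", "a", "b", "start", "select"]

    def reset(self):
        self.frames = 0
        return bytes(160 * 144)  # blank 160x144 screen as the first observation

    def step(self, action):
        self.frames += 1
        observation = bytes(160 * 144)
        reward = random.random()    # a real reward would come from score or progress
        done = self.frames >= 1000  # end the episode after 1000 frames
        return observation, reward, done

def run_episode(env, policy):
    """Play one episode, accumulating reward; a learner would update the policy here."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        action = policy(obs)
        obs, reward, done = env.step(action)
        total += reward
    return total

env = GameBoyEnv()
random_policy = lambda obs: random.choice(GameBoyEnv.BUTTONS)
print(run_episode(env, random_policy))  # training means running this loop endlessly
```

At 60 FPS, a thousand 1000-frame episodes is over four and a half hours of real time; at a billion FPS it’s effectively instant.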

[Kamil] goes into detail about how the work came to revolve around the Game Boy platform. After initial work with the Atari 2600, which is somewhat of a de facto standard in RL circles, [Kamil] began to explore further. The goal was an environment with a well-documented CPU, a simple display to cut down on the preprocessing required, and a wide selection of games.

The goal of the project is to allow [Kamil] to explore the transfer of knowledge from one game to another in RL systems. The aim is to determine whether, for an AI, skills learned in Metroid can help in Prince of Persia, for example. This is arguably true for human players, but it remains to be seen if the same carries over to RL systems.

It’s rather advanced work, both on a hardware emulation level and in terms of AI research. Similar work has been done, training a computer to play Super Mario by monitoring score and world values. We can’t wait to see where this research leads in years to come.

This Cardboard Box Can Tell You What It Sees

It wasn’t that long ago that talking to computers was the preserve of movies and science fiction. Slowly, voice recognition improved, and these days it’s getting to be pretty usable. The technology has moved beyond basic keywords, and can now parse sentences in natural language. [Liz Meyers] has been working with the technology, creating WhatIsThat – an AI that can tell you what it’s looking at.

Adding a camera to Google’s AIY Voice Kit makes for a versatile object identification system.

The device is built around Google’s AIY Voice Kit, which consists of a Raspberry Pi with some additional hardware and software to enable it to process voice queries. [Liz] combined this with a Raspberry Pi camera and the Google Cloud Vision API. This allows WhatIsThat to respond to users asking questions by taking a photo, and then identifying what it sees in the frame.
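
The write-up doesn’t include the source, but the photo-then-identify flow is straightforward to sketch. Here’s a rough Python outline assuming the picamera module and the google-cloud-vision client library (exact class names vary between client versions); it shows the general shape of the query, not [Liz]’s actual code.

```python
import io

from picamera import PiCamera    # Raspberry Pi camera interface
from google.cloud import vision  # Cloud Vision API client

def what_is_that(photo_path="capture.jpg"):
    # Snap a still with the Pi camera
    with PiCamera() as camera:
        camera.capture(photo_path)

    # Ask the Cloud Vision API to label whatever is in the frame
    client = vision.ImageAnnotatorClient()
    with io.open(photo_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)

    # Turn the top few labels into something the voice kit can speak
    labels = [label.description for label in response.label_annotations[:3]]
    return "I see " + ", ".join(labels) if labels else "I have no idea"

print(what_is_that())
```

The same client also exposes text_detection and logo_detection calls, which is presumably how the reading-aloud and logo-spotting tricks mentioned below are done.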

It may seem like a frivolous project to those with working vision, but there is serious potential for this technology in the accessibility space. The device can not only describe things like animals or other objects, it can also read text aloud and even identify logos. The ability of the software to go beyond simple object recognition is impressive: a video demonstration shows the AI correctly identifying a Boston Terrier, and attributing a quote to Albert Einstein.

Artificial intelligence has made a huge difference to the viability of voice recognition – because it’s one thing to understand the words, and another to understand what they mean when strung together. Video after the break.

[Thanks to Baldpower for the tip!]

Continue reading “This Cardboard Box Can Tell You What It Sees”

Stethoscopes, Electronics, and Artificial Intelligence

For all the advances in medical diagnostics made over the last two centuries of modern medicine, from the ability to peer deep inside the body with the help of superconducting magnets to harnessing the power of molecular biology, it seems strange that the enduring symbol of the medical profession is something as simple as the stethoscope. Hardly a medical examination goes by without the frigid kiss of a stethoscope against one’s chest, while we search the practitioner’s face for a telltale frown revealing something wrong from deep inside us.

The stethoscope has changed little since its invention and yet remains a valuable, if problematic, diagnostic tool. Efforts have been made over the years to address its shortcomings, but only with relatively recent advances in digital signal processing (DSP), microelectromechanical systems (MEMS), and artificial intelligence has any real progress been made. This leaves so-called smart stethoscopes poised to make a real difference in diagnostics, especially in the developing world and in austere or emergency situations.

Continue reading “Stethoscopes, Electronics, and Artificial Intelligence”

Hackaday Links: February 17, 2019

There is a population of retrocomputing enthusiasts out there, whose basements, garages, and attics have been taken over by machines of years past. Most of the time, these people concentrate on one make; you’re an Apple guy, or you’re a Commodore guy, or you’re a Ford guy, or you’re a Chevy guy. The weirdos drive around with an MSX in the trunk of an RX7. This is the auction for nobody. NASA’s Jet Propulsion Laboratory is getting rid of several tons of computer equipment, all from various manufacturers, and not very ‘vintage’ at all. Check out the list. There are CRT monitors from 2003, which means they’re great monitors that weigh as much as a person. There’s a lot of Sun equipment. If you’ve ever felt like cleaning up a whole bunch of trash for JPL, this is your chance. Grab me one of those sweet CRTs, though.

Last week, we published something on the ‘impossible’ tech behind SpaceX’s new engine. It was reasonably popular — actually significantly popular — and got picked up on Hacker News and one of the Elon-worshiping subreddits. Open that link in one tab. Now, open this link in another. Read along as a computer voice reads Hackaday words, all while soaking up YouTube ad revenue. What is our recourse? Does this constitute copyright infringement? I dunno; we don’t monetize videos on YouTube. Thanks to [MSeifert] for finding this.

Wanna see something funny? Check out the people in the comments below who are angry at a random YouTuber stealing Hackaday content, while they have an ad blocker on.

Teenage Engineering’s OP-1 is back in production. What is it and why does it matter? The OP-1 is a new class of synthesizer and sampler that kinda, sorta looks like an 80s Casio keyboard, but packed to the gills with audio capability. At one point, you could pick one of these up for $800. Now, prices are at about $1300, simply because production stopped for a while (for retooling, we’re guessing) and the rumor mill started spinning. The OP-1 is now back in production with a price tag of $1300. Wait. What? Yes, it’s another case study in marketing: the best way to find where the supply and demand curves cross is to stop production for a while, wait for the used resellers to do their thing, and then start production again with a new price tag that people are willing to pay. This is Galaxy Brain-level business management, people.

What made nerds angry this week? Before we get to that, we’re gonna have to backtrack a bit. In 2016, Motherboard published a piece that said PC Gaming Is Still Way Too Hard, because you have to build a PC. Those of us in the know realize that building a PC is as simple as buying parts and snapping them together like an expensive Lego set. It’s no big deal. A tech blog, named Motherboard, said building a PC was too hard. It isn’t even a crack at the author of the piece at this point: this is editorial decay.

And here we are today. This week, the Internet reacted to a video from The Verge on how to build a PC. The original video has been taken down, but the reaction videos are still up: here’s a good one, and here’s another. Now, there’s a lot wrong with the Verge video. They suggest using a Swiss army knife for the assembly, hopefully one with a Phillips head screwdriver. Phillips head screwdrivers still exist, by the way. Dual-channel RAM was completely ignored, and way too much thermal compound was applied to the CPU. The cable management was a complete joke. Basically, a dozen people at The Verge don’t know how to build a PC. Are the criticisms of incompetence fair? Is this like saying [Doug DeMuro]’s car reviews are invalid because he can’t build a transmission or engine, from scratch, starting from a block of steel? Ehhh… we’re pretty sure [Doug] can change his own oil, at least. And he knows to use a screwdriver, instead of a Swiss army knife with a Phillips head. In any event, here’s how you build a PC.

Hackaday writers to be replaced with AI. Thank you [Tegwyn] for the headline. OpenAI, a Musk- and Thiel-backed startup, is pitching a machine learning application that is aimed at replacing journalists. There’s a lot to unpack here, but first off: this already exists. There are companies that sell articles to outlets, and these articles are produced by ‘AI’. These articles are mostly in the sports pages. Sports recaps are a great application for ML and natural language processing; the raw data (the sports scores) are already classified, and you’re not looking for Pulitzer material in the sports pages anyway. China has AI news anchors, but Japan has Miku and artificial pop stars. Is this the beginning of the end of journalism as a profession, with all the work being taken over by machine learning algorithms? By vocation, I’m obligated to say no, but I have a different take on it. Humans can write better than AI, and the good ones are nearly as fast. Whether or not the readers care if a story is accurate or well-written is another story entirely. It will be market forces that determine if AI journalists take over, and if you haven’t been paying attention, no one cares if a news story is accurate or well-written, only if it caters to their preexisting biases.

Of course, you, dear reader, are too smart to be duped by such a simplistic view of media engagement. You’re better than that. You’re better than most people, in fact. You’re smart enough to see that most media is just placating your own ego and capitalizing on confirmation bias. That’s why you, dear reader, are the best audience. Please like, share, and subscribe for more of the best journalism on the planet.

NVIDIA’s A.I. Thinks It Knows What Games Are Supposed To Look Like

Videogames have always existed in a weird place between high art and cutting-edge technology. Their consumer-facing nature has always forced them to be both eye-catching and affordable, while remaining tasteful enough to sit on retail shelves (both physical and digital). Running in real-time is a necessity, so it’s not as if game creators are able to pre-render the incredibly complex visuals found in feature films. These pieces of software constantly ride the line between exploiting the hardware of the future and supporting the past where their true user base resides. Each pixel formed and every polygon assembled comes at the cost of a finite supply of floating-point operations that today’s silicon can deliver. Compromises must be made.

Often one of the first areas in games to fall victim to compromise is environmental model textures. Maintaining a viable framerate is paramount to a game’s playability, and elements of the background can end up getting pushed to “the background”. The resulting look of these environments is somewhat blurrier than it would otherwise have been if artists were given more time, or more computing resources, to optimize their creations. But what if you could update that ten-year-old game to take advantage of today’s processing capabilities and screen resolutions?

NVIDIA is currently using artificial intelligence to revise textures in many classic videogames to bring them up to spec with today’s monitors. Their neural network is able to fundamentally alter how a game looks without any human intervention. Is this a good thing?

Continue reading “NVIDIA’s A.I. Thinks It Knows What Games Are Supposed To Look Like”

AI on Raspberry Pi with the Intel Neural Compute Stick

I’ve always been fascinated by AI and machine learning. Google’s TensorFlow offers tutorials and has been on my ‘to-learn’ list since it was first released, although I always seem to neglect it in favor of the shiniest new embedded platform.

Last July, I took note when Intel released the Neural Compute Stick. It looked like an oversized USB stick, and acted as an accelerator for local AI applications, especially machine vision. I thought it was a pretty neat idea: it allowed me to test out AI applications on embedded systems at a power cost of about 1W. It requires pre-trained models, but there are enough of them available now to do some interesting things.

You can add a few of them in a hub for parallel tasks. Image credit: Intel Corporation.

I wasn’t convinced I would get great performance out of it, and forgot about it until last November when they released an improved version. Unambiguously named the ‘Neural Compute Stick 2’ (NCS2), it was reasonably priced and promised a 6-8x performance increase over the last model, so I decided to give it a try to see how well it worked.

I took a few days off work around Christmas to set up Intel’s OpenVINO Toolkit on my laptop. The installation script provided by Intel wasn’t particularly user-friendly, but it worked well enough and included several example applications I could use to test performance. I found that face detection was possible with my webcam in near real-time (something like 19 FPS), and pose detection at about 3 FPS. So in accordance with the holiday spirit, it knows when I am sleeping, and knows when I’m awake.

That was promising, but the NCS2 was marketed as allowing AI processing on edge computing devices. I set about installing it on the Raspberry Pi 3 Model B+ and compiling the application samples to see if it worked better than previous methods. This turned out to be more difficult than I expected, and the main goal of this article is to share the process I followed and save some of you a little frustration.
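
For context, here’s roughly what a minimal OpenVINO inference script looked like with the 2019-era Python API (IENetwork and IEPlugin; later releases moved to IECore). The model file names are placeholders for a network in Intel’s IR format, and this is a sketch of the general pattern rather than the exact sample code shipped with the toolkit.

```python
import cv2  # OpenCV, for camera capture and preprocessing
from openvino.inference_engine import IENetwork, IEPlugin

# Placeholder paths: an IR-format model produced by the Model Optimizer
MODEL_XML = "face-detection-adas-0001.xml"  # network topology
MODEL_BIN = "face-detection-adas-0001.bin"  # trained weights

# "MYRIAD" targets the Movidius VPU inside the Neural Compute Stick
plugin = IEPlugin(device="MYRIAD")
net = IENetwork(model=MODEL_XML, weights=MODEL_BIN)
input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape  # network expects NCHW input
exec_net = plugin.load(network=net)

# Grab one webcam frame and reshape it to the network's input layout
ok, frame = cv2.VideoCapture(0).read()
blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))

result = exec_net.infer(inputs={input_blob: blob})
print({name: out.shape for name, out in result.items()})
```

Getting that same pattern running on the Pi is where the real frustration lives, and that’s what the rest of the article walks through.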

Continue reading “AI on Raspberry Pi with the Intel Neural Compute Stick”