Chatterbox Voice Assistant Knows To Keep Quiet For Privacy

Cruising through the children’s hands-on activity zone at Maker Faire Bay Area, we see kids building a cardboard enclosure for the Chatterbox smart speaker kit. It would be tempting to dismiss the little smiling box as “just for kids,” but doing so would overlook something more interesting: an alternative to the data-mining corporations that dominate the smart speaker market. People are rightly concerned about Amazon Echo and Google Home, always-listening retail devices that send data back to their corporate data centers. In order to be appropriate for children, Chatterbox is none of those things. It only listens when a button is pressed, and its online model is designed to support the mission of the CCFC (Campaign for a Commercial-Free Childhood).

Getting started with a Chatterbox is much like getting started with other products designed to encourage young makers. The hardware — Raspberry Pi, custom HAT, speaker, and button inside a cardboard enclosure — is conceptually similar to a Google AIY Voice kit but paired with an entirely different software experience. Instead of signing in to a Google developer account, children create their own voice interaction behavior in a block-based programming environment resembling MIT Scratch. Moving online, Chatterbox interactions draw on resources from similarly privacy-minded services like DuckDuckGo web search. The voice interaction foundation is a fork of Mycroft, with changes focused on education and child-friendliness. If a Chatterbox is unsure whether a query was for “Moana” or “marijuana,” it will decide in favor of the Disney movie.
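
The heavy lifting happens inside the Mycroft fork, but the idea behind that disambiguation is easy to sketch: when the speech recognizer returns several candidate transcripts, discard anything a child shouldn’t hear before picking a winner. Here is a minimal Python illustration — the blocklist and function are entirely hypothetical, not Chatterbox’s actual code:

```python
BLOCKLIST = {"marijuana"}  # illustrative; a real list would be far longer

def pick_transcript(hypotheses):
    """hypotheses: (text, confidence) pairs from the recognizer."""
    safe = [(text, conf) for text, conf in hypotheses
            if not any(word in BLOCKLIST for word in text.lower().split())]
    # Better to stay quiet than to guess something unsafe.
    return max(safe, key=lambda h: h[1])[0] if safe else None

print(pick_transcript([("marijuana", 0.52), ("moana", 0.48)]))  # -> moana
```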

Many of these privacy-conscious pieces are open source or freely available, but Chatterbox pulls them all together into a single package that’s an appealing alternative to the big-brand options. Based on conversations during Hackaday’s Maker Faire meetup, there’s a market beyond parents of young children, from technically aware adults who lack web API coding skills to senior citizens unaware of the dark corners of the web. The Chatterbox Kickstarter campaign has a few more weeks to run but has already reached its funding goal. We look forward to having a privacy-minded option in voice assistants.

Little Lamp To Learn Longer Leaps

Reinforcement learning is a subset of machine learning in which the machine is scored on its performance by an evaluation function. Over the course of a training session, behavior that improves the final score is positively reinforced, gradually building toward an optimal solution. [Dheera Venkatraman] thought it would be fun to use reinforcement learning to make a little robot lamp move. But before that can happen, he had to build the hardware and prove its basic functionality with a manual test script.
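
To make the “score, then reinforce” loop concrete, here is a toy example in Python: an epsilon-greedy bandit that learns which of three made-up gaits hops farthest. The simulated rewards stand in for a real evaluation function — this is not [Dheera]’s training code, just the smallest possible illustration of the principle:

```python
import random

# Three hypothetical gaits the lamp could try; the "true" payoffs below
# simulate an evaluation function such as distance hopped per attempt.
actions = ["short_hop", "long_hop", "lunge"]
true_mean = {"short_hop": 0.2, "long_hop": 0.9, "lunge": 0.5}

value = {a: 0.0 for a in actions}   # learned estimate of each gait's score
count = {a: 0 for a in actions}

for _ in range(2000):
    # Epsilon-greedy: usually exploit the best-scoring gait so far,
    # occasionally explore a random one.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=value.get)
    reward = random.gauss(true_mean[a], 0.1)    # noisy evaluation
    count[a] += 1
    value[a] += (reward - value[a]) / count[a]  # reinforce toward the mean

print(max(actions, key=value.get))  # almost always "long_hop"
```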

Inspired by the hopping logo of Pixar Animation Studios, this particular form of locomotion has a few counterparts in the natural world. But hoppers of the natural world don’t take the shape of a Luxo lamp, making this project an interesting challenge. [Dheera] published all of his OpenSCAD files for this 3D-printed lamp so others can join in the fun. Inside the lamp head, an LED ring illuminates where we expect a light bulb while leaving room in the center for a camera. The articulation servos are driven by a PCA9685 I2C PWM driver board, and he has written and released code to interface such boards with the Robot Operating System (ROS), which orchestrates the lamp’s features. This completes the underlying hardware components and associated software foundation for this robot lamp.
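
[Dheera]’s own driver speaks ROS, but a PCA9685 is also easy to exercise directly from Python on a Raspberry Pi. The sketch below uses Adafruit’s ServoKit library rather than his ROS node, and the channel numbers and angles are invented for illustration — roughly the shape a hand-coded baby hop could take:

```python
import time
from adafruit_servokit import ServoKit  # Adafruit driver for the PCA9685

kit = ServoKit(channels=16)   # the PCA9685 exposes 16 PWM channels

BASE, ELBOW = 0, 1            # hypothetical channel assignments

def hop():
    """Crouch, then spring: a crude two-servo hop gesture."""
    kit.servo[BASE].angle = 120   # crouch down
    kit.servo[ELBOW].angle = 60
    time.sleep(0.3)
    kit.servo[BASE].angle = 60    # snap upward
    kit.servo[ELBOW].angle = 120
    time.sleep(0.3)

for _ in range(3):                # three tentative baby hops
    hop()
```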

Once all the parts were printed, the electronics wired, and everything assembled, [Dheera] hacked together a simple “Hello World” script to verify that his mechanical design was good enough to get started. The video embedded after the break was taken at OSH Park’s Bring-A-Hack after-party following Maker Faire Bay Area 2019. This motion sequence was frantically hand-coded in 15 minutes, but these tentative baby hops will serve as a great baseline. Future hopping performance of control algorithms trained by reinforcement learning will show how far this lamp has grown from its humble “Hello World” hop.

[Dheera] had previously created the shadow clock and is no stranger to ROS, having created the ROS topic text visualization tool for debugging. We will be watching to see how robot Luxo evolves; hopefully it doesn’t find a way to cheat! Want to play with reinforcement learning, but prefer wheeled robots? Here are a few options.

Great Hacks At Our Maker Faire Bay Area Meetup; From Helmets And Goggles To Rovers And String

When Maker Faire Bay Area closed down early Saturday evening, the fun did not stop: there’s a strong pool of night owls among the maker demographic. When the gates close, the after-parties around San Mateo run late into the night, and Hackaday’s meetup is a strong favorite.

This year Hackaday and Tindie joined forces with Kickstarter and moved our combined event to B Street Station, a venue with more space for hacks than in previous years. The drinks started flowing, great people started chatting, and everyone basked in an ever-present glow of LEDs. A huge amount of awesome hardware showed up, so let’s take a look at the demos and stunts that came out to play.

Use Movie Tools To Make Your Robot Move Like Movie Robots

Robots of the entertainment industry are given life by character animation, where the goal is to emotionally connect with the audience to tell a story. In comparison, real-world robot movement design focuses more on managing physical limitations like sensor accuracy and power management. Tools for robot control are thus more likely to resemble engineering control consoles than artistic character animation tools. When the goal is to build expressive physical robots, we’ll need tools like the ROBiTS project to bridge the two worlds.

As an exhibitor at Maker Faire Bay Area 2019, this group showed off their first demo: a plugin for Autodesk Maya that translates joint movements into the digital pulses controlling standard RC servos. Maya can import the same STL files fed to 3D printers, easily creating a digital representation of a robot. Animators skilled in Maya can then use all the tools they are familiar with, working in the full context of a robot’s structure in the digital world. This is a far more productive workflow for animation artists than manipulating a long, flat list of unintuitive slider controls or writing code by hand.
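
At the heart of any such bridge is the mapping from a keyframed joint angle to an RC servo pulse width: a typical hobby servo listens for a 50 Hz signal whose pulse spans roughly 1000 to 2000 microseconds across its travel. The plugin’s actual conversion isn’t published, so the numbers in this Python sketch are conventional defaults rather than ROBiTS code:

```python
def angle_to_pulse_us(angle_deg, travel_deg=180.0,
                      min_us=1000.0, max_us=2000.0):
    """Map a joint angle in [0, travel_deg] onto a servo pulse width (us)."""
    angle_deg = max(0.0, min(travel_deg, angle_deg))  # clamp to servo travel
    return min_us + (angle_deg / travel_deg) * (max_us - min_us)

# A keyframed elbow at 45 degrees becomes a 1250 us pulse:
print(angle_to_pulse_us(45))  # -> 1250.0
```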

Of course, a virtual world offers some freedoms that are not available in the physical world. Real parts are not allowed to intersect, for one, and then there are other pesky physical limitations like momentum and center of gravity. Forgetting to account for them results in a robot that falls over! One of the follow-up projects on their to-do list is a bridge in the other direction: bringing physical-world sensors like an IMU into the digital representation in Maya.

We look forward to seeing more results on their YouTube channel. They join the ranks of other animated robots at Maker Faire, and they are a promising addition to the toolbox for robot animation, which ranges from Disney Research’s kinetic wires to Billy Whiskers, which linked servos to Adobe Animate.

Nvidia Teaching Robots To Master IKEA Kitchens

The current wave of excitement around machine learning kicked off when graphics processors were repurposed to make training deep neural networks practical. Nvidia found themselves the engine of a new revolution and seized their opportunity to help push the frontiers of research. Their research lab in Seattle will focus on one such field: making robots smart enough to work alongside humans in an IKEA kitchen.

Today’s robots are mostly industrial machines that require workspaces designed for robots. They run day and night, performing repetitive tasks, usually inside cages to keep squishy humans out of harm’s way. Robots will need to be a lot smarter about their surroundings before we can safely dismantle those cages. While some industrial robots are making a start in this arena, they have a hard time justifying their price premium. (See, for example, the financial difficulties of Rethink Robotics, maker of the Baxter and Sawyer robots.)

So there’s a lot of room for improvement in this field, and this evolution will need a training environment offering tasks of varying difficulty, anywhere from the rigorously structured environments where robots work well today to the dynamic, unstructured environments where robots are hopelessly lost. Lab lead Dr. Dieter Fox explained how a kitchen is ideal. A meticulously cleaned and organized kitchen is very similar to an industrial setting. From there, we can gradually make the kitchen more challenging for a robot. For example, today’s robots can easily pick up a can with its rigid, regular shape, but what about a half-full bag of flour? And from there, they can learn to pick up a piece of fresh fruit without bruising it. These tasks share challenges with many other tasks outside of a kitchen.

This isn’t about building a must-have home cooking robot; it’s about working through the range of challenges shared with common kitchen tasks. The lab has a lot of neat hardware, but its success will be measured by the software, and like all research, published results should be reproducible by other labs. You don’t have a high-end robotics lab in your house, but you do have a kitchen. That’s why it’s not just any kitchen, but an IKEA kitchen: they are standardized, affordable, and available around the world for other robotics researchers to benchmark against.

Most of us can experiment in a kitchen, IKEA or not. We have access to all the other tools we need: affordable AI hardware from Google, from BeagleBone, and from Nvidia. And we certainly have no shortage of robot arms and manipulators on these pages, ranging from a small laser-cut MeArm to our 2018 Hackaday Prize winner Dexter.

Volkswagen EGon Is A Rolling Electric Car Circuit Sculpture

Over the past few decades of evolution, cars have grown to incorporate a mind-boggling number of electrical components, from parking distance sensors to the convenience of power locks and windows to in-car entertainment systems rivaling home theaters. Normally this interconnected system’s complexity is hidden between exterior sheet metal and interior plastic trim, but a group of students in Volkswagen’s vocational training program decided to show off the car’s internal beauty by building the Volkswagen eGon exhibit.

Seeing a super-minimalist Volkswagen electric Golf on the move (short Twitter video embedded below), we are immediately reminded of circuit sculptures. We saw some great projects in our circuit sculpture contest, but the eGon shows what can be done with the resources of a Volkswagen training center. Parts are bolted to the car’s original structure where possible; the rest are held in their representative positions by thin metal tube frames. At this scale, they look just like the brass rods used in small circuit sculptures! Certain component enclosures were replaced with transparent pieces, or had a window cut into them for visibility.

This exhibit was built for IdeenExpo, an event to expose students to science and technology, showing them what’s under the cover of this “see-through car,” with internal components tagged with QR codes pointing to additional information. The number of electronic modules inside a car is only going to continue rising with the coming wave of electric and/or self-driving cars. Even if the timing of their arrival is debatable, we know we’ll need brainpower to help answer questions we don’t even know to ask yet. The eGon is doing a great job attracting attention and inviting bright young minds to participate.
