Remoticon Video: Circuit Sculpture Workshop

Circuit Sculpture was one of our most anticipated workshops of Hackaday Remoticon 2020, and now it’s ready for those who missed it to enjoy. A beginning circuit sculptor could hardly ask for more than this workshop, which highlights three different approaches to building firefly circuit sculptures and is led by some of the most prominent people to ever bend brass and components to their will — Jiří Praus, Mohit Bhoite, and Kelly Heaton.

For starters, you’ll learn the different tools and techniques that each of them uses to create their sculptures. For instance, Kelly likes to use water-based clay to hold components in specific orientations while forming the sculpture and soldering it all together. Jiří and Mohit, on the other hand, tend to use tape. The point is that there is no right or wrong way; the goal is to have all of these tips and tricks under your belt as you sculpt. And that’s what this workshop is really about.

Continue reading “Remoticon Video: Circuit Sculpture Workshop”

Shhh… Robot Vacuum Lidar Is Listening

There are millions of IoT devices out there in the wild, and though they aren’t conventional computers, they can be hacked by alternative methods. From firmware hacks to social engineering, there are tons of ways to break into these little devices. Now, four researchers at the National University of Singapore and one from the University of Maryland have published a new hack that allows audio capture using reflected lidar measurements.

The hack revolves around the fact that sound waves traveling through a room cause the objects inside to vibrate slightly. When a lidar device bounces a beam off an object, the accuracy of the receiving system allows for measurement of the slight vibrations caused by the sound in the room. The experiment used a human voice transmitted from a simple speaker as well as a sound bar, and the surfaces used for reflection were common household items such as a trash can, cardboard box, takeout container, and polypropylene bags. Robot vacuum cleaners will usually be facing such objects on a day-to-day basis.

The bigger issue is writing the filtering algorithm that can extract the relevant information and separate out the noise, and this is where the bulk of the research paper is focused (PDF). Current developments in deep learning make the hack easier to implement. Commercial lidar is designed for mapping, and is therefore optimized for reflections off of non-reflective surfaces. This is the opposite of what you want for a laser microphone, which usually targets a reflective surface like a window to pick up latent vibrations from sound inside a room.
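Much of the cleanup happens before any neural network sees the data: the raw return-intensity samples have to be massaged into something resembling an audio signal. As a rough illustration (not the paper’s actual pipeline), here’s a minimal Python sketch that band-pass filters a hypothetical series of lidar intensity readings down to the speech band; the sample rate, file name, and filter band are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, low=100.0, high=3000.0, order=4):
    """Keep roughly the speech band; drop DC drift and high-frequency sensor noise."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, x)

# Hypothetical input: intensity readings from repeated lidar returns off a single
# object, recorded at the sensor's effective sample rate (values are assumptions).
fs = 8000                               # samples per second
raw = np.load("lidar_intensity.npy")    # placeholder file of raw intensity samples

audio = bandpass(raw - raw.mean(), fs)  # remove the DC offset, then filter
audio /= np.max(np.abs(audio))          # normalize to [-1, 1] for playback
```

Even after a pass like this, the recovered signal is far too noisy to listen to directly, which is why the classification step matters.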

Deep learning algorithms are employed to get around this shortfall, identifying speech as well as audio sequences despite the sensor itself being less than ideal, and the team reports achieving an accuracy of 90%. This lidar-based spying is even possible when the robot in question is docked, since the system can be configured to turn on specific sensors, but the exploit depends on the ability to alter the firmware, something the team accomplished using the Dustcloud exploit, which was presented at DEF CON in 2018.

You don’t need to tear down your robot vacuum cleaner for this experiment, since there are a lot of lidar-based rovers out there. We’ve even seen open source lidar sensors that are better suited for experimental purposes.

Thanks for the tip, [Qes]!

Robotics Club Teaches Soldering

Oregon State University must be a pretty good place to go to school if you want to hack on robots. Their robotics club, which looks active and impressive, has a multi-part video series on how to solder surface mount components that is worth watching. [Anthony] is the team lead for their Mars Rover team and he does the job with some pretty standard-looking tools.

The soldering station in use is a sub-$100 Aoyue with both a regular iron and hot air. There’s also a cheap USB microscope that looks like it has a screen, but it’s covered in blue tape that holds it to an optical microscope. So there are no exotic tools that you’d need a university affiliation to match.

Continue reading “Robotics Club Teaches Soldering”

Robots Can Finally Answer, Are You Talking To Me?

Voice assistants, love them or hate them, are becoming more and more commonplace. One problem arises when multiple devices are listening in the same place: when a command is given, which device should answer? Researchers [Karan Ahuja], [Andy Kong], [Mayank Goel], and [Chris Harrison] at CMU’s Future Interfaces Group have an answer; smart assistants should try to infer whether the user is facing the device they want to talk to. They call it direction-of-voice, or DoV.

Currently, smart assistants use a simple race: whichever device hears the command first answers, the reasoning being that the device you are closest to will likely hear it first. However, in situations with echoes, or when you’re equidistant from multiple devices, the outcome can seem arbitrary to a user.

The implementation of DoV uses an Extra-Trees Classifier from the Python scikit-learn toolkit. Several other machine learning algorithms were considered, but ultimately efficiency won out and Extra-Trees was selected. Another interesting facet of the research was determining what facing really means. The team had human ‘listeners’ stand in for smart assistants. A ‘talker’ would speak the key phrase while the ‘listener’ determined whether the talker was facing them or not. Based on their definition of facing, the system can determine if someone is facing the device with 90% accuracy, rising to 93% with per-room calibration.
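Since the classifier itself is stock scikit-learn, the core of a DoV-style model is only a few lines. Here’s a minimal sketch using ExtraTreesClassifier; the feature matrix below is random placeholder data standing in for the acoustic features the paper actually computes.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: one row per utterance, columns are acoustic features
# (the paper derives its own feature set; these shapes are illustrative).
rng = np.random.default_rng(0)
X = rng.random((500, 40))          # 500 utterances, 40 features each
y = rng.integers(0, 2, 500)        # 1 = talker facing the device, 0 = not

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = ExtraTreesClassifier(n_estimators=200)
clf.fit(X_train, y_train)
print(f"facing / not-facing accuracy: {clf.score(X_test, y_test):.2f}")
```

With real features in place of the random arrays, swapping in a different sklearn classifier is a one-line change, which is presumably how the team benchmarked the alternatives before settling on Extra-Trees for its efficiency.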

Their algorithm and the data they collected have been open-sourced on GitHub. Perhaps when you’re building your own voice assistant, you can incorporate DoV to improve wake-word accuracy.

Continue reading “Robots Can Finally Answer, Are You Talking To Me?”

Robots Learning To Understand Their Surroundings

Today it is pretty easy to build a robot with an onboard camera and have fun manually driving through that first-person view. But builders with dreams of autonomy quickly learn there is a lot of work between camera installation and autonomously executing a “go to chair” command. Fortunately, we can draw upon work such as the View Parsing Network by [Bowen Pan, Jiankai Sun, et al].

When a camera image comes into a computer, it is merely a large array of numbers representing red, green, and blue color values, and our robot has no idea what that image represents. Over the past few years, computer vision researchers have found pretty good solutions for the problems of image classification (“is there a chair?”) and segmentation (“which pixels correspond to the chair?”). While useful for building an online image search engine, this is not quite enough for robot navigation.

A robot needs to translate those pixel coordinates into a real-world layout, and this is the problem the View Parsing Network offers to solve. Detailed in Cross-view Semantic Segmentation for Sensing Surroundings (DOI 10.1109/LRA.2020.3004325), the system takes in multiple camera views looking all around the robot. Results of image segmentation are then synthesized into a 2D top-down segmented map of the robot’s surroundings. (“Where is the chair located?”)
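The View Parsing Network learns this camera-to-map transform end-to-end, so there is no hand-written projection inside it. For intuition, though, the classical geometric equivalent looks something like the sketch below, which assumes you already have per-pixel depth and known camera intrinsics (all names and parameters here are illustrative):

```python
import numpy as np

def pixels_to_topdown(seg, depth, fx, cx, grid_size=100, cell_m=0.05):
    """Splat per-pixel class labels into a top-down grid in front of the robot.

    seg:   (H, W) integer class labels from an image segmenter
    depth: (H, W) per-pixel depth in meters (e.g. from an RGB-D camera)
    fx, cx: horizontal pinhole camera intrinsics
    """
    h, w = seg.shape
    grid = np.zeros((grid_size, grid_size), dtype=np.int32)
    us = np.tile(np.arange(w), (h, 1))       # pixel column indices
    x = (us - cx) * depth / fx               # lateral offset in meters
    z = depth                                # forward distance in meters
    gx = (x / cell_m + grid_size / 2).astype(int)
    gz = (z / cell_m).astype(int)
    ok = (gx >= 0) & (gx < grid_size) & (gz >= 0) & (gz < grid_size)
    grid[gz[ok], gx[ok]] = seg[ok]           # later pixels overwrite earlier ones
    return grid
```

Running one of these per camera and merging the grids would give a 360-degree top-down map in the same spirit as the paper’s output, minus the learned robustness to noisy segmentation and missing depth that makes the VPN approach interesting.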

The authors documented how to train a view parsing network in a virtual environment, and described the procedure to transfer a trained network to run on a physical robot. Today this process demands a significantly higher skill level than “download Arduino sketch” but we hope such modules will become more plug-and-play in the future for better and smarter robots.

[IROS 2020 Presentation video (duration 10:51) requires free registration, available until at least Nov. 25th 2020. One-minute summary embedded below.]

Continue reading “Robots Learning To Understand Their Surroundings”

Walmart Gives Up On Stock-Checking Robots

We’ve seen The Jetsons, Star Wars, and Silent Running. In the future, all the menial jobs will be done by robots. But Walmart is reversing plans to have six-foot-tall robots scan store shelves to check stock levels. The robots, from a company called Bossa Nova Robotics, apparently worked well enough, and Walmart had promoted the idea at many investor-related events, promising that robot workers would reduce labor costs while better stock levels would increase sales.

So why did the retail giant say no to these ‘droids? Apparently, they found better ways to check stock and, according to a quote in the Wall Street Journal’s article about the decision, shoppers reacted negatively to sharing the aisle with the roving machines.

The robots didn’t just check stock. They could also check prices and find misplaced items. You can see a promotional video about the device below.

Continue reading “Walmart Gives Up On Stock-Checking Robots”

Building Walks With Robot Legs

The Shanghai Evolution Shift company has just pulled off one of the most impressive robotic projects we’ve ever seen – making a building walk using 198 robotic legs. We’ve all seen structural relocation documentaries where large buildings are moved to new locations. This involves jacking up the building and installing a supporting platform on wheels, then carefully towing the building to its new site.

But the T shape of the five-story, 7,600-ton Lagena elementary school was problematic, and the route to the new site involved taking a curved path and rotating the building. This ruled out the more traditional methods of relocation, so robot legs came to the rescue. It took 18 days for the building to walk 62 meters and rotate 21 degrees to its new home. This project is part of a trend to preserve historic architecture rather than bulldoze everything to make space for modern buildings.

After watching the video below, we think you’ll agree that this is a unique application of robotics and an amazing engineering feat. Disclaimer – don’t try this at home. Thanks to [Chuckz] for sending us this tip.

Continue reading “Building Walks With Robot Legs”