Robots Learning To Understand Their Surroundings

Today it is pretty easy to build a robot with an onboard camera and have fun manually driving it around through that first-person view. But builders with dreams of autonomy quickly learn there is a lot of work between camera installation and autonomously executing a “go to chair” command. Fortunately we can draw upon work such as the View Parsing Network by [Bowen Pan, Jiankai Sun, et al].

When a camera image comes into a computer, it is merely a large array of numbers representing red, green, and blue color values, and our robot has no idea what that image represents. Over the past several years, computer vision researchers have found pretty good solutions for the problems of image classification (“is there a chair?”) and segmentation (“which pixels correspond to the chair?”). While useful for building an online image search engine, this is not quite enough for robot navigation.

A robot needs to translate those pixel coordinates into a real-world layout, and this is the problem the View Parsing Network sets out to solve. Detailed in Cross-view Semantic Segmentation for Sensing Surroundings (DOI 10.1109/LRA.2020.3004325), the system takes in multiple camera views looking out all around the robot. The results of image segmentation are then synthesized into a 2D top-down segmented map of the robot’s surroundings. (“Where is the chair located?”)
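The overall data flow is easier to see in code than in prose. Below is a loose, hypothetical PyTorch sketch of the idea described in the paper, not the authors' implementation: each first-person camera view is encoded, a per-view transform module remaps those features toward a top-down layout, the views are fused by summation, and a decoder produces the top-down semantic map. The module names, layer sizes, and image dimensions are all our own assumptions.

```python
# Hypothetical sketch of a cross-view semantic segmentation pipeline,
# loosely following the View Parsing Network idea; not the authors' code.
import torch
import torch.nn as nn

class ViewTransform(nn.Module):
    """Maps first-person-view features toward a top-down feature layout."""
    def __init__(self, hw):
        super().__init__()
        # A fully connected layer lets every input pixel influence every
        # output map cell, which a plain convolution cannot do.
        self.fc = nn.Sequential(
            nn.Flatten(start_dim=2),       # (B, C, H, W) -> (B, C, H*W)
            nn.Linear(hw * hw, hw * hw),
            nn.ReLU(),
        )

    def forward(self, feat):               # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        return self.fc(feat).view(b, c, h, w)

class CrossViewNet(nn.Module):
    def __init__(self, num_views=4, num_classes=10, channels=64, hw=32):
        super().__init__()
        self.encoder = nn.Sequential(      # shared first-person-view encoder
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.transforms = nn.ModuleList(
            [ViewTransform(hw) for _ in range(num_views)]
        )
        self.decoder = nn.Conv2d(channels, num_classes, 1)  # per-cell classes

    def forward(self, views):              # views: list of (B, 3, 128, 128)
        fused = sum(x(self.encoder(v)) for v, x in zip(views, self.transforms))
        return self.decoder(fused)         # (B, num_classes, 32, 32) map

# Quick shape check with random tensors standing in for four camera views.
net = CrossViewNet()
views = [torch.randn(1, 3, 128, 128) for _ in range(4)]
print(net(views).shape)                    # torch.Size([1, 10, 32, 32])
```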

The authors documented how to train a view parsing network in a virtual environment, and described the procedure to transfer a trained network to run on a physical robot. Today this process demands a significantly higher skill level than “download Arduino sketch” but we hope such modules will become more plug-and-play in the future for better and smarter robots.

[IROS 2020 Presentation video (duration 10:51) requires free registration, available until at least Nov. 25th 2020. One-minute summary embedded below.]

Continue reading “Robots Learning To Understand Their Surroundings”

Quadcopter With Tensegrity Shell Takes A Beating And Gets Back Up

Many of us have become familiar with the distinctive sound of multirotor toys, frequently punctuated by the sharp crack of a crash. We then have to pick up the machine and repair any damage before the flying fun can resume. This is fine for a toy, but autonomous fliers will need to shake it off and get back to work without human intervention. [Zha et al.] of UC Berkeley’s HiPeRLab have invented a resilient design to do so.

We’ve seen increased durability from flexible frames, but those left the propellers largely exposed. Protective bumpers and cages are not new, either, but this icosahedron (twenty-sided) tensegrity structure is far more durable than the norm. Tests verified it can survive impact with a concrete wall at a speed of 6.5 meters per second. Tensegrity is a lot of fun to play with, letting us build intuition-defying structures, and here the tensegrity elements dissipate impact energy, preventing damage to fragile components like the propellers and electronics.

But surviving an impact and falling to the ground in one piece is not enough. For independent operation, the craft needs to be able to get itself back in the air. Fortunately the brain of this quadcopter has been taught the geometry of an icosahedron. Starting from the face it landed on, it can autonomously devise a plan to flip itself upright by applying bursts of power to selected propeller motors. It rotates itself face by face, working its way to an upright orientation for takeoff, at which point it is back in business.
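The planning step can be illustrated with a small graph search. The hypothetical Python sketch below is not the HiPeRLab code: it builds the face-adjacency graph of an icosahedron from a standard face list (two faces are neighbors when they share an edge) and uses breadth-first search to find the shortest chain of face-to-face flips from the landing face to the face that leaves the propellers pointing up. The face numbering and the choice of "upright" face are assumptions for illustration.

```python
# Hypothetical sketch of planning a self-righting flip sequence over an
# icosahedron's faces; not the HiPeRLab implementation.
from collections import deque

# Standard icosahedron face list: 20 triangles over 12 vertex indices.
FACES = [
    (0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
    (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
    (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
    (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1),
]

def adjacency(faces):
    """Two faces are neighbors when they share an edge (two vertices)."""
    adj = {i: [] for i in range(len(faces))}
    for i, a in enumerate(faces):
        for j, b in enumerate(faces):
            if i < j and len(set(a) & set(b)) == 2:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def flip_plan(start_face, upright_face, adj):
    """Breadth-first search for the shortest sequence of face flips."""
    queue = deque([[start_face]])
    seen = {start_face}
    while queue:
        path = queue.popleft()
        if path[-1] == upright_face:
            return path
        for nxt in adj[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

adj = adjacency(FACES)
assert all(len(n) == 3 for n in adj.values())   # every face has 3 neighbors
print(flip_plan(start_face=13, upright_face=0, adj=adj))
```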

We have a long way to go before autonomous drone robots can operate safely and reliably. Right now the easy answer is to fly slowly, but that also drastically cuts into efficiency and effectiveness. Having flying robots that are resilient against flying mistakes at speed, and can also recover from those mistakes, will be very useful in exploration of aerial autonomy.

[IROS 2020 Presentation video (duration 14:16) requires free registration, available until at least Nov. 25th 2020. One-minute summary embedded below]

Continue reading “Quadcopter With Tensegrity Shell Takes A Beating And Gets Back Up”

Flexible Actuators Spring Into Action

Most experiments in flexible robot actuators are based around pneumatics, but [Ayato Kanada] and [Tomoaki Mashimo] have been working on using a coiled spring as the moving component of a linear actuator. Building on their flexible ultrasonic motor (FUSM), [Yunosuke Sato] assembled a pair of FUSMs into a closed-loop actuator with motion control in two dimensions.

A single FUSM is pretty interesting by itself: its coiled spring is the only mechanical moving part. An earlier paper published by [Kanada] and [Mashimo] laid out how to push the spring through a hole in a metal block acting as the stator of this motor. Piezoelectric devices attached to that block minutely distort it in a controlled manner, resulting in linear motion of the spring.

For closed-loop feedback, the electrical resistance from the free end of the spring to the stator block can be measured and converted to linear distance to within a few millimeters. However, the acting end of the spring might be deformed by stretching or bending, which makes calculating its actual position difficult. Accounting for such deformation is a future topic for this group of researchers.
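As a rough illustration of that feedback scheme, the hypothetical sketch below maps a measured spring resistance to an extension estimate using a linear calibration, then uses it in one step of a simple proportional position loop. The calibration constants, gain, and the read_resistance()/set_drive() hardware hooks are all assumptions for the sketch, not values or interfaces from the paper.

```python
# Hypothetical sketch of resistance-based position feedback for a FUSM;
# calibration constants and hardware hooks are made up for illustration.

R_AT_ZERO = 1.20      # ohms with the spring fully retracted (assumed)
OHMS_PER_MM = 0.015   # resistance change per millimeter of travel (assumed)

def resistance_to_position(r_ohms):
    """Convert measured spring resistance to estimated extension in mm."""
    return (r_ohms - R_AT_ZERO) / OHMS_PER_MM

def position_step(target_mm, read_resistance, set_drive, kp=0.8):
    """One iteration of a proportional position loop.

    read_resistance() and set_drive() stand in for the real measurement
    and piezo drive interfaces, which depend on the actual hardware.
    """
    position = resistance_to_position(read_resistance())
    error = target_mm - position
    set_drive(kp * error)          # signed drive command toward the target
    return position, error

# Example with fake hardware: pretend the spring sits at 30 mm extension.
if __name__ == "__main__":
    fake_read = lambda: R_AT_ZERO + OHMS_PER_MM * 30.0
    fake_drive = lambda u: print(f"drive command: {u:+.2f}")
    print(position_step(40.0, fake_read, fake_drive))
```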

This work was presented at IROS 2020 which, like many other conferences this year, moved online and became IROS On-Demand. After a no-cost online registration we can watch the 12-minute recorded presentation on this project or any other at the conference. The video includes gems such as an exaggerated animation of stator block deformation to illustrate how a FUSM works, and an example of the position calculation challenge, where the intended circular motion actually resulted in an oval.

Speaking of conferences that have moved online, we have our own Hackaday Remoticon coming up soon!

Continue reading “Flexible Actuators Spring Into Action”

Short Video Recaps A Long Tradition Of Space Hacks

Human spaceflight has always been, and still remains, a risky endeavor. We mitigate risk by being as prepared as we can. Every activity is planned, reviewed, and practiced long before any rocket engines are ignited. But space has a history of not cooperating with plans, and thus there is a corresponding history of hacks to get missions back on track. YouTube space fan [Scott Manley] recaps some of his favorites in How a $2 Toothbrush Saved the ISS and Other Unbelievable Space Hacks.

The introduction explains that this compilation was motivated by the latest International Space Station drama, in which an elusive air leak was finally tracked down. Air leaks are obviously much more worrying in a space station than in, say, a bicycle tire. Thus there exists a wide array of tools to track down leaks, yet none of them could find this one. Reportedly the breakthrough came from an improvised airflow visualization tool: leaves from a cut-open tea bag. Normally small floating particles are forbidden in space because they might end up in troublesome places (eyes, noses, onboard equipment…). Apparently the necessity of the hack outweighed the rules here.

Tea leaves are but the latest in a long line of hacks devised in the course of space missions, because things don’t always go according to the original plan, or even any of the large volume of contingency plans. Solutions have to be cobbled together from resources on hand, because when we’re in space, what we brought is all we have: from directly editing production code during Apollo 14, to a field-built replacement fender for the Apollo 17 Lunar Roving Vehicle (top picture), to the $2 toothbrush pressed into service as a metal debris cleaner. The mission must go on!

Continue reading “Short Video Recaps A Long Tradition Of Space Hacks”

Escape To An Alternate Reality Anywhere With Port-A-Vid

There was a time when only the most expensive televisions could boast crystal clear pixels on a wall-mountable thin screen. What used to be the novelty of the “High Definition Flat Screen Television” is now just “TV”, available everywhere. So as a change of pace from our modern pixel perfection, [Emily Velasco] built the Port-A-Vid as a relic from another timeline.

The centerpiece of any aesthetically focused video project is obviously the screen, and a CRT would be the first choice for a retro theme. Unfortunately, small CRTs have recently become scarce, and a real glass picture tube would not fit within the available space anyhow. Instead, we’re actually looking at a modern LCD sitting behind a big lens to give it an old school appearance.

The lens, harvested from a rear-projection TV, was chosen because it was a good size to replace the dial of a vacuum gauge. The project’s enclosure started life as a Snap-on Tools MT425 but had become just another piece of broken equipment at a salvage yard. The bottom section, formerly a storage bin for hoses and adapters, is now home to the battery and electronics. All original markings were removed from the hinged storage lid, which was converted into the Port-A-Vid control panel.

A single press of the big green button triggers a video to play, randomly chosen from a collection of content [Emily] curated to fit with the aesthetic. We may get a clip from an old educational film, or something shot with a composite video camera. If any computer graphics pop up, they will be primitive vector graphics. This is not the place to seek ultra high definition content.
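The internals aren't spelled out here, but the behavior is simple enough to sketch. The hypothetical Python below assumes a Raspberry Pi-class board with a GPIO pushbutton and a command-line video player such as mpv; the pin number, video directory, and player choice are guesses for illustration, not details from [Emily]'s build.

```python
# Hypothetical sketch of "press the button, play a random clip";
# pin number, paths, and player are assumptions, not project details.
import random
import subprocess
from pathlib import Path

from gpiozero import Button   # assumes a Raspberry Pi-style GPIO library

VIDEO_DIR = Path("/home/pi/videos")   # curated clip collection (assumed path)
button = Button(17)                   # big green button on GPIO17 (assumed)

def play_random_clip():
    clips = list(VIDEO_DIR.glob("*.mp4"))
    if not clips:
        return
    clip = random.choice(clips)
    # Block until the clip finishes so repeated presses don't stack players.
    subprocess.run(["mpv", "--fullscreen", str(clip)], check=False)

while True:
    button.wait_for_press()
    play_random_clip()
```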

As a final nod to common artifacts of electronics history, [Emily] wrote a user’s manual for the Port-A-Vid. Naturally it’s not a downloadable PDF, but a stack of paper stapled together, each page written in the style of electronics manuals of yore and treated with the rough look of a multiple-generation photocopy rumpled with use.

If you have to ask “Why?” it is doubtful any explanation would suffice. This is a trait shared with many other eclectic projects from [Emily]. But if you are delighted by fantastical projects hailing from an imaginary past, [Emily] has also built an ASCII art cartridge for old parallel port printers.

Continue reading “Escape To An Alternate Reality Anywhere With Port-A-Vid”

ExoMy Is A Miniature European Mars Rover With A Friendly Face

Over the past few weeks, a new season of Mars fever kicked off with the launches of three interplanetary missions. And since there’s a sizable overlap between fans of spaceflight and fans of electronics and 3D printing, the European Space Agency released the ExoMy rover for those who want to experience a little bit of Mars from home.

ExoMy’s smiling face and cartoonish proportions are an adaptation of ESA’s Rosalind Franklin (formerly ExoMars) rover which, if 2020 hadn’t turned out to be 2020, would have been on its way to Mars as well. While Rosalind Franklin must wait for the next Mars launch window, we can launch ExoMy missions to our homes right now. Like the real ESA rover, ExoMy has a triple-bogie suspension design, distinctly different from the rocker-bogie design used by NASA JPL’s rover family. Steering all six wheels rather than just four, ExoMy has maneuvering chops visible in a short Instagram video clip (also embedded after the break).

ExoMy’s quoted price of admission is in the range of 250–500 €. Perusing the instructions posted on GitHub, we see an electronics nervous system built around a Raspberry Pi. The published software stack is configured for human remote control, but as it is already running ROS (Robot Operating System), it should be an easy on-ramp for ExoMy builders with the ambition of adding autonomy.
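As a taste of what that on-ramp might look like, here is a hypothetical minimal ROS node in Python that publishes velocity commands at a fixed rate. The /cmd_vel topic and Twist message are common ROS conventions, not necessarily the interfaces ExoMy's published stack actually exposes, so check the GitHub repository before wiring anything up.

```python
#!/usr/bin/env python
# Hypothetical minimal ROS node driving a rover forward in a gentle arc;
# the /cmd_vel topic and Twist message are generic ROS conventions and may
# not match ExoMy's actual interfaces.
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node("exomy_autonomy_sketch")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)          # 10 Hz command loop
    cmd = Twist()
    cmd.linear.x = 0.1             # forward speed (assumed scale)
    cmd.angular.z = 0.2            # turn rate (assumed scale)
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    main()
```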

ExoMy joins the ranks of open source rover designs available to hackers with 3D printing, electronics, and software skills. We recently covered a much larger rover project modeled after Curiosity. Two years ago NASA JPL released an open source rover of their own targeting educators, inspiring this writer’s own Sawppy rover project, which is in turn just one of many projects tagged “Rover” on Hackaday.io. Hackers love rovers!


VR Technology Helps Bring A Galaxy Far, Far Away To Our TV

Virtual reality is usually an isolated individual experience, very different from the shared group experience of a movie screen or even a living room TV. But those worlds of entertainment are more closely intertwined than most audiences are aware. Video game engines have been taking a growing role in film and television production behind the scenes, and now they’re stepping out in front of the camera in a big way in the making of The Mandalorian TV series.

Big in this case is a three-quarters cylindrical LED array 75 ft (23 m) in diameter and 20 ft (6 m) high. But the LEDs covering its walls and ceiling aren’t pointing outwards like some installation for Times Square. This setup, called the Volume, points inward to display background images for camera and crew working within. It’s an immersive LED backdrop and stage environment.

Incorporating projected imagery on stage is a technique going back at least as far as 1933’s King Kong, but it is very limited. Lighting and camera motion have to be tightly constrained to avoid breaking the fragile illusion. More recently, productions have favored green screens replaced with computer imagery in post-production. That removes most camera motion and lighting constraints, but costs a lot of money and time, and it is also more difficult for actors to perform their roles convincingly against big blank slabs of green. The Volume solves all of those problems by putting computer-generated imagery on set, rendered in real time by the Unreal game engine.

Continue reading “VR Technology Helps Bring A Galaxy Far, Far Away To Our TV”