Robots Learning To Understand Their Surroundings

Today it is pretty easy to build a robot with an onboard camera and have fun manually driving through that first-person view. But builders with dreams of autonomy quickly learn there is a lot of work between installing a camera and autonomously executing a “go to chair” command. Fortunately we can draw upon work such as the View Parsing Network by [Bowen Pan], [Jiankai Sun], et al.

When a camera image comes into a computer, it is merely a large array of numbers representing red, green, and blue color values, and our robot has no idea what that image represents. Over the past several years, computer vision researchers have found pretty good solutions for the problems of image classification (“is there a chair?”) and segmentation (“which pixels correspond to the chair?”). While useful for building an online image search engine, this is not quite enough for robot navigation.
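To make the distinction concrete, here is a tiny sketch in Python with NumPy. The frame and the segmentation mask are faked for illustration; nothing here comes from the paper.

    import numpy as np

    # A 480x640 RGB camera frame is just a 3D array of intensity values;
    # the robot "sees" numbers, not objects.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)

    # A segmentation model would map that frame to a per-pixel class label.
    # Here we fake its output: 0 = background, 1 = "chair".
    seg_mask = np.zeros((480, 640), dtype=np.uint8)
    seg_mask[300:460, 200:350] = 1  # pretend a chair was detected here

    # Classification answers "is there a chair?"...
    chair_present = bool((seg_mask == 1).any())
    # ...while segmentation answers "which pixels correspond to the chair?"
    chair_pixels = np.argwhere(seg_mask == 1)

    print(chair_present, len(chair_pixels))  # True 24000

Note that both answers are still in pixel coordinates. Knowing which pixels are “chair” says nothing, by itself, about where the chair sits on the floor.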

A robot needs to translate those pixel coordinates into a real-world layout, and this is the problem the View Parsing Network sets out to solve. Detailed in Cross-view Semantic Segmentation for Sensing Surroundings (DOI: 10.1109/LRA.2020.3004325), the system takes in multiple camera views looking all around the robot. Results of image segmentation are then synthesized into a 2D top-down segmented map of the robot’s surroundings. (“Where is the chair located?”)
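In very rough terms, the pipeline can be sketched as below. This is a hedged Python/PyTorch simplification of the general idea, not the authors’ published architecture: the layer sizes, the mean fusion, and the CrossViewSketch name are all our stand-ins for illustration.

    import torch
    import torch.nn as nn

    class CrossViewSketch(nn.Module):
        def __init__(self, n_classes=10, feat=64, hw=32):
            super().__init__()
            self.encoder = nn.Conv2d(3, feat, 3, padding=1)    # stand-in per-view encoder
            self.view_transform = nn.Linear(hw * hw, hw * hw)  # learned first-person -> top-down remap
            self.classifier = nn.Conv2d(feat, n_classes, 1)    # per-cell semantic scores

        def forward(self, views):                  # views: (batch, n_views, 3, hw, hw)
            b, n, c, h, w = views.shape
            feats = self.encoder(views.reshape(b * n, c, h, w))
            # Re-map each view's spatial layout into a shared top-down frame.
            topdown = self.view_transform(feats.flatten(2)).view(b, n, -1, h, w)
            fused = topdown.mean(dim=1)            # simple fusion across the views
            return self.classifier(fused)          # (batch, n_classes, hw, hw) overhead map

    overhead = CrossViewSketch()(torch.randn(2, 4, 3, 32, 32))
    print(overhead.shape)  # torch.Size([2, 10, 32, 32])

The output grid is a semantic map in the robot’s own top-down frame, which is the representation a path planner can actually use.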

The authors documented how to train a view parsing network in a virtual environment, and described the procedure for transferring the trained network to a physical robot. Today this process demands a significantly higher skill level than “download an Arduino sketch”, but we hope such modules will become more plug-and-play in the future for better and smarter robots.

[IROS 2020 Presentation video (duration 10:51) requires free registration, available until at least Nov. 25th 2020. One-minute summary embedded below.]


Today At Remoticon: Bring-a-Hack

Last Chance Tickets:

General admission tickets for this weekend’s Hackaday Remoticon are only available for two more hours! These are free, but you need to have one to get in on tonight’s Bring-a-Hack.

Today’s Live Events:

The Community Bring-a-Hack meetup for all general admission ticket holders begins today at 16:00 PST. Check the ticketing hub page for your link to the event which is being held on Remo.

Live streaming events open to the public will begin on Saturday at 10:00 PST with opening remarks and Kipp Bradford’s keynote talk. Workshops and the SMD Challenge will live-stream all day, and Alfred Jones will present his keynote at 18:30 PST, followed by the Hackaday Prize Ceremony. Follow our media channels to be notified of all live streams.

This Week In Security: In The Wild, Through Your NAT, And Brave

Most of the stories from this week are vulnerabilities dropped before fixes are available, many of them actively being exploited. Strap yourselves in!

Windows Kernel Crypto

The first is CVE-2020-17087, an issue in the Windows Kernel Cryptography Driver. The vulnerable system calls are accessible from unprivileged user-space, and potentially even from inside sandboxed environments. The resulting buffer overflow can result in arbitrary code execution in the kernel context, meaning this is a quick jump to root-level control over a victim system.

What exactly is the code flaw here that’s being attacked? It’s in a bit of buffer allocation logic, inside a binary-to-hex conversion routine. The function accepts an unsigned short length argument. That value is used to calculate the output buffer size by multiplying it by six, storing the result in another unsigned short. See the problem? A sufficiently large value will roll over, and the output buffer size will be too small. It’s an integer overflow that leads to a buffer overflow.
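To see the rollover in action, here is a quick model of that size calculation. The real driver is C; this Python just reproduces the 16-bit arithmetic, and the example length is ours, not taken from the report.

    # An unsigned short holds 0..65535, so length * 6 is effectively
    # computed modulo 65536 (2**16).
    def output_buffer_size(length):
        return (length * 6) & 0xFFFF  # unsigned short wraps here

    needed = 0x3000 * 6                      # bytes the conversion will write
    allocated = output_buffer_size(0x3000)   # bytes actually allocated
    print(hex(needed), hex(allocated))       # 0x12000 0x2000

    # 0x3000 * 6 = 0x12000, but the unsigned short keeps only 0x2000:
    # roughly an 8 KB buffer for a 72 KB write, and there's the overflow.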

Because the problem is being actively exploited, the report has been made public just seven days after discovery. The flaw is still unpatched in Windows 10 as of this writing. It also seems to be present as far back as Windows 7, which will likely not receive a fix, being out of support. [Editor’s snarky note: Thanks, closed-source software.]

Lunar Ark Boldly Goes

[Sebastian and Karl-Johan] are two award-winning Danish space architects who are subjecting themselves to harsh, seemingly uninhabitable conditions for science. The pair set out to build a lunar base that could land with the manned Moon missions in 2024. And like any good engineering problem, what good is a solution without testing? So the pair have placed their habitat in a Moon-analogue environment, where they are staying for two months. They want to really feel the remoteness, the bitter cold, and the fatigue of actually being on the Moon. So far they are about halfway through their journey and expect to return home in December 2020.

When asking themselves where on Earth is most like the Moon, they came up with Moriusaq, Greenland. It’s cold, remote, in constant sunlight this time of year, and a vast white monochrome landscape, just like the Moon. The first Moon settlement missions are expected to be at the lunar South Pole, also known as the Peak of Eternal Light.

The habitat itself is a testament to the duo’s ingenuity. The whole structure folds to fit the tight space and weight requirements of rockets. Taking up 2.9 m³ (102 ft³) when stored, it expands 560% in volume to 17.2 m³ (607 ft³). In Greenland, the structure needs to withstand -30 °C (-22 °F) and 90 km/h winds.

Because the South Pole is in constant sunlight, the temperature varies much less there than on the rest of the Moon, which makes Greenland a very good analogue temperature-wise. The foldable skin is covered in solar panels, both on the top and the bottom. The highly reflective nature of the Moon’s surface makes it easy to capture the light bouncing up onto the bottom of the habitat.

Several other bits of technology have been included onboard, like a 3D printer, a circadian light stimulation system, an algae reactor, and a weather simulation. Since both the Moon and Greenland are in constant sunlight, the pod helps regulate the circadian rhythms of the occupants by changing the hue and brightness throughout the day. The weather simulation tries to break up the monotony of space by introducing weather like a stormy day or rainbow colours.
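As a rough illustration of the circadian lighting idea, the sort of schedule such a system might follow can be sketched in a few lines. None of these numbers come from [Sebastian and Karl-Johan]'s build; the curve and values are our guesses.

    import math

    def circadian_light(hour):
        """Map hour-of-day (0-24) to (brightness 0..1, colour temperature in K)."""
        # Smooth day/night curve peaking at 13:00, clamped to zero overnight.
        daylight = max(0.0, math.cos((hour - 13.0) / 24.0 * 2.0 * math.pi))
        brightness = 0.1 + 0.9 * daylight     # never fully dark inside the pod
        colour_temp = 2700 + 3800 * daylight  # warm 2700 K nights, cool 6500 K midday
        return brightness, round(colour_temp)

    for h in (3, 9, 13, 21):
        print(h, circadian_light(h))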

Their expedition is still ongoing and they post daily mission updates. While some might call their foray into the unknown madness, we call it bold. Currently, NASA is planning its Artemis mission in 2024, and we hope that the lessons learned from Lunark and other experiments culminate in a better experience for all astronauts.

Walmart Gives Up On Stock-Checking Robots

We’ve seen The Jetsons, Star Wars, and Silent Running. In the future, all the menial jobs will be done by robots. But Walmart is reversing plans to have six-foot-tall robots scan store shelves to check stock levels. The robots, from a company called Bossa Nova Robotics, apparently worked well enough, and Walmart had promoted the idea at many investor-related events, promising that robot workers would reduce labor costs while better stock levels would increase sales.

So why did the retail giant say no to these ‘droids? Apparently, they found better ways to check stock and, according to a quote in the Wall Street Journal’s article about the decision, shoppers reacted negatively to sharing the aisle with the roving machines.

The robots didn’t just check stock. They could also check prices and find misplaced items. You can see a promotional video about the device below.