Dexter The Companion Bot Wants To Give You Five

The main character of Dexter’s Laboratory is a genius child inventor who inspired a lot of fans to become makers and inventors in their own right. [Jorvon Moss], a.k.a. [Odd_Jayy], counts himself as one of them. A serial companion bot builder, he is constantly evolving his projects. But every once in a while he pauses long enough to share construction details, like how we can build our own robot monkey companion bot Dexter, named after the cartoon.

A slightly earlier iteration of Dexter attended Hackaday Superconference 2019. Perched on [Odd_Jayy]’s back, Dexter joined in a presentation on companion bots. We’ve been fans of his work since Asi the robot spider, and several more robots have been posted online since. Recently at Virtually Maker Faire 2020, he joined [Alex Glow] and [Angela Sheehan] to talk about their respective experiences Making Companion Bots.

[Image: Sketchbook pages with Dexter concept drawings]

[Odd_Jayy] starts with sketches to explore how a project will look and act, striving to do something new and interesting every time. One of Dexter’s novelties is adding interactivity to companion bots. Historically people couldn’t do much more than just look at a companion bot, but Dexter can high-five fans! Sometimes the excited robot monkey ends up slapping [Odd_Jayy] instead, but they’re working through issues in their relationship. Everyone is invited to see rapid cycles of iterative improvements on Twitter and Instagram. As of this writing, a mini Dexter is underway with design elements similar to the “Doc Eyes” goggle project running in parallel. It’s always fun to watch these creations evolve. And by openly sharing his projects both online and off, [Odd_Jayy] is certainly doing his part to inspire the next wave of makers and inventors.

Displaying HTML Interfaces And Managing Network Nodes… In Space!

The touchscreen interface aboard the SpaceX Crew Dragon is just one of its many differences from past space vehicles, but those big screens make an outsized visual impact. Gone are panels filled with needle gauges and endless rows of toggle switches. It looks much like web interaction on everyday tablets, and for good reason: what we see is HTML and JavaScript rendered by the same software core underlying Google’s Chrome browser. This and many other details were covered in a Reddit Ask Me Anything with members of the SpaceX software team.

Various outlets have mentioned Chromium in this context, but without answering the obvious follow-up question: how deep does Chromium go? In this AMA we learn it does not go very deep at all. Chromium is only the UI rendering engine; the fault-tolerant flight software lives elsewhere. Components such as Chromium are isolated to help keep system behavior predictable, so a frozen tab won’t crash the capsule. Somewhat surprisingly they don’t use a specialized real-time operating system, but instead a lightly customized Linux built with PREEMPT_RT patches for better real-time behavior.
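For a taste of what “better real-time behavior” means in practice, here is a minimal sketch, ours and not SpaceX’s, of a Linux process asking the kernel for a real-time scheduling class. This is the sort of request a PREEMPT_RT kernel is built to honor with low, predictable latency, and it needs root (or CAP_SYS_NICE) to succeed.

# Minimal illustrative sketch, not SpaceX flight code: put the calling
# process under the SCHED_FIFO real-time policy on a Linux machine.
import os

def go_realtime(priority: int = 50) -> None:
    """Ask the kernel to schedule this process as SCHED_FIFO at `priority`."""
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))  # 0 = self

if __name__ == "__main__":
    try:
        go_realtime()
        print("now running under policy", os.sched_getscheduler(0))
    except PermissionError:
        print("need root or CAP_SYS_NICE to switch to SCHED_FIFO")

On a stock kernel this only changes the scheduling policy; the PREEMPT_RT patches are what make the latency of such a task predictable enough to trust.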

In addition to the Falcon rocket and Dragon capsule, this AMA also covered software work for Starlink, which offered interesting contrasts in design tradeoffs. Because there are so many satellites (and even more being launched), the loss of an individual spacecraft is not a mission failure. This gives them elbow room for rapid iteration, treating the constellation more like racks of servers in a datacenter than a typical satellite operation. Where the Crew Dragon code has been frozen for several months, Starlink code is updated rapidly: quickly enough that by the time newly launched Starlink satellites reach orbit, their code has usually fallen behind the rest of the constellation.

Finally, there are a few scattered answers outside of space-bound code. Their ground support displays (visible in the Hawthorne mission control room) are built with LabVIEW. They also confirmed that, contrary to some claims, the SpaceX ISS docking simulator isn’t actually running the same code as Crew Dragon. Ah well.

Anyone interested in what it takes to write software for space would enjoy reading through these and other details in the AMA. And since it had a convenient side effect of serving as a recruiting event, there are plenty of invitations to apply if anyone has ambitions to join the team. We certainly can’t deny the attraction of helping to write the next chapter in human spaceflight.

[Photo credit: SpaceX]

Assemble Your (Virtual) Robotic Underground Exploration Team

It’s amazing how many things have managed to move online in recent weeks, many with the beneficial side effect of eliminating travel, making them more accessible to everyone around the world. Some events, though, had a virtual track before it was cool, among them the DARPA Subterranean Challenge (SubT) robotics competition. Recent additions to their “Hello World” tutorials (with the promise of more to come) have continued to lower the barrier of entry for aspiring roboticists.

We all love watching physical robots explore the real world, which is why SubT’s “Systems Track” gets most of the attention. But such participation is necessarily restricted to people who have the resources to build and transport bulky hardware to the competition site, which is just a tiny subset of all the brilliant minds who could contribute. Hence the “Virtual Track”, which is accessible to anyone with a computer that meets the requirements (64-bit Ubuntu 18 with an NVIDIA GPU). The tutorials help get us up and running on SubT’s virtual testbed, which continues to evolve. With every round, the organizers work to bring the virtual and physical worlds closer together. During the recent Urban Circuit, they made high resolution scans of both the competition course and the participating robots.
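As a flavor of what driving a robot in the virtual testbed looks like, below is a hypothetical teleoperation sketch in Python using ROS. The robot name “X1” and its cmd_vel topic are assumptions patterned after the public tutorials, so adjust both to match your own team configuration.

# Hypothetical SubT virtual testbed teleop sketch: stream a slow forward
# velocity command to a simulated robot. Topic and robot name are assumed.
import rospy
from geometry_msgs.msg import Twist

def creep_forward():
    rospy.init_node("subt_teleop_sketch")
    pub = rospy.Publisher("/X1/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)      # publish at 10 Hz
    cmd = Twist()
    cmd.linear.x = 0.5         # crawl forward at 0.5 m/s
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    creep_forward()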

There’s a lot of other traffic in the various SubT code repositories. Motivated by Bitbucket sunsetting its Mercurial support, SubT is moving from Bitbucket to GitHub, doing some housecleaning along the way. Together with the newly added tutorials, this is a great time to dive in and see if you want to assemble a team (both of human collaborators and virtual robots) to join the next round of virtual SubT. But if you prefer to stay an observer of the physical world, enjoy this writeup with many fun details on Systems Track robots.

A More Open Raspberry Pi Camera Stack With Libcamera

As open as the Raspberry Pi Foundation has been about their beloved products, they would be the first to admit there’s always more work to be done: Getting a Pi up and running still requires many closed proprietary components. But the foundation works to chip away at it bit by bit, and one of the latest steps is the release of a camera stack built on libcamera.

Most Linux applications interact with the camera via V4L2 or a similar API. These established interfaces were designed back when camera control was limited to a few simple hardware settings. Today we have far more sophisticated computational techniques for digital photography and video. Algorithms have outgrown dedicated hardware, transforming into software modules that take advantage of CPU and/or GPU processing. In practice, this trend has meant bigger and bigger opaque, monolithic pieces of proprietary code, each one a mix of “secret sauce” algorithms commingled with common overhead code wastefully duplicated for each new blob.
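To picture that “old world” of a few simple hardware settings, here is a rough sketch of an application reaching the camera through the V4L2 path, in this case via OpenCV’s V4L2 backend. The device index and which controls a particular camera actually honors are assumptions.

# Illustrative only: grab a frame over V4L2 and poke a couple of the simple
# hardware controls that legacy camera APIs were designed around.
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)    # /dev/video0 through the V4L2 backend
cap.set(cv2.CAP_PROP_BRIGHTNESS, 0.6)      # a handful of knobs like this...
cap.set(cv2.CAP_PROP_EXPOSURE, -5)         # ...was about all the control available
ok, frame = cap.read()
if ok:
    cv2.imwrite("snapshot.jpg", frame)
cap.release()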

We expect camera makers will continue to devise proprietary specialties as they seek a competitive advantage. Fortunately, some of them see benefit in an open-source framework to help break up those monoliths into more manageable pieces, letting them focus on just their own specialized parts. Leveraging something like libcamera for the remainder can reduce their software development workload, leading to faster time to market, lower support costs, and associated benefits to the bottom line that motivate adoption by corporations.

But like every new interface design born of a grand vision, there’s a chicken-and-egg problem. Application developers won’t consume it if there’s no hardware, and hardware manufacturers won’t implement it if no applications use it. For the consumer side, libcamera has modules to interoperate with V4L2 and other popular interfaces. For the hardware side, it helps to have a company with wide reach that believes in opening what it can and isolating the pieces it can’t. This is where the Raspberry Pi Foundation found a fit.

The initial release doesn’t support their new High Quality Camera Module, though that is promised soon. In the short term there is still a lot of work to be done, but we are excited about the long term possibilities. If libcamera can indeed lower the barrier to entry, it would encourage innovation and expand the set of supported cameras beyond the official list. We certainly have no shortage of offbeat camera sensor ideas around here, from a 1-kilopixel camera sensor to a decapped DRAM chip.

[via Hackster.io]

Under The Hood Of Second Reality, PC Demoscene Landmark

In 1993, IBM PCs and clones were a significant but not yet dominant fraction of the home computer market, and they were saddled with the stigma of boring business machines: lacking the Apple Macintosh’s polish, unable to match the Apple II’s software library, and missing the Commodore Amiga’s audio/visual capabilities. The Amiga was the default platform of choice for impressive demos, but some demoscene hackers saw the PC’s potential to blow some minds. [Future Crew] was such a team, and their Second Reality accomplished exactly that. Anyone who remembers it, or who is interested in a trip back in time, should take [Fabien Sanglard]’s tour of the Second Reality source code.

We recently covered another impressive PC demo executed in just 256 bytes, for which several commenters were thankful the author shared how it was done. Source code for demos isn’t necessarily released: the primary objective is to put on a show, and some authors want to keep a few tricks secret. [Future Crew] didn’t release the source for Second Reality until the 20th anniversary of its premiere, by which time it was difficult to run on a modern PC. Technically it is supported by DOSBox, but it’s rife with glitches, as Second Reality uses so many nonstandard tricks. The easiest way to revisit the nostalgia is via video captures posted to YouTube (one is embedded below the break).

A PC from 1993 is primitive by modern standards. It was well before the age of GPUs, and in fact before floating point hardware was commonplace: Intel’s 80387 math co-processor was a separate add-on to the 80386 CPU. With the kind of hardware at our disposal today, it can be hard to understand what a technical achievement Second Reality was. But PC users of the time understood, sharing it and dropping jaws well beyond the demoscene community. Its spread was as close to “going viral” as possible when “high speed data” meant anything faster than 2400 baud.
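Demos of the era leaned on integer-only fixed-point math instead of an FPU. The snippet below is purely illustrative of that technique, not code from Second Reality: a 16.16 fixed-point multiply done with nothing but integer operations.

# Illustrative 16.16 fixed-point helpers, the kind of integer math 1993 demos
# used when a floating-point unit could not be assumed. Not Second Reality code.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS                   # 1.0 in 16.16 fixed point

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def fixed_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS        # product has 32 fractional bits, shift back

def to_float(a: int) -> float:
    return a / ONE

# 0.75 * 2.5 = 1.875, computed entirely with integers
print(to_float(fixed_mul(to_fixed(0.75), to_fixed(2.5))))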

Many members of [Future Crew] went on to make an impact elsewhere in the industry, and their influence spread far and wide. But PC graphics wasn’t done blowing minds in 1993 just yet… December 10th of that year would see the public shareware release of a little thing called Doom.


Pouring Creativity Into Musical Upcycling Of Plastic Bottles

Convenient and inexpensive, plastic beverage bottles are ubiquitous in modern society. Many of us have a collection of empties at home. We are encouraged to reduce, reuse, and recycle such plastic products, and [Kaboom Percussion] playing Disney melodies on their Bottlephone 2.0 (video embedded below) showcases an outstanding melodic creation for the “reuse” column.

Details of this project are outlined in a separate “How we made it” video (also embedded below). Caps of empty bottles are fitted with commodity TR414 air valves, and the pitch of each bottle is tuned by adjusting its internal pressure. Different beverage brands were evaluated for the pleasing tone of their bottles, with the winners listed. With pressure levels going up to 70 psi, changes in temperature and inevitable air leakage make keeping this instrument in tune a never-ending task. But that is a relatively simple mechanical procedure. Even more impressive on display is the musical performance talent of this team, assisted by some creative video editing. Sadly for us, such skill does not come in a bottle. Alcohol only makes us believe we are skilled without improving actual skill.

But that’s OK, this is Hackaday where we thrive on building machines to perform for us. We hope it won’t be long before a MIDI-controlled variant is built by someone, perhaps incorporating an air compressor for self-tuning capabilities. We’ve featured bottles as musical instruments before, but usually as wind instruments like this bottle organ or the fipple. This is a percussion instrument more along the lines of the wine glass organ. It’s great to see different combinations explored, and we are certain there are more yet to come.
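For anyone tempted by that MIDI-controlled idea, a hypothetical starting point might look like the sketch below. The mido library and its message fields are real; the note-to-bottle mapping and the fire_striker() solenoid stand-in are placeholders for hardware that does not exist yet.

# Hypothetical MIDI front end for a Bottlephone: map incoming note_on messages
# to bottle strikers. fire_striker() stands in for real solenoid/GPIO control.
import mido

NOTE_TO_BOTTLE = {60: 0, 62: 1, 64: 2, 65: 3, 67: 4}   # one bottle per note, tune to taste

def fire_striker(bottle: int) -> None:
    print(f"strike bottle {bottle}")    # replace with an actual solenoid driver

with mido.open_input() as port:         # default system MIDI input
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0:
            bottle = NOTE_TO_BOTTLE.get(msg.note)
            if bottle is not None:
                fire_striker(bottle)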


Behind The Scenes Of Folding@Home: How Do You Fight A Virus With Distributed Computing?

A great big Thank You to everyone who answered the call to participate in Folding@Home, helping to understand the protein interactions of the SARS-CoV-2 virus that causes COVID-19. Some members of the FAH research team hosted an AMA (Ask Me Anything) session on Reddit to provide us with behind-the-scenes details. Unsurprisingly, the top two topics were “Why isn’t my computer doing anything?” and “What does this actually accomplish?”

The first is easier to answer. Thanks to people spreading the word — like the amazing growth of Team Hackaday — there has been a huge infusion of new participants. We could see this happening on the leaderboards, but in this AMA we have numbers direct from the source. Before this month there were roughly thirty thousand regular contributors. Since then, several hundred thousand more have started pitching in. This has overwhelmed their server infrastructure and resulted in what’s been termed a friendly-fire DDoS attack.

The most succinct information was posted by a moderator of the Folding@Home support forum.

Here’s a summary of the current Folding@Home situation:
* We know about the work unit shortage
* It’s happening because of an approximately 20x increase in demand
* We are working on it and hope to have a solution very soon.
* Keep your machines running, they will eventually fold on their own.
* Every time we double our server resources, the number of Donors trying to help goes up by a factor of 4, outstripping whatever we do.

Why don’t they just buy more servers?

The answer can be found in the Folding@Home donation FAQ. Most of their research grants have restrictions on how that funding is spent. These restrictions typically exclude capital equipment and infrastructure spending, meaning researchers can’t “just” buy more servers. Fortunately they are optimistic that this recent fame has also attracted attention from donors with the right resources to help. As of this writing, their backend infrastructure has grown, though it has not yet caught up with the flood. They’re still working on it, so hang tight!

Computing hardware aside, there are human limitations on both the input and output sides of this distributed supercomputer. Folding@Home needs field experts to put together the work units sent out to our computers, and the same expertise is required to review and interpret our submitted results. The good news is that our contribution has sped up their iteration cycle tremendously. Results that used to take weeks or months now return in days, informing where the next set of work units should investigate.
