As carnivorous plants, Venus flytraps have always been a fascinating subject of study. One of their many mysteries is how they differentiate an insect visit from less nutritious stimuli such as a windblown pebble. Now scientists are one step closer to deciphering the underlying mechanism, assisted by a new ability to visualize calcium changes in real time.
Calcium has long been suspected to play an important part in a Venus flytrap’s close/no-close decision process, but scientists couldn’t verify their hypothesis before now. Standard chemical tests for calcium would require cutting the plant apart, which would only yield a static snapshot. The software analogy would be killing a process for a memory dump while being unable to attach a debugger at runtime. There were tantalizing hints of a biological calcium-based analog computer at work, but Mother Nature had no reason to evolve JTAG test points on it.
Lacking in-circuit debug headers, scientists turned to the next best thing: adding diagnostic indicator lights. But instead of blinking LEDs, genes were added to produce a protein that glows in the presence of calcium. Once the modification succeeded, researchers working with the engineered plants got immediate visual feedback: they could watch calcium levels change and propagate in response to various stimuli over different time periods, confirming that the trap snaps shut only in response to patterns of stimuli that push calcium levels beyond a threshold.
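The threshold behavior described above resembles a leaky integrator: each touch of a trigger hair adds to the calcium signal, the signal decays over time, and the trap fires only if a second touch arrives before the first contribution fades. Here is a toy model of that idea; the constants are arbitrary illustrations, not measured values from the plant.

```python
# Toy leaky-integrator model of the flytrap's two-touch trigger.
# All constants are illustrative, not measured plant biology.

DECAY_PER_SECOND = 0.5   # fraction of the calcium signal lost each second
SPIKE = 1.0              # calcium contribution of one trigger-hair touch
THRESHOLD = 1.5          # level that must be exceeded for the trap to snap

def trap_fires(touch_times):
    """Return True if the calcium level ever exceeds THRESHOLD."""
    level = 0.0
    last_t = 0.0
    for t in sorted(touch_times):
        # Exponential decay of the signal since the previous touch.
        level *= (1.0 - DECAY_PER_SECOND) ** (t - last_t)
        level += SPIKE
        last_t = t
        if level > THRESHOLD:
            return True
    return False

print(trap_fires([0.0, 0.5]))    # two quick touches: True, the trap snaps
print(trap_fires([0.0, 30.0]))   # touches far apart: False, signal decayed
```

Note that a single touch can never cross the threshold in this model, which matches the observed requirement for repeated stimulation.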
With these glowing proteins in place, researchers found that calcium explains some of the behavior but is not the whole picture. There’s something else, suspected to be a fast electrical network, that senses prey movement and triggers calcium release. That will be something to dig into, but at least we have more experience working with electrical impulses, and not just in plants, either.
This holiday season, the video game industry hype machine is focused on building excitement for new PlayStation and Xbox consoles. Ten years ago, a similar chorus of hype reached a crescendo with the release of Xbox Kinect, promising to revolutionize how we play. That vision never panned out, but as [Daniel Cooper] of Engadget pointed out in a Kinect retrospective, it premiered consumer technologies that impacted fields far beyond gaming.
The Kinect has since been withdrawn from the gaming market because, as it turns out, gamers are quite content with handheld controllers. This year’s new controllers for a PlayStation or Xbox would be immediately familiar to gamers from ten years ago. Even Nintendo, whose Wii is frequently credited as the motivation for Microsoft to develop the Kinect, has arguably taken a step back with the Joy-Cons of its Switch.
But the Kinect’s success at bringing a depth camera to consumer price levels paved the way to exploring many ideas that were previously impossible. The flurry of enthusiastic Kinect hacking proved there is a market for depth camera peripherals, leading to plug-and-play devices like the Intel RealSense that make depth-sensing projects easier. The original PrimeSense technology has since been simplified and miniaturized into the Face ID unlocking Apple phones. The Kinect itself found another job with Microsoft’s HoloLens AR headset. And let’s not forget the upcoming wave of autonomous cars and drones, many of which will see their worlds via depth sensors of some kind. Some might even be equipped with the latest sensor to wear the Kinect name.
Inside the Kinect was also one of the earliest microphone arrays sold to consumers, enabling it to figure out which direction a voice is coming from and isolate it from other noises in the room. Such technology was previously the exclusive domain of expensive corporate conference-room speakerphones, but now it forms the core of inexpensive home assistants like the Amazon Echo Dot, raising the bar so much that hacks needed many more microphones just to stand out.
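The direction-finding trick rests on tiny arrival-time differences between microphones. The Kinect’s actual processing is proprietary, but the textbook delay-and-sum idea can be sketched with just two microphones: cross-correlate the channels to find the lag, then convert that lag to a bearing. All parameters below are assumptions for illustration.

```python
# Two-microphone direction estimate via cross-correlation.
# Textbook TDOA geometry, not the Kinect's actual pipeline.
import numpy as np

SPEED_OF_SOUND = 343.0   # meters per second
MIC_SPACING = 0.1        # meters between the two microphones (assumed)
SAMPLE_RATE = 48_000     # samples per second (assumed)

def angle_of_arrival(left, right):
    """Bearing in radians from broadside; negative means the sound
    reached the left microphone first."""
    # Find the lag (in samples) where the two channels best align.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    delay = lag / SAMPLE_RATE
    # Far-field geometry: path difference = spacing * sin(angle).
    s = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.arcsin(s)

# Synthetic check: a noise burst reaching the left mic 5 samples early.
rng = np.random.default_rng(0)
burst = rng.standard_normal(1000)
left = np.concatenate([burst, np.zeros(5)])
right = np.concatenate([np.zeros(5), burst])
print(np.degrees(angle_of_arrival(left, right)))  # about -21 degrees
```

Real arrays like the Kinect’s use more microphones and frequency-domain processing, but the underlying geometry is the same.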
With the technology more easily available elsewhere, the attrition of a discontinued device is reflected in the dwindling number of recent Kinect hacks on these pages. We still see a cool project every now and then, though. As the classic sensor bar itself recedes into history, others will take its place to give us depth sensing and smart audio. But for many of us, the Kinect was the ambitious videogame peripheral that gave us our first experience with these technologies.
Anyone who enjoys opening up consumer electronics knows iFixit to be a valuable resource, full of reference pictures and repair procedures to help revive devices and keep them out of electronic waste. Champions of repairability, they’ve been watching in dismay as the quest for thinner and lighter devices also made them harder to fix. But they wanted to cheer a bright spot in this bleak landscape: the increasing use of stretch-release adhesives.
Once upon a time, batteries were designed to be user-replaceable. But that required access mechanisms, electrical connectors, and protective shells around fragile battery cells. Eliminating such overhead allowed slimmer devices, but didn’t change the fact that the battery is still likely to need replacement. We thus entered a dark age where battery pouches were glued into devices and replacement meant fighting clingy blobs and cleaning up sticky residue, something the teardown experts at iFixit are all too familiar with.
This is why they are happy to see pull tabs whenever they peer inside something, for those tabs signify the device was blessed with stretch-release adhesives. All we have to do is apply a firm and steady pull on those tabs to release their hold, leaving no residue behind. We get an overview of how this magic works, with the caveat that implementation details are well into the land of patents and trade secrets.
But we do get tips on how best to remove them, and how to apply new strips, both of which are important to iFixit’s mission. There’s also a detour into their impact on the interior design of a device: the tabs have to be accessible, and they need room to stretch. These concerns aren’t just for design engineers; they also apply to stretch-release adhesives sold to consumers. An advertising push by 3M Command and its competitors has already begun, reminding people that stretch-release adhesive strips are ideal for temporary holiday decorations. They would also work well to hold batteries in our own projects, even if we aren’t their advertised targets.
Our end-of-year gift-giving traditions will mean a new wave of gadgets. And while not all of them will be easily repairable, we’re happy that this tiny bit of repairability exists. Every bit helps to stem the flow of electronics waste.
There comes a moment when our project sees the light of day, publicly presented to people who are curious to see the results of all our hard work, only for it to fail in a spectacularly embarrassing way. This is the dreaded “Demo Curse,” and it recently befell the SIT Acronis Autonomous team. Their Roborace car gained social media infamy when it was seen launching off the starting line and immediately into a wall. A team member has explained what happened.
A few explanations had started circulating, but only in the vague terms of a “steering lock” without much technical detail until this account emerged. Steering lock? You mean like The Club? Well, sort of. While there was no steel bar immobilizing the steering wheel on this car, a software equivalent did take hold within the car’s systems. During initialization, while a human driver was at the controls, one of the modules sent out NaN (Not a Number) instead of a valid numeric value. This had never been seen in testing, and it wreaked havoc at the worst possible time.
A module whose job was to ensure numbers stay within expected bounds said “not a number, not my problem!” That NaN value propagated through to the vehicle’s CAN data bus, which had no defined handling for NaN, so it was arbitrarily translated into a very large number, causing further problems. This cascade of events left the steering control system locked at full right deflection before the algorithm was given permission to start driving. The algorithm desperately tried to steer the car back on course, without effect, for the few short seconds until it met the wall.
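The first failure is easy to reproduce: by IEEE 754 rules, every ordered comparison involving NaN is false, so a naively written range check waves NaN straight through. A generic illustration follows (the team’s actual code isn’t public, and the variable limits are made up):

```python
# Every ordered comparison involving NaN is false, so a naive range
# check passes NaN through unchanged.
import math

STEER_MIN, STEER_MAX = -1.0, 1.0   # made-up steering command limits

def naive_clamp(x):
    """Looks safe, but NaN fails both tests and is returned as-is."""
    if x < STEER_MIN:
        return STEER_MIN
    if x > STEER_MAX:
        return STEER_MAX
    return x

def safe_clamp(x):
    """Reject non-finite values explicitly before range-checking."""
    if not math.isfinite(x):
        return 0.0   # or raise, or fall back to a safe default
    return max(STEER_MIN, min(STEER_MAX, x))

print(naive_clamp(float("nan")))   # nan, slips through to the bus
print(safe_clamp(float("nan")))    # 0.0
```

The same trap exists in nearly every language, since the comparison semantics come from the floating-point standard rather than from any one runtime.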
While embarrassing and not the kind of publicity the Schaffhausen Institute of Technology or their sponsor Acronis was hoping for, the team dug through logs to understand what happened and taught their car to handle NaN properly. Driving a backup car, round two went very well and the team took second place. So they had a happy ending after all. Congratulations! We’re very happy this problem was found and fixed on a closed track and not on public roads.
Multirotor aircraft enjoy many intrinsic advantages, but as machines that fight gravity with brute force, they are not known for energy efficiency. In the interest of stretching range, several air-ground hybrid designs have been explored: flying cars, basically, that run on the ground when being airborne isn’t strictly necessary. But they all share the same challenge: components that make a car work well on the ground are range-sapping dead weight while in the air. [Youming Qin et al.] explored cutting that dead weight as much as possible and came up with Hybrid Aerial-Ground Locomotion with a Single Passive Wheel.
As the paper’s title makes clear, they went full minimalist with this design. Gone are the driveshaft, brakes, steering, even the other wheels. All that remains is a single unpowered wheel bolted to the bottom of their dual-rotor flying machine. Minimizing the impact on flight characteristics is great, but how would that work on the ground? As a tradeoff, the rotors have to keep spinning even while in “ground mode”. They are responsible for keeping the machine upright, and they also have to handle tasks like steering. These and other control algorithm problems had to be sorted out before evaluating whether such a compromised ground vehicle is worth the trouble.
Happily, the result is a resounding “yes”. Even though the rotors have to continue running to do different jobs while on the ground, that still takes far less effort than hovering in the air. Power consumption measurements indicate savings of up to 77%, and there are a lot of potential avenues for tuning still awaiting future exploration. Among them is to better understand interaction with ground effect, which is something we’ve seen enable novel designs. This isn’t exactly the flying car we were promised, but its development will still be interesting to watch among all the other neat ideas under development to keep multirotors in the air longer.
Today it is pretty easy to build a robot with an onboard camera and have fun manually driving it through that first-person view. But builders with dreams of autonomy quickly learn there is a lot of work between camera installation and autonomously executing a “go to chair” command. Fortunately, we can draw upon work such as the View Parsing Network by [Bowen Pan, Jiankai Sun, et al.].
When a camera image comes into a computer, it is merely a large array of numbers representing red, green, and blue color values, and our robot has no idea what that image represents. Over the past few years, computer vision researchers have found pretty good solutions to the problems of image classification (“is there a chair?”) and segmentation (“which pixels correspond to the chair?”). While useful for building an online image search engine, this is not quite enough for robot navigation.
A robot needs to translate those pixel coordinates into a real-world layout, and this is the problem the View Parsing Network offers to solve. Detailed in Cross-view Semantic Segmentation for Sensing Surroundings (DOI 10.1109/LRA.2020.3004325), the system takes in multiple camera views looking all around the robot. Results of image segmentation are then synthesized into a 2D top-down segmented map of the robot’s surroundings. (“Where is the chair located?”)
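To get a feel for the geometry involved, consider the simplest hand-built version of this pixel-to-floor mapping: classic inverse perspective mapping with a pinhole camera at a known height over a flat floor. The View Parsing Network learns this relationship from data instead; the sketch below, with made-up camera parameters, only illustrates the underlying problem.

```python
# Inverse perspective mapping: project a pixel onto a flat floor for a
# pinhole camera at known height and pitch. All parameters are made up.
import numpy as np

FX = FY = 500.0               # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0         # principal point of a 640x480 image
CAM_HEIGHT = 0.5              # meters above the floor (assumed)
CAM_PITCH = np.radians(20.0)  # tilt down from horizontal (assumed)

def pixel_to_ground(u, v):
    """Return (forward, left) floor coordinates in meters, or None if the
    pixel's ray points above the horizon and never reaches the floor."""
    # Ray through the pixel in camera coordinates: x right, y down, z forward.
    ray = np.array([(u - CX) / FX, (v - CY) / FY, 1.0])
    # Pitch the ray down into world coordinates (rotation about the x axis).
    c, s = np.cos(CAM_PITCH), np.sin(CAM_PITCH)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0,   c,   s],
                    [0.0,  -s,   c]])
    ray_w = rot @ ray
    if ray_w[1] <= 0:             # ray not heading downward
        return None
    t = CAM_HEIGHT / ray_w[1]     # scale until the ray reaches the floor
    hit = t * ray_w
    return hit[2], -hit[0]        # forward and left distances

print(pixel_to_ground(320, 240))  # straight ahead, about 1.37 m out
```

The learned approach wins in practice because real floors aren’t perfectly flat and calibration drifts, but the closed-form version shows what “cross-view” synthesis has to accomplish for every pixel of every camera.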
The authors documented how to train a view parsing network in a virtual environment, and described the procedure to transfer a trained network to run on a physical robot. Today this process demands a significantly higher skill level than “download an Arduino sketch,” but we hope such modules will become more plug-and-play in the future, for better and smarter robots.
Many of us have become familiar with the distinctive sound of multirotor toys, frequently punctuated by the sharp sounds of a crash. We then have to pick up the aircraft and repair any damage before the flying fun can resume. This is fine for a toy, but autonomous fliers will need to shake off a crash and get back to work without human intervention. [Zha et al.] of UC Berkeley’s HiPeRLab have invented a resilient design to do so.
We’ve seen increased durability from flexible frames, but those left the propellers largely exposed. Protective bumpers and cages are not new, either, but this icosahedron (twenty-sided) tensegrity structure is far more durable than the norm. Tests verified it can survive impact with a concrete wall at a speed of 6.5 meters per second. Tensegrity is a lot of fun to play with, letting us build intuition-defying structures, and here the tensegrity elements dissipate impact energy, preventing damage to fragile components like propellers and electronics.
But surviving an impact and falling to the ground in one piece is not enough. For independent operation, the craft needs to be able to get itself back in the air. Fortunately, the brains of this quadcopter have been taught the geometry of an icosahedron. Starting from the face it landed on, it can autonomously devise a plan to flip itself upright by applying bursts of power to select propeller motors, rotating itself face by face and working its way to an upright orientation for takeoff, at which point it is back in business.
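That face-by-face plan can be framed as a shortest-path search over the icosahedron’s face-adjacency graph, where each triangular face borders exactly three others. The sketch below is our own reconstruction of the idea, not HiPeRLab’s code: a breadth-first search from the landing face to the desired takeoff face.

```python
# Self-righting as graph search: find the shortest sequence of face-to-face
# rolls on an icosahedron. Our own reconstruction, not HiPeRLab's code.
from collections import deque

# The 20 triangular faces of an icosahedron as triples of vertex indices.
FACES = [
    (0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
    (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
    (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
    (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1),
]

def neighbors(f):
    """Indices of faces sharing an edge (two vertices) with face f."""
    return [g for g in range(len(FACES))
            if g != f and len(set(FACES[f]) & set(FACES[g])) == 2]

def flip_plan(start, goal):
    """Breadth-first search for the shortest roll sequence between faces."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Plan from a hypothetical landing face (0) to a takeoff face (12).
print(flip_plan(0, 12))
```

Since the face-adjacency graph of an icosahedron is the dodecahedral graph, whose diameter is five, any landing orientation is at most five rolls away from upright.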
We have a long way to go before autonomous drone robots can operate safely and reliably. Right now the easy answer is to fly slowly, but that also drastically cuts into efficiency and effectiveness. Having flying robots that are resilient against flying mistakes at speed, and can also recover from those mistakes, will be very useful in exploration of aerial autonomy.