Robotic Skin Sees When (and How) You’re Touching It

Cameras are getting less and less conspicuous. Now they’re hiding under the skin of robots.

A team of researchers from ETH Zurich in Switzerland has recently created a multi-camera optical tactile sensor that monitors contact with its surroundings by measuring the distribution of forces applied to its surface. The sensor uses a stack-up of cameras, LEDs, and three layers of silicone to optically detect any disturbance of the skin.

The scheme is modular; this example uses four cameras but can be scaled up from there. During manufacture, the camera and LED circuit boards are placed first, and a layer of firm silicone is poured over them to a thickness of about 5 mm. Next comes a 2 mm layer doped with spherical particles, followed by a final 1.5 mm layer of black silicone. The cameras track the particles as they move and use that information to infer the deformation of the material and the force applied to it. The sensor is also able to reconstruct the forces causing the deformation and build a contact force distribution. The demo uses fairly inexpensive hardware: Raspberry Pi cameras monitored by an NVIDIA Jetson Nano Developer Kit, which together provide about 65,000 pixels of resolution.
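
To get a feel for the idea, here is a minimal sketch (not the team's actual pipeline) of how tracked particle positions might be turned into a rough force estimate. The pixel scale and spring constant are made-up calibration values:

```python
import numpy as np

# Reference (undeformed) particle positions in the image plane, in pixels.
# In the real sensor these come from the cameras watching the doped silicone layer.
reference = np.array([[120.0, 80.0], [200.0, 85.0], [160.0, 150.0]])

# Positions of the same particles after the skin is pressed.
deformed = np.array([[122.5, 83.0], [201.0, 86.5], [161.0, 155.0]])

# Per-particle displacement vectors, in pixels.
displacement = deformed - reference

# Hypothetical calibration: millimetres per pixel, and an effective
# spring constant for the silicone stack (newtons per millimetre).
MM_PER_PIXEL = 0.05
K_EFFECTIVE = 2.0

# Treat each particle as a sample of the local deformation and estimate
# the normal force as stiffness times the mean displacement magnitude.
mean_disp_mm = np.linalg.norm(displacement, axis=1).mean() * MM_PER_PIXEL
force_estimate = K_EFFECTIVE * mean_disp_mm

print(f"Mean displacement: {mean_disp_mm:.3f} mm")
print(f"Rough normal force estimate: {force_estimate:.3f} N")
```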

Apart from providing more information about the forces applied to a surface, the sensor also has a larger contact surface and is thinner than other camera-based systems, since it doesn't require any reflective components. It regularly recalibrates itself using a convolutional neural network pre-trained with data from three cameras and then updated with data from all four. Possible future applications include soft robotics and touch-based sensing improved with the aid of computer vision algorithms.
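
The team's training setup is considerably more involved, but as a toy illustration of the pre-train-then-update idea, the sketch below defines a small convolutional network that maps a camera frame to a coarse force grid, pre-trains it on synthetic stand-ins for three-camera data, and then runs a shorter fine-tuning pass standing in for the four-camera recalibration. Every shape, dataset, and hyperparameter here is a placeholder:

```python
import torch
import torch.nn as nn

# Toy network: maps a grayscale camera frame to a coarse grid of normal forces.
class ForceNet(nn.Module):
    def __init__(self, grid=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(grid),
        )
        self.head = nn.Conv2d(32, 1, 1)  # one force value per grid cell

    def forward(self, x):
        return self.head(self.features(x))

def train(model, images, targets, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        opt.step()

model = ForceNet()

# "Pre-training": synthetic stand-ins for labelled frames from three cameras.
pretrain_imgs = torch.randn(48, 1, 64, 64)
pretrain_forces = torch.randn(48, 1, 8, 8)
train(model, pretrain_imgs, pretrain_forces)

# "Recalibration": a smaller, gentler update pass with frames from all four cameras.
update_imgs = torch.randn(16, 1, 64, 64)
update_forces = torch.randn(16, 1, 8, 8)
train(model, update_imgs, update_forces, epochs=2, lr=1e-4)
```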

While self-aware robotic skins may not be on the market quite so soon, this certainly opens the possibility for robots that can detect when too much force is being applied to their structures: the machine equivalent of pain.


Incredibly Tiny RF Antennas For Practical Nanotech Radios

Researchers may have created the smallest-ever radio-frequency antennas, a development that should be of interest to any nanotechnology enthusiast. A group of scientists from Korea published a paper in ACS Nano that details the fabrication of a two-dimensional radio-frequency antenna for wearable applications. Most antennas are made from metals like aluminum, copper, or steel, which are too thick for nanotechnology applications, even in the wearables space. The newly created antenna instead uses metallic niobium diselenide (NbSe2) to create a monopole patch RF antenna. Even with its sub-micrometer thickness (less than 1/100 the width of a strand of human hair), it functions effectively.

The metallic niobium atoms are sandwiched between two layers of selenium atoms to create the incredibly thin 2D material. This was accomplished by spray-coating layers of the NbSe2 nanosheets onto a plastic substrate. A 10 mm x 10 mm patch of the material performed with a radiation efficiency of 70.6%, propagating RF signals in all directions. Changing the length of the antenna allowed its frequency to be tuned between 2.01 and 2.80 GHz, a range that includes the frequencies used for Bluetooth and WiFi connectivity.
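
The link between antenna length and resonant frequency can be sanity-checked with a first-order estimate. The snippet below uses the free-space quarter-wave monopole approximation, which ignores the substrate and the patch geometry the paper actually uses, so the lengths are only illustrative:

```python
# First-order estimate of monopole length vs. resonant frequency.
# Real antennas on a substrate resonate at shorter physical lengths
# because of the effective dielectric constant, so treat this as a
# back-of-the-envelope check only.
C = 3.0e8  # speed of light, m/s

def quarter_wave_length_mm(freq_hz: float) -> float:
    """Physical length of an ideal quarter-wave monopole in free space."""
    return C / (4.0 * freq_hz) * 1000.0

for freq_ghz in (2.01, 2.45, 2.80):
    length = quarter_wave_length_mm(freq_ghz * 1e9)
    print(f"{freq_ghz:.2f} GHz -> ~{length:.1f} mm quarter-wave monopole")
```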

Within the ever-shrinking realm of sensors for wearable technologies, there is sure to be a place for tiny antennas as well.

[Thanks Qes for the tip!]

Qantas’ Research Flight Travels 115% Of Range With Undercrowded Cabin

Long-haul flights can be a real pain when you’re trying to get around the world. Typically, they’re achieved by including a stop along the way, with the layover forcing passengers to deplane and kill time before continuing the flight. As planes have improved over the years, airlines have begun to introduce more direct flights where possible, negating this frustration.

Australian flag carrier Qantas is at the forefront of this push, recently attempting a direct flight from New York to Sydney. This required careful planning and preparation, and the research flight is intended to be a trial run ahead of future commercial operations. How did they keep the plane, and the passengers, in the air for this extremely long haul? The short answer is that they cheated by carrying no cargo and pampering a passenger cabin that was 85% empty. Yet they plan to leverage what they learn to begin operating non-stop passenger flights of more than 10,000 miles, besting the current record by 10%, as soon as four years from now.

An Algorithm For De-Biasing AI Systems

A fundamental truth about AI systems is that training the system with biased data creates biased results. This can be especially dangerous when the systems are being used to predict crime or select sentences for criminals, since they can hinge on unrelated traits such as race or gender to make determinations.

A group of researchers from the Massachusetts Institute of Technology (MIT) CSAIL is working on a solution to “de-bias” data by resampling it to be more balanced. The paper published by PhD students [Alexander Amini] and [Ava Soleimany] describes an algorithm that can learn a specific task – such as facial recognition – as well as the structure of the training data, which allows it to identify and minimize any hidden biases.
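
As a much-simplified sketch of the resampling idea, the snippet below takes hypothetical latent codes for a training set, estimates how densely populated each example's neighbourhood is, and then samples rare examples more often. The latent codes, histogram density estimate, and smoothing constant are all stand-ins for the learned machinery in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for learned latent codes of the training images.
# In the real method these capture attributes such as pose, lighting, or skin tone.
latents = rng.normal(size=(10_000, 2))

# Estimate how densely populated each example's neighbourhood is with a
# simple histogram over the latent space.
hist, x_edges, y_edges = np.histogram2d(latents[:, 0], latents[:, 1], bins=20)
ix = np.clip(np.digitize(latents[:, 0], x_edges) - 1, 0, 19)
iy = np.clip(np.digitize(latents[:, 1], y_edges) - 1, 0, 19)
density = hist[ix, iy]

# Sample under-represented regions more often: weight ~ 1 / density.
# The smoothing constant alpha trades off debiasing strength against stability.
alpha = 0.1
weights = 1.0 / (density + alpha)
weights /= weights.sum()

batch_indices = rng.choice(len(latents), size=256, replace=False, p=weights)
print("Mean density of a uniform batch:  ", density.mean())
print("Mean density of a debiased batch: ", density[batch_indices].mean())
```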

Testing showed that the algorithm decreased “categorical bias” by over 60% compared with other widely cited facial detection models, all while maintaining the same detection precision. The result held up when the team evaluated the system on a facial-image dataset from the Algorithmic Justice League, a spin-off group of the MIT Media Lab.

The team says that their algorithm would be particularly relevant for large datasets that can’t easily be vetted by a human, and can potentially rectify algorithms used in security, law enforcement, and other domains beyond facial detection.

This Biofuel Cell Harvests Energy From Your Sweat

Researchers from l’Université Grenoble Alpes and the University of San Diego recently developed and patented a flexible device that’s able to produce electrical energy from human sweat. The lactate/O2 biofuel cell has been demonstrated to light an LED, leading to further development in the area of harvesting energy through wearables.

The research was published in Advanced Functional Materials on September 25, 2019. The potential use cases for this type of biofuel cell within the wearables space include medical and athletic monitoring. By using biofuels present in human fluids, the devices can rely on an efficient energy source that integrates easily with the human body.

The scientists developed a flexible conductive material made up of carbon nanotubes, cross-linked polymers, and enzymes bound to one another and deposited by screen printing. This type of composite is known as a buckypaper, and it uses the carbon nanotubes as the electrode material.

Lactate oxidase serves as the anode catalyst and bilirubin oxidase (an enzyme that acts on the yellowish compound found in blood) as the cathode catalyst. Given lactate's theoretically high power density, this technology has the potential to produce considerably more than its current output of 450 µW.

The cell follows the deformation of the skin and produces electrical energy through the reduction of oxygen and the oxidation of the lactate in perspiration. A boost converter raises the voltage enough to continuously power an LED; the cells currently deliver an open-circuit voltage of 0.74 V. Because the power measurements had to be taken with the cell pressed against human skin, the device has been shown to keep producing power even while being stretched and compressed.
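
Some rough arithmetic shows why the boost converter is needed and what 450 µW can actually drive. The converter efficiency and LED forward voltage below are assumptions, not figures from the paper:

```python
# Back-of-the-envelope LED budget for the biofuel cell.
CELL_POWER_W = 450e-6        # reported harvested power, 450 microwatts
OPEN_CIRCUIT_V = 0.74        # reported open-circuit voltage
CONVERTER_EFFICIENCY = 0.80  # assumed boost-converter efficiency
LED_FORWARD_V = 2.0          # assumed red LED forward voltage

# The raw 0.74 V is below the LED forward voltage, hence the boost stage.
usable_power = CELL_POWER_W * CONVERTER_EFFICIENCY
led_current = usable_power / LED_FORWARD_V

print(f"Usable power after conversion: {usable_power * 1e6:.0f} uW")
print(f"Continuous LED current budget: {led_current * 1e6:.0f} uA")
# Roughly 180 uA, enough to make a modern high-efficiency LED visibly glow.
```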

At the moment, the biggest cost for production is the price of the enzymes that transform the compounds in sweat. Beyond cost considerations, the researchers also need to look at ways to increase the voltage in order to power larger portable devices.

With all the exciting research surrounding wearable technology right now, hopefully we’ll be hearing about further developments and applications from this research group soon!

[Thanks to Qes for the tip!]

Acoustic Lenses Show Sound Can Be Focused Like Light

Acoustic lenses are remarkable devices that just got cooler. A recent presentation at SIGGRAPH 2019 showed that with the help of 3D printing, it is possible to build the acoustic equivalent of optical devices. That is to say, configurations that redirect or focus sound waves. One fascinating demonstration worked like an acoustic prism, able to send different notes from a simple melody in different directions. Another was a device that dynamically varied the distance between two lenses in order to focus sound onto a moving target. In both cases, the sounds originate from an ordinary speaker and are shaped by passing through the acoustic lens or lenses, which are entirely passive devices.

Researchers from the University of Sussex used 3D printing for a modular approach to acoustic lens design. Sixteen different pre-printed “bricks” can be assembled in various combinations to get different results. There are limitations, however. The demonstration lenses only work over a narrow bandwidth, meaning that the sound they can shape is limited to about an octave at best. That's enough for a simple melody, but not nearly enough to cover a human's full audible range. Download the PDF for a quick read on the details; it's only two pages, but loaded with enough to whet your appetite for more.
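
As a rough illustration of how a passive lens can focus sound, the sketch below computes the extra delay each point across a lens aperture must add so that contributions from a flat incoming wavefront arrive at an on-axis focal point in phase. The aperture, focal distance, and the idealization of each brick as a pure delay element are assumptions for illustration, not the Sussex team's design values:

```python
import numpy as np

C_SOUND = 343.0    # speed of sound in air, m/s
FOCAL_DIST = 0.30  # desired focal distance, m (made-up)
APERTURE = 0.16    # lens width, m (made-up; think 16 bricks of 1 cm each)

# Positions of the delay elements ("bricks") across the aperture.
x = np.linspace(-APERTURE / 2, APERTURE / 2, 16)

# Path length from each element to the focal point on the axis.
path = np.sqrt(x**2 + FOCAL_DIST**2)

# Elements near the centre have the shortest path to the focus, so they must
# add the most delay for all contributions to arrive there in phase.
delay = (path.max() - path) / C_SOUND

for xi, d in zip(x, delay):
    print(f"x = {xi*100:+5.1f} cm  ->  extra delay {d*1e6:6.1f} us")
```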

Directional sound can be done in other ways as well, such as using an array of ultrasonic emitters to create a coherent beam of sound. Ultrasonic emitters can even levitate lightweight objects. Ain’t sound neat?

Yo Dawg, I Heard You Like FPGAs

When the only tool you have is a hammer, all problems look like nails. And if your goal is to emulate the behavior of an FPGA but your only tools are FPGAs, then your nail-and-hammer issue starts getting a little bit interesting. That's at least what a group of students at Cornell recently found when they studied a Xilinx FPGA used by a researcher in the 1990s by programming its functionality into another FPGA.

Tracking down outdated hardware to recreate a technical paper from decades ago might be possible, but an easier solution was simply to emulate the Xilinx part in a more modern FPGA: an Intel Cyclone V hosted on a Terasic development board. This allows much easier manipulation of I/O as well as reducing the hassle required to reprogram the device. Once all of that was set up, it was much simpler to perform the task originally described in that 90s paper: using evolutionary algorithms to discriminate between different inputs.
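
The evolutionary part is conceptually simple even if the hardware plumbing isn't. Below is a heavily stripped-down genetic-algorithm loop of the sort used in such experiments: candidate configurations are scored on how well they separate two inputs, and the best ones are mutated to form the next generation. The bitstring genome and the fitness function here are software stand-ins, not the students' actual FPGA evaluation:

```python
import random

random.seed(1)

GENOME_BITS = 64      # stand-in for an FPGA configuration bitstream
POPULATION = 30
GENERATIONS = 40
MUTATION_RATE = 0.02

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_BITS)]

def fitness(genome):
    # Software stand-in for "how well does this configuration discriminate
    # between the two inputs?" On real hardware this would mean loading the
    # bitstream and measuring the output for each test signal.
    target = [i % 2 for i in range(GENOME_BITS)]
    return sum(g == t for g, t in zip(genome, target))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [random_genome() for _ in range(POPULATION)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POPULATION // 2]
    # Refill the population with mutated copies of the fittest survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POPULATION - len(survivors))]

best = max(population, key=fitness)
print("Best fitness:", fitness(best), "out of", GENOME_BITS)
```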

While we will leave the investigation into the algorithms and the I/O used in this project as an academic exercise for the reader, this does serve as a good reminder that we don’t always have to have the exact hardware on hand to get the job done. Old computers can be duplicated on less expensive, more modern equipment, and of course video games from days of yore are a snap to play on other hardware now too.

Thanks to [Bruce Land] for the tip!