Teardown Experts Sing Praise Of Stretch-Release Adhesives

Anyone who enjoys opening up consumer electronics knows iFixit to be a valuable resource, full of reference pictures and repair procedures to help revive devices and keep them out of the electronic waste stream. Champions of repairability, they’ve watched in dismay as the quest for thinner and lighter devices has also made those devices harder to fix. But they’ve found a bright spot worth cheering in this bleak landscape: the increasing use of stretch-release adhesives.

[Image: Nokia BL-50J battery. An elegant battery, for a more civilized age.]

Once upon a time, batteries were designed to be user-replaceable. But that required access mechanisms, electrical connectors, and protective shells around fragile battery cells. Eliminating such overhead allowed slimmer devices, but didn’t change the fact that the battery would still likely need replacement someday. We thus entered a dark age in which battery pouches were glued into devices, and replacement meant fighting clingy blobs and cleaning up sticky residue, something the teardown experts at iFixit are all too familiar with.

This is why they are happy to see pull tabs whenever they peer inside something, for those tabs signify the device was blessed with stretch-release adhesives. All we have to do is apply a firm and steady pull on those tabs to release their hold, leaving no residue behind. We get an overview of how this magic works, with the caveat that the implementation details are well into the land of patents and trade secrets.

But we do get tips on how best to remove them, and how to apply new strips, both important to iFixit’s mission. There’s also a detour into their impact on the interior design of a device: the tabs have to be accessible, and they need room to stretch. This isn’t just a concern for design engineers; the same considerations apply to stretch-release adhesives sold to consumers. The advertising push by 3M’s Command brand and its competitors has already begun, reminding people that stretch-release adhesive strips are ideal for temporary holiday decorations. They would also work well to hold batteries in our own projects, even if we aren’t their advertised targets.

Our end-of-year gift-giving traditions will mean a new wave of gadgets. And while not all of them will be easily repairable, we’re happy that this tiny bit of repairability exists. Every bit helps to stem the flow of electronic waste.

“Hey, You Left The Peanut Out Of My Peanut M&Ms!”

Candy-sorting robots are in plentiful supply on these pages, and with good reason — they’re a great test of the complete suite of hacker tools, from electronics to machine vision to mechatronics. So we see lots of sorters for Skittles, jelly beans, and occasionally even Reese’s Pieces, but it always seems that the M&M sorters are the most popular.

This M&M sorter has a twist, though — it finds the elusive and coveted peanutless candies lurking in most bags of Peanut M&Ms. To be honest, we’d never run into this manufacturing defect before; as devotees of the plain old original M&Ms, perhaps our sample size has just been too small. Regardless, [Harrison McIntyre] knows they’re there and wants them all to himself, hence his impressive build.

To detect the squib confections, he built a tiny 3D scanner from a line laser, a turntable, and a Raspberry Pi camera. After a scan of the surface yields the candy’s volume, a servo sweeps it onto a scale, allowing its density to be calculated. Peanut-free candies are somewhat denser than their leguminous counterparts, letting another servo route each candy to the proper exit chute. The video below shows you all the details, and more than you ever wanted to know about the population statistics of Peanut M&Ms.
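
The decision itself boils down to a bit of arithmetic. Here’s a minimal Python sketch of the density test, with a hypothetical threshold standing in for whatever cutoff [Harrison McIntyre] actually dialed in; the wedge-integration step for turning a turntable laser scan into a volume is a standard approach, not necessarily his exact method.

```python
import numpy as np

# A sketch of the sorter's decision logic, not [Harrison McIntyre]'s code.
# The density threshold and scan dimensions below are hypothetical.

def volume_from_scan(radii: np.ndarray, d_theta: float, d_height: float) -> float:
    """Estimate volume from a turntable line-laser scan.

    radii[i, j] is the surface radius at turntable angle i and height j;
    each sample covers a thin wedge of volume (r^2 / 2) * dtheta * dz.
    """
    return float(np.sum(0.5 * radii**2) * d_theta * d_height)

PEANUTLESS_DENSITY = 1.2  # g/cm^3, assumed cutoff; tune against real candies

def pick_chute(mass_g: float, volume_cm3: float) -> str:
    """Route a candy based on density: peanut-free candies are denser."""
    return "peanutless" if mass_g / volume_cm3 >= PEANUTLESS_DENSITY else "peanut"

print(pick_chute(mass_g=2.5, volume_cm3=2.0))  # 1.25 g/cm^3 -> "peanutless"
```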

We think this is pretty slick, and a nice departure from the sorters that primarily rely on color to sort candies. Of course, we still love those too — take your pick of quick and easy, compact and sleek, or a model of industrial design.



Hackaday Links: November 8, 2020

Saturday, November 7, 2020 – NOT PASADENA. Remoticon, the virtual version of the annual Hackaday Superconference forced upon us by 2020, the year that keeps on giving, is in full swing. As I write this, Kipp Bradford is giving one of the two keynote addresses, and last night was the Bring a Hack virtual session, which I was unable to attend but which seems to have been very popular, judging by the response to it. In about an hour, I’m going to participate in the SMD Soldering Challenge on the Hackaday writing crew team, and later on, I’ll be emceeing a couple of workshops. And I’ll be doing all of it while sitting in my workshop/office here in North Idaho.

Would I rather be in Pasadena? Yeah, probably — last year, Supercon was a great experience, and it would have been fun to get together again and see everyone. But here we are, and I think we’ve all got to tip our hacker hats to the Remoticon organizers, for figuring out how to translate the in-person conference experience to the virtual space as well as they have.

The impact of going to a museum and standing in the presence of a piece of art or a historic artifact is hard to overstate. I once went to an exhibit of artifacts from Pompeii, and was absolutely floored to gaze upon a 2,000-year-old loaf of bread that was preserved by the volcanic eruption of 79 AD. But not everyone can get to see such treasures, which is why Scan the World was started. The project aims to collect 3D scans of all kinds of art and artifacts so that people can potentially print them for study. Their collection is huge and seems to concentrate on classic sculptures — Michelangelo’s David is there, as are the Venus de Milo, the Pieta, and Rodin’s Thinker. But there are examples from architecture, anatomy, and history. The collection seems worth browsing through and worth contributing to if you’re so inclined.

For all the turmoil COVID-19 has caused, it has opened up some interesting educational opportunities that probably wouldn’t ever have been available in the Before Time. One such opportunity is an undergraduate-level course in radio communications being offered on the SDRPlay YouTube channel. The content was created in partnership with the Sapienza University of Rome. It’s not entirely clear who this course is open to, but it was originally designed for third-year undergrads, and the SDRPlay Educators Program is open to anyone in academia, so we’d imagine you’d need some kind of academic affiliation to qualify. The best bet might be to check out the intro video on the SDRPlay Educator channel and plan to attend the webinar scheduled for November 19 at 1300 UTC. You could also plan to drop into the Learning SDR and DSP Hack Chat on Wednesday at noon Pacific — that’s open to everyone, just like every Hack Chat.

And finally, as if bald men didn’t suffer enough disrespect already, now artificial intelligence is having a go at them. At a recent soccer match in Scotland, an AI-powered automatic camera system consistently interpreted an official’s glabrous pate as the soccer ball. The system is supposed to keep the camera trained on the action by recognizing the ball as it’s being moved around the field. Sadly, the linesman in this game drew the attention of the system quite frequently, causing viewers to miss some of the real action. Not that what officials do during sporting events isn’t important, of course, but it’s generally not what viewers want to see. The company, an outfit called Pixellot, knows about the problem and is working on a solution. Here’s hoping the same problem doesn’t crop up in American football.

Fail Of The Week: Roboracer Meets Wall

There comes a moment when our project sees the light of day, publicly presented to people who are curious to see the results of all our hard work, only for it to fail in a spectacularly embarrassing way. This is the dreaded “Demo Curse” and it recently befell the SIT Acronis Autonomous team. Their Roborace car gained social media infamy as it was seen launching off the starting line and immediately into a wall. A team member explained what happened.

A few explanations had started circulating, but only in the vague terms of a “steering lock,” without much technical detail until this explanation emerged. Steering lock? You mean like The Club? Well, sort of. While there was no steel bar immobilizing the steering wheel, a software equivalent did take hold within the car’s systems. During initialization, while a human driver was at the controls, one of the modules sent out NaN (Not a Number) instead of a valid numeric value. This had never come up in testing, and it wreaked havoc at the worst possible time.

A module whose job was to ensure numbers stayed within expected bounds said “not a number, not my problem!” and passed it along. That NaN value propagated through to the vehicle’s CAN data bus, which didn’t define how NaN should be handled, so it was arbitrarily translated into a very large number, causing further problems. This cascade of events left the steering control system locked to full right before the algorithm was given permission to start driving. The algorithm desperately tried to steer the car back on course, without effect, for the few short seconds until it met the wall.
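
Curious how a bounds check can wave NaN straight through? Here’s a minimal Python sketch of the failure mode (our illustration, not the team’s actual code): every comparison against NaN evaluates false, so a clamp written as a pair of if-statements never fires, and the cure is to reject non-finite values explicitly.

```python
import math

STEER_MIN, STEER_MAX = -1.0, 1.0  # normalized steering range (assumed)

def clamp_naive(value: float) -> float:
    # How NaN slips through: any comparison with NaN is False, so
    # neither branch fires and NaN is returned as if it were in range.
    if value < STEER_MIN:
        return STEER_MIN
    if value > STEER_MAX:
        return STEER_MAX
    return value

def clamp_safe(value: float, fallback: float = 0.0) -> float:
    # The fix: reject non-finite values before range-checking.
    if not math.isfinite(value):
        return fallback
    return max(STEER_MIN, min(STEER_MAX, value))

nan = float("nan")
print(clamp_naive(nan))  # nan -- sails through the bounds check
print(clamp_safe(nan))   # 0.0 -- held at a safe neutral value
```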

While it was embarrassing, and not the kind of publicity the Schaffhausen Institute of Technology or their sponsor Acronis was hoping for, the team dug through the logs to understand what happened and taught their car to handle NaN properly. With a backup car, round two went very well and the team took second place, so they had a happy ending after all. Congratulations! We’re very happy this problem was found and fixed on a closed track and not on public roads.

[via Engadget]

Today At Remoticon: Sunday Live Events

Hackaday Remoticon is a worldwide virtual conference happening now!

Public Livestreams (all times are PST, UTC-8):

Hackaday YouTube and Facebook Live:

  • 12:00pm SDR Workshop
  • 2:15pm Hacker’s Guide to Hardware Debugging Workshop

Hackaday Twitch:

  • 11:00am SMD Challenge: Remoticon Attendees (Heat 2)
  • 1:00pm SMD Challenge: LayerOne Badge Team
  • 3:00pm SMD Challenge: Remoticon Attendees (Heat 3)

Hackaday Twitch Two:

  • 12:00pm Design Methodology Workshop
  • 12:45pm 0 to ASIC Workshop
  • 2:45pm IC Reverse Engineering
  • 3:30pm How to Create Guides People Will Actually Use

Under The Sea GPS Uses Sound

If you’ve ever tried to use GPS indoors, you know that the signals aren’t easy to acquire in any sort of structure. Now imagine trying to get a satellite fix underwater. Researchers at MIT have a new technique, underwater backscatter localization or UBL, that promises to provide a low-power localization system tailored for the subsea environment.

Like other existing solutions, UBL uses sound waves, but it avoids some of the common problems with sonic beacons in that environment. A typical system has either a fixed beacon constrained by the availability of power, or battery-operated beacons that require replacement or recharging. Since the beacon acts as a transponder — it receives a signal and then replies — it requires either constant power or time to wake up from the external stimulus, and that wake-up time typically varies with the environment. A variable startup time interferes with computing the round-trip time of the signal, which is crucial for estimating position.
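
For a sense of scale, here’s the back-of-the-envelope ranging math in Python (our illustrative numbers, not MIT’s): the one-way distance is the speed of sound times the round-trip time minus the beacon’s turnaround delay, divided by two. Even a millisecond of unaccounted wake-up time shifts the estimate by the better part of a meter.

```python
SPEED_OF_SOUND_WATER = 1500.0  # m/s, nominal; varies with temperature and salinity

def range_from_rtt(rtt_s: float, turnaround_s: float) -> float:
    """Estimate one-way distance to an acoustic transponder.

    Subtract the beacon's reply delay from the measured round trip,
    then halve it and scale by the speed of sound in water.
    """
    return SPEED_OF_SOUND_WATER * (rtt_s - turnaround_s) / 2.0

# If the beacon's wake-up delay drifts 1 ms from the value we subtract,
# the computed range moves by 0.75 m -- the problem UBL sidesteps.
print(range_from_rtt(rtt_s=0.1405, turnaround_s=0.0005))  # 105.0 m
print(range_from_rtt(rtt_s=0.1405, turnaround_s=0.0015))  # 104.25 m
```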


Robots Can Finally Answer, Are You Talking To Me?

Voice assistants: love them or hate them, they’re becoming more and more commonplace. One problem for voice assistants is the situation of multiple devices listening in the same place. When a command is given, which device should answer? Researchers at CMU’s Future Interfaces Group ([Karan Ahuja], [Andy Kong], [Mayank Goel], and [Chris Harrison]) have an answer: smart assistants should try to infer whether the user is facing the device they want to talk to. They call it direction-of-voice, or DoV.

Currently, smart assistants use a simple race to see which device heard the command first. The reasoning is that the device you are closest to will likely hear it first. However, in situations with echoes, or when you’re equidistant from multiple devices, the outcome can seem arbitrary to the user.

The implementation of DoV uses an Extra-Trees Classifier from the Python sklearn toolkit. Several other machine learning algorithms were considered, but ultimately efficiency won out and Extra-Trees was selected. Another interesting facet of the research was determining what “facing” really means. The team had human “listeners” stand in for smart assistants. A “talker” would speak the key phrase while the “listener” determined whether the talker was facing them or not. Based on that definition of facing, the system can determine whether someone is facing the device with 90% accuracy, rising to 93% with per-room calibration.
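
If you’d like to tinker with the same classifier family, the sklearn side is only a few lines. This sketch trains an ExtraTreesClassifier on made-up stand-in features; the team’s real feature extraction and dataset live in their GitHub repo, so treat the shapes and labels here as placeholders.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: rows are feature vectors extracted from a wake-word
# recording (the real features come from the speech spectrum); labels
# are 1 for "facing the device", 0 for "facing away". All hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))    # hypothetical 32-dim feature vectors
y = rng.integers(0, 2, size=1000)  # hypothetical facing/not-facing labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# In a real pipeline, a positive prediction would let this device claim
# the wake word; a negative one would defer to other assistants nearby.
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```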

Their algorithm, as well as the data they collected, has been open-sourced on GitHub. Perhaps when you’re building your own voice assistant, you can incorporate DoV to improve wake-word accuracy.
