The Amazon Dash Button: A Retrospective

The Internet of Things will revolutionize everything! Manufacturing? Dog walking? Coffee bean refilling? Car driving? Food eating? Put a sensor in it! The marketing makes it pretty clear that there’s no part of our lives that isn’t enhanced by The Internet of Things. Why? Because with a simple sensor and a symphony of corporate hand-waving about machine learning, an iPhone-style revolution is just around the corner! Enter: Amazon Dash, circa 2014.

The first product in the Dash family was actually a barcode-scanning wand, given away free to Amazon Fresh customers and designed to hang in the kitchen or magnet to the fridge. When a Fresh customer ran out of milk, they could scan the carton as it was being thrown away to add it to their cart for reorder. I suspect these devices were fairly expensive, and somewhat too complex to be used as frequently as Amazon wanted (thus the extremely limited launch). Amazon’s goal here was to let potential customers order with an absolute minimum of friction so they could buy as much as possible. Remember the “Buy now with 1-Click” button?

That original Dash Wand was eventually upgraded to include a push-button-activated Alexa (barcode scanner and fridge magnet intact) and is generally available. But Amazon had pinned its hopes on a new beau. In mid-2015, Amazon introduced the Dash Replenishment Service along with a product to be its exemplar: the Dash Button. The Dash Button was to be the 1-Click button of the physical world. The barcode-scanning Wands required the user to remember the Wand was nearby, find a barcode, scan it, then remember to go to their cart and order the product. Too many steps, too many places to get off Mr. Bezos’ Wild Ride of Commerce. The Dash Buttons were simple! Press the button, get the labeled product shipped to a preconfigured address. Each button was purchased (for $5, with a $5 coupon) with a particular brand affinity, then configured online to purchase a specific product when pressed. In the marketing materials, happy families put them on washing machines to buy Tide, or in a kitchen cabinet to buy paper towels. Pretty clever; it really was a “Buy now with 1-Click” button for the physical world.

There were two versions of the Dash Button. Both had the same user interface and worked in fundamentally the same way. They had a single button (the software could recognize a few click patterns), a single RGB LED (’natch), and a microphone (no, it didn’t listen to you, but we’ll come back to this). They also had a WiFi radio. Version two (silently released in 2016) added Bluetooth and completely changed the electrical innards, though to no user-facing effect.
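
Amazon never published the Dash firmware, but distinguishing click patterns on a single button is a classic embedded exercise. Here’s a minimal, hypothetical Arduino-style sketch of the idea; the pin number and timing windows are our own guesses, not Amazon’s values.

```cpp
// Hypothetical sketch: telling single, double, and long presses apart
// on one button. All timing constants are illustrative assumptions.
const int BUTTON_PIN = 2;                 // button to ground, internal pull-up
const unsigned long DEBOUNCE_MS   = 30;
const unsigned long LONG_PRESS_MS = 1000;
const unsigned long DOUBLE_GAP_MS = 400;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  Serial.begin(115200);
}

void loop() {
  static unsigned long lastRelease = 0;
  static bool pendingSingle = false;

  if (digitalRead(BUTTON_PIN) == LOW) {        // pressed
    delay(DEBOUNCE_MS);                        // crude debounce
    unsigned long pressedAt = millis();
    while (digitalRead(BUTTON_PIN) == LOW) {}  // wait for release
    unsigned long held = millis() - pressedAt;

    if (held >= LONG_PRESS_MS) {
      Serial.println("long press (e.g. enter setup mode)");
      pendingSingle = false;
    } else if (pendingSingle && millis() - lastRelease < DOUBLE_GAP_MS) {
      Serial.println("double click");
      pendingSingle = false;
    } else {
      pendingSingle = true;                    // might still become a double
      lastRelease = millis();
    }
  }

  // A single click is only confirmed once the double-click window expires.
  if (pendingSingle && millis() - lastRelease >= DOUBLE_GAP_MS) {
    Serial.println("single click (e.g. place order)");
    pendingSingle = false;
  }
}
```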

In February 2019, Amazon stopped selling the Dash Buttons. Continue reading “The Amazon Dash Button: A Retrospective”

3D Printed Rover Enjoys Long Walks On The Beach

More than a few hackers have put in the considerable time and effort required to build a rover inspired by NASA’s robotic Martian explorers, but unfortunately even the most well-funded home tinkerer can’t afford the ticket to send their creation off-world. So most of these builds don’t journey through anything more exciting than a backyard sandbox. Not that we can blame their creators; we think a homebrew rover will look just as cool in your living room as it would traipsing through a rock quarry.

But the DIY rover status quo clearly wasn’t sufficient for [Jakob Krantz], who decided the best way to test his new Curiosity-inspired rover was to let it frolic around on the beach for an afternoon. Judging by the video after the break, his beefy 3D-printed bot proved to be more than up to the task, powering through wildly uneven terrain with little difficulty.

Beyond a few “real” bearings here and there, all of the key components of the rover are 3D printed. [Jakob] did borrow a couple of existing designs, like a printable bearing he found on Thingiverse, but for the most part he’s been toiling away at the design in Fusion 360, using images of the real Curiosity rover as his guide.

Right now, he’s controlling the rover with a standard six-channel RC receiver. Four channels are mapped to the steering servos, and a fifth to the single electronic speed control that commands the six wheel motors. But he’s recently added an Arduino to the rover, which will eventually be in charge of interpreting the RC commands. This will allow more complex maneuvers with fewer channels, such as the ability to rotate in place.
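
[Jakob]’s Arduino code isn’t public, so what follows is only a toy sketch of the idea: read the receiver’s PWM pulses in software, then translate one stick plus a mode switch into coordinated corner-servo angles, including a rotate-in-place pose. Pin numbers, channel mapping, and servo angles are all illustrative assumptions.

```cpp
// Toy sketch of RC interpretation on an Arduino. Pins, channels, and
// geometry are made up for illustration, not [Jakob]'s actual setup.
#include <Servo.h>

const int CH_STEER = 7;   // steering channel from the RC receiver
const int CH_MODE  = 8;   // aux channel: normal vs. rotate-in-place

Servo frontLeft, frontRight, rearLeft, rearRight;

void setup() {
  frontLeft.attach(3);
  frontRight.attach(5);
  rearLeft.attach(6);
  rearRight.attach(9);
  pinMode(CH_STEER, INPUT);
  pinMode(CH_MODE, INPUT);
}

void loop() {
  // RC channels are ~1000-2000 us pulses repeating every ~20 ms.
  long steer = pulseIn(CH_STEER, HIGH, 25000);
  long mode  = pulseIn(CH_MODE, HIGH, 25000);

  if (mode > 1500) {
    // Rotate in place: angle the corner wheels tangent to a circle
    // around the rover's center before spinning the drive motors.
    frontLeft.write(135);  frontRight.write(45);
    rearLeft.write(45);    rearRight.write(135);
  } else {
    // Normal driving: one stick position becomes coordinated corner
    // angles, with the rear corners mirroring the front.
    int angle = map(constrain(steer, 1000, 2000), 1000, 2000, 60, 120);
    frontLeft.write(angle);      frontRight.write(angle);
    rearLeft.write(180 - angle); rearRight.write(180 - angle);
  }
}
```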

We’re proud to count our very own [Roger Cheng] among the rover wrangling hackers of the world. An entire community has sprung up around his six-wheeled Sawppy, and the knowledge gained during its design and construction could be applicable to any number of other projects.

Continue reading “3D Printed Rover Enjoys Long Walks On The Beach”

Candy-Colored Synth Sounds Sweet

Let’s face it, synthesizers are awesome. But commercial synths are pretty expensive. Even the little toy ones like the KORG Volca and the microKORG will run you a few hundred bucks. For the most part, they’re worth the price because they’re packed with features. This is great for experienced synth wizards, but can be intimidating to those who just want to make some bleeps and bloops.

[Kenneth] caught the mini-synth bug, but can’t afford to catch ’em all. After a visit to the Moog factory, he was inspired to engineer his own box based on the Moog Sirin. The result is KELPIE, an extremely portable and capable synth with 12 voices, 16 knobs, and 4 LED buttons. KELPIE is plug and play: power and a MIDI device, like a keyboard, are the only requirements. It has both 1/8″ and 1/4″ jacks in addition to a standard MIDI DIN connection. [Kenneth] rolled his own board based on the Teensy 3.2 and the Teensy audio shield.
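
KELPIE’s firmware is [Kenneth]’s own, but the Teensy 3.2 plus audio shield combination makes the basic MIDI-to-audio path easy to sketch with the stock Teensy Audio library. This hypothetical single-voice example (KELPIE itself manages twelve voices) just maps USB MIDI notes onto a sawtooth oscillator:

```cpp
// Minimal single-voice sketch for a Teensy 3.2 + audio shield.
// Requires USB Type set to "MIDI" in the Arduino Tools menu.
#include <Audio.h>
#include <Wire.h>
#include <SPI.h>

AudioSynthWaveform   osc;
AudioOutputI2S       out;
AudioConnection      c1(osc, 0, out, 0);   // left channel
AudioConnection      c2(osc, 0, out, 1);   // right channel
AudioControlSGTL5000 codec;                // the chip on the audio shield

void noteOn(byte channel, byte note, byte velocity) {
  osc.frequency(440.0f * powf(2.0f, (note - 69) / 12.0f)); // MIDI note to Hz
  osc.amplitude(velocity / 127.0f);
}

void noteOff(byte channel, byte note, byte velocity) {
  osc.amplitude(0.0f);
}

void setup() {
  AudioMemory(12);
  codec.enable();
  codec.volume(0.5f);
  osc.begin(WAVEFORM_SAWTOOTH);
  usbMIDI.setHandleNoteOn(noteOn);
  usbMIDI.setHandleNoteOff(noteOff);
}

void loop() {
  usbMIDI.read();   // dispatches incoming MIDI to the handlers above
}
```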

Part of the reason [Kenneth] built this synthesizer was to practice designing a product from the ground up. Throughout the process, he has tried to keep both the production line and the DIYer in mind: the prototype is a two-part resin print, but the design could also be injection molded.

We love that KELPIE takes its visual design cues from the translucent candy-colored Game Boys of the late ’90s. (We had the purple one, but always lusted after the see-through kind.) Can we talk about those knobs? Those are resin-printed, too. To color the indicators, [Kenneth] used the crayon technique, which amounts to dripping molten crayon into the groove and scraping it off once hardened. Don’t delay; glide past the break to watch a demo.

Continue reading “Candy-Colored Synth Sounds Sweet”

The Tens Of Millions Of Faces Training Facial Recognition; You’ll Soon Be Able To Search For Yourself

In a stiflingly hot lecture tent at CCCamp on Friday, Adam Harvey took to the stage to discuss the huge datasets being used by groups around the world to train facial recognition software. These faces come from a variety of sources, and soon Adam and his research collaborator Jules LaPlace will release a tool that makes these datasets searchable, allowing you to figure out if your face is among the horde.

Facial recognition is the new hotness, recently bubbling up to the consciousness of the general public. In fact, when boarding a flight from Detroit to Amsterdam earlier this week, I was required to board the plane not by showing a passport or boarding pass, but by pausing in front of a facial recognition camera which subsequently printed out a piece of paper with my name and seat number on it (although it appears I could have opted out, that was not disclosed by Delta Airlines staff at the time). Anecdotally this gives passengers the feeling that facial recognition is robust and mature, but Adam mentions that this is not the case: removed from highly controlled environments, recognition accuracy is closer to an abysmal 2%.

Images are only effective in these datasets when the interocular distance (the distance between the pupils of your eyes) is at least 40 pixels. But over the years this minimum resolution has been moving higher and higher, with the current standard trending toward 300 pixels. The increase is not surprising, as it follows a similar curve to the resolution available from digital cameras. The number of faces available in datasets has also increased along a similar curve over the years.
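
As a worked example of that criterion, here’s a trivial check (our own illustration, with made-up landmark coordinates) of whether a detected face clears the old 40 pixel floor:

```cpp
// Illustration of the dataset inclusion criterion: a face is only
// "useful" if the pupils are far enough apart in pixel terms.
#include <cmath>
#include <cstdio>

struct Point { double x, y; };

double interocularPixels(Point leftPupil, Point rightPupil) {
  return std::hypot(rightPupil.x - leftPupil.x, rightPupil.y - leftPupil.y);
}

int main() {
  Point left{412.0, 310.0}, right{458.0, 312.0};  // made-up landmarks
  double iod = interocularPixels(left, right);    // ~46 px here
  std::printf("interocular distance: %.1f px -> %s\n", iod,
              iod >= 40.0 ? "usable at the 40 px floor" : "too small");
}
```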

Adam’s talk recounted the availability of face and person recognition datasets, and it was a wild ride. Of note are datasets by the names of Brainwash Cafe, Duke MTMC (Multi-Target, Multi-Camera), Microsoft Celeb, Oxford Town Centre, and Unconstrained College Students. Faces in these databases were harvested without consent, and that has led to four of them being removed, but of course they’re still available, as what is once on the Internet may never die.

The Microsoft Celeb set is particularly egregious, as it used the Bing search engine to harvest faces (oh my!) and has names associated with them. Lest you think you’re not a celeb and therefore safe: in this case, celeb means anyone who has an internet presence. That’s about 10 million faces. Adam used two examples of past CCCamp talk videos that were used as a source for adding the speakers’ faces to the dataset. It’s possible that this is in violation of the GDPR, so we can expect to see legal action in the not-too-distant future.

Your face might be in a dataset, so what? In their research, Adam and Jules tracked geographic locations and other data to establish who has downloaded and is likely using these sets to train facial recognition AI. It’s no surprise that the National University of Defense Technology in China is among the downloaders. In the case of US intelligence organizations, it’s much easier to know they’re using some of the sets because they funded some of the research through organizations like IARPA. These sets are being used to train up military-grade facial recognition.

What are we to do about this? Unfortunately what’s done is done, but we do have options moving forward. Be careful of how you license the images you upload; substantial data was harvested through loopholes in licenses on platforms like Flickr, or through the terms of use agreed to on platforms like Facebook. Adam’s advice is to stop populating the internet with faces, which is why I’ve covered his with the Jolly Wrencher above. Alternatively, you can limit image resolution so the interocular distance falls below the forty-pixel threshold. He also advocates for changes to Creative Commons licensing that would let you choose to grant or withhold the use of your images in training sets like these.
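
For the resolution-limiting trick, a quick back-of-envelope calculation shows how far a photo would need to be scaled down. The 40 pixel figure comes from the talk; everything else here is an assumed example:

```cpp
// Back-of-envelope: how small must an image be before the interocular
// distance falls below the 40 px harvesting floor?
#include <cstdio>

int main() {
  double iodPixels = 120.0;   // measured pupil distance in the original
  double width     = 3000.0;  // original image width in pixels
  double threshold = 40.0;    // floor cited in the talk

  double scale = (threshold - 1.0) / iodPixels;  // land just under the floor
  std::printf("resize to <= %.0f px wide (scale factor %.2f)\n",
              width * scale, scale);
}
```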

Adam’s talk, MegaPixels: Face Recognition Training Datasets, will be available to view online by the time this article is published.

When Toothbrushes, Typewriters, And Credit Card Machines Form A Band

Many everyday objects make some noise as a side effect of their day job, so some of us hack them into musical instruments that can play a song or two. It’s fun, but it’s been done. YouTube channel [Device Orchestra] goes far beyond a device buzzing out a tune: they are full-fledged singing (and dancing!) performers. Watch their cover of Take on Me embedded after the break, and if you like it, head over to the channel for more.

The buzz of a stepper motor, easily commanded to varying speeds, is the easiest entry point into this world of mechanical music. Steppers used to be quite common in computer equipment such as floppy drives, hard drives, and flatbed scanners. As those pieces of equipment became outdated and were sold off for cheap, it became feasible to assemble a large number of them, with the Floppotron being something of a high-water mark.
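
If you want to hear the effect for yourself, the classic trick is to pulse a STEP/DIR stepper driver’s step input at an audible frequency. This hypothetical Arduino sketch (pin choices and driver behavior are assumptions) plays a scale using nothing but tone():

```cpp
// Playing notes on a stepper: each pulse on a driver's STEP pin advances
// the motor one microstep, so stepping at an audible rate "plays" that
// frequency. Pin numbers and the STEP/DIR driver are assumptions.
const int STEP_PIN   = 4;
const int DIR_PIN    = 5;
const int ENABLE_PIN = 6;

// A little ascending C major scale, in Hz.
const int melody[] = {262, 294, 330, 349, 392, 440, 494, 523};

void setup() {
  pinMode(DIR_PIN, OUTPUT);
  pinMode(ENABLE_PIN, OUTPUT);
  digitalWrite(DIR_PIN, LOW);
  digitalWrite(ENABLE_PIN, LOW);   // most drivers: LOW = enabled
}

void loop() {
  for (int note : melody) {
    tone(STEP_PIN, note);          // square wave = step pulses at pitch
    delay(300);
  }
  noTone(STEP_PIN);
  delay(1000);
}
```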

After one of our more recent mentions in this area, when the mechanical sound of a floppy drive was used in the score of a motion picture, there were definite signs of fatigue in the feedback. “We’re ready for something new,” so here we are, without any computer peripherals! [Device Orchestra] features percussion by typewriters, vocals by toothbrushes, and choreography by credit card machines with the help of kitchen utensils. Coordinating them all is an impressive pile of wires acting as stage manager.

We love to see creativity with affordable everyday objects like this. But we also enjoy seeing the same concept done with equipment at the opposite end of the price spectrum, such as a soothing performance of Bach using the coils of an MRI machine.

[Thanks @Bornach1 for the tip]

Continue reading “When Toothbrushes, Typewriters, And Credit Card Machines Form A Band”

Looking Around Corners With F-K Migration

The concept behind non-line-of-sight (NLOS) imaging seems fairly easy to grasp: a laser bounces photons off a surface, and those photons illuminate objects that are within sight of that surface, but not of the imaging equipment. The photons that are then reflected or refracted by the hidden object make their way back to the laser’s location, where they are captured and processed to form an image. Essentially, this allows one to use any surface as a mirror to look around corners.

The main disadvantages of this method have been low resolution and high susceptibility to noise. This led a team at Stanford University to experiment with ways to improve it. As detailed in an interview by Tech Briefs with graduate student [David Lindell], a major improvement came from an ultra-fast shutter solution that blocks out most of the photons returning from the wall being illuminated, preventing the photons reflected by the hidden object from getting drowned out by this noise.

The key to getting the desired imaging quality, including with glossy and otherwise hard-to-image objects, was the f-k migration algorithm. As explained in the video embedded after the break, the team looked at methods used in the field of seismology, where vibrations are used to image what is inside the Earth’s crust, as well as at synthetic aperture radar and similar techniques. The resulting algorithm uses a sequence of a Fourier transform, spectrum resampling and interpolation, and an inverse Fourier transform to process the received data into a usable image.
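
The real reconstruction runs over three-dimensional time-of-flight volumes, but the skeleton of the pipeline (transform, remap the spectrum, transform back) can be shown on a toy 1D signal. The sketch below is purely illustrative: it uses a naive DFT and an arbitrary linear remapping in place of the physically derived Stolt mapping that f-k migration actually uses.

```cpp
// Toy illustration of the transform -> resample -> inverse-transform
// structure. Not the real 3D reconstruction; the remapping is arbitrary.
#include <complex>
#include <vector>
#include <cmath>
#include <cstdio>

using cd = std::complex<double>;
const double PI = std::acos(-1.0);

// Naive DFT; sign = -1 for forward, +1 for inverse (scale afterward).
std::vector<cd> dft(const std::vector<cd>& x, double sign) {
  size_t n = x.size();
  std::vector<cd> y(n);
  for (size_t k = 0; k < n; k++)
    for (size_t t = 0; t < n; t++)
      y[k] += x[t] * std::polar(1.0, sign * 2.0 * PI * k * t / n);
  return y;
}

int main() {
  // 1) A toy "measurement": a single pulse in time.
  std::vector<cd> signal(64);
  signal[10] = 1.0;

  // 2) Forward transform into the frequency domain.
  std::vector<cd> spectrum = dft(signal, -1.0);

  // 3) Resample the spectrum along a remapped axis (placeholder map:
  //    stretch by 1.25) with linear interpolation between bins.
  size_t n = spectrum.size();
  std::vector<cd> remapped(n);
  for (size_t k = 0; k < n; k++) {
    double src = k / 1.25;               // where this bin "comes from"
    size_t i = (size_t)src;
    double frac = src - i;
    if (i + 1 < n)
      remapped[k] = spectrum[i] * (1.0 - frac) + spectrum[i + 1] * frac;
  }

  // 4) Inverse transform back to get the "migrated" signal.
  std::vector<cd> out = dft(remapped, 1.0);
  for (auto& v : out) v /= (double)n;

  size_t best = 0;
  for (size_t k = 1; k < n; k++)
    if (std::abs(out[k]) > std::abs(out[best])) best = k;
  std::printf("pulse moved from t=10 to t=%zu after remapping\n", best);
}
```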

This is not a new topic; we covered a simple implementation of this all the way back in 2011, as well as a project by UK researchers in 2015. This new research shows obvious improvements, making this kind of technology ever more viable for practical applications.

Continue reading “Looking Around Corners With F-K Migration”

Rocket Lab Sets Their Sights On Rapid Reusability

Not so very long ago, orbital rockets simply didn’t get reused. After their propellants were expended on the journey to orbit, they petered out and fell back down into the ocean, where they were obliterated on impact. Rockets were disposable because, as far as anyone could tell, building another one was cheaper and easier than trying to reuse them. The Space Shuttle had proved that reuse of a spacecraft and its booster was possible, but the promised benefits of reduced cost and higher launch cadence never materialized. If anything, the Space Shuttle was often considered proof that reusability made more sense on paper than it did in the real world.

Rocket Lab CEO Peter Beck with Electron rocket

But that was before SpaceX started routinely landing and reflying the first stage of their Falcon 9 booster. Nobody outside the company really knows how much money is being saved by reuse, but there’s no denying the turn-around time from landing to reflight is getting progressively shorter. Moreover, by performing up to three flights on the same booster, SpaceX is demonstrating a launch cadence that is simply unmatched in the industry.

So it should come as no surprise to find that other launch providers are feeling the pressure to develop their own reusability programs. The latest to announce their intent to recover and eventually refly their vehicle is Rocket Lab, despite CEO Peter Beck’s admission that he was originally against the idea. He’s certainly changed his tune. With data collected over the last several flights the company now believes they have a reusability plan that’s compatible with the unique limitations of their diminutive Electron launch vehicle.

According to Beck, the goal isn’t necessarily to save money. During his presentation at the Small Satellite Conference in Utah, he explained that what they’re really going after is an increase in flight frequency. Right now they can build and fly an Electron every month, and while they eventually hope to produce a rocket a week, even a single reuse per core would have a huge impact on their annual launch capability:

If we can get these systems up on orbit quickly and reliably and frequently, we can innovate a lot more and create a lot more opportunities. So launch frequency is really the main driver for why Electron is going reusable. In time, hopefully we can obviously reduce prices as well. But the fundamental reason we’re doing this is launch frequency. Even if I can get the stage back once, I’ve effectively doubled my production ratio.

But, there’s a catch. Electron is too small to support the addition of landing legs and doesn’t have the excess propellants to use its engines during descent. Put simply, the tiny rocket is incapable of landing itself. So Rocket Lab believes the only way to recover the Electron is by snatching it out of the air before it gets to the ground.

Continue reading “Rocket Lab Sets Their Sights On Rapid Reusability”