If you want to blink a ton of WS2812-alike LED pixels over WiFi, the hardware side of things is easy enough: an LED strip, an ESP8266, and a beefy enough power supply to feed them all. But the software side — that’s where it can be a bit of a pain.
Enter Mc Lighting. It makes the software side of things idiot-proof. Flash the firmware onto the ESP8266, and you’ve got your choice of REST, WebSockets, or MQTT to get the data in. This means that it’ll work with HomeKit, Node-RED, or an ESP-hosted web interface that you can pull up from any smartphone.
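If you’d rather script the lights than poke at an app, the MQTT route is easy to drive from anywhere on the network. Here’s a minimal Python sketch using the paho-mqtt library; the broker address, topic name, and payload are placeholders, so check them against your Mc Lighting config and the project docs before trusting them.

```python
# Hypothetical example: poke Mc Lighting over MQTT with the paho-mqtt
# library. The topic and payload are stand-ins; look up the real ones
# in your McLighting configuration and the project's documentation.
import paho.mqtt.publish as publish

BROKER = "192.168.1.50"        # your MQTT broker's address
TOPIC = "home/mclighting/in"   # placeholder: the firmware's input topic

# Placeholder payload: a color command to set all the pixels red.
publish.single(TOPIC, "#ff0000", hostname=BROKER)
```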
The web interface is particularly swell, and has a bunch of animations built in. (Check out the video below.) This means that you can solder some wires, flash an ESP, and your least computer-savvy relatives can be controlling the system in no time. And speaking of videos, Mc Lighting’s author [Tobias] has compiled a playlist of projects that use the library, also just below. The docs on GitHub are great, and the wiki is worth a look too.
So what are you waiting for? Do you or your loved ones need some blink in your life? And while you’re ordering LED strips, get two. You’re going to want to build TWANG! as well.
Kniwwelino is the latest in a line of micro:bit-inspired projects that we’ve seen, but this one comes with a twist: it uses an ESP8266 and WiFi at the core instead of the nRF51 ARM/BTLE chip. That means that students can connect via laptop, cellphone, or anything else that can get onto a network.
That’s not the only tradeoff, though. In order to get the price down, the Kniwwelino drops the accelerometer/magnetometer of the micro:bit for a programmable RGB LED. With fewer pins to break out, the Kniwwelino is able to ditch the love-it-or-hate-it card-edge connector of the micro:bit as well. In fact, with all these changes, it’s hard to call this a micro:bit clone at all — it’s more like a super-blinky ESP8266 development kit.
So what have they got left in common? The iconic 5×5 LED matrix in the center, and a Blockly visual programming dialect dedicated to the device. Based on the ESP8266, the Kniwwelino naturally also has an Arduino dialect that students can “graduate” to when they’re tired of moving around colored blobs, and of course you could flash the chip with anything else that runs on an ESP8266.
We don’t have one in our hands, but we like the idea. An RGB LED is a lot of fun on Day One, and the fact that the Kniwwelino fits so neatly into existing bodies of code makes the transition from novice to intermediate programmer a lot easier. These things are personal preference, but WiFi beats Bluetooth LE in our book, for sheer ubiquity and interoperability. Finally, the Kniwwelino comes in at about half the manufacturing cost of a micro:bit, which makes it viable in schools without large manufacturer subsidies. They’re estimating $5 per unit. (Retail is higher.) On the other hand, the Kniwwelino is going to use more juice than its ARM-based competitor, and doesn’t have an accelerometer.
Kniwwelino is apparently derived from the Luxembourgish word “kniwweln,” meaning to craft something. The German Calliope Mini is named after Zeus’ daughter, the programmer’s muse. We’re stoked to see so many cute dev boards getting into the hands of students, no matter what you call them.
Self-driving cars have been in the news a lot in the past two weeks. Uber’s self-driving taxi hit and killed a pedestrian on March 18, and just a few days later a Tesla running in “autopilot” mode slammed into a road barrier at full speed, killing the driver. In both cases, there was a human driver who was supposed to be watching over the shoulder of the machine, but in the Uber case the driver appears to have been distracted and in the Tesla case, the driver had hands off the steering wheel for six seconds prior to the crash. How safe are self-driving cars?
Trick question! Neither of these cars was “self-driving” in at least one sense: both had a person behind the wheel who was ultimately responsible for piloting the vehicle. The Uber and Tesla driving systems aren’t even comparable. The Uber taxi does routing and planning, knows the speed limit, and should be able to see red traffic lights and stop at them (more on this below!). The Tesla “Autopilot” system is really just the combination of adaptive cruise control and lane-holding subsystems, which isn’t even enough to get it classified as autonomous in the state of California. Indeed, it’s the failure of the people behind the wheel, and the failure to properly train those people, that makes the pilot-and-self-driving-car combination more dangerous than a human driver alone would be.
A self-driving Uber Volvo XC90, San Francisco.
You could still imagine wanting to dig into the numbers for self-driving cars’ safety records, even though they’re heterogeneous and have people playing the Mechanical Turk. If you did, you’d be sorely disappointed. None of the manufacturers publish any of their data publicly unless they have to. Indeed, our glimpses into data on autonomous vehicles from these companies come from two sources: internal documents that get leaked to the press and carefully selected statistics from the firms’ PR departments. The state of California, which requires the most rigorous documentation of autonomous vehicles anywhere, is another source, but because Tesla’s car isn’t autonomous, and because Uber refused to admit that its car is autonomous to the California DMV, we have no extra insight into these two vehicle platforms.
Nonetheless, Tesla’s Autopilot has three fatalities now, and all have one thing in common: all three drivers trusted the lane-holding feature enough not to take control of the wheel in the last few seconds of their lives. With Uber, there’s very little autonomous vehicle performance history, but there are leaked documents and a pattern that make Uber look like a risk-taking scofflaw with sub-par technology and a vested interest in making it look better than it is. That these vehicles are being let loose on public roads, without extra oversight and with other traffic participants as safety guinea pigs, is giving the self-driving car industry, and the ideal itself, a black eye.
While Tesla’s and Uber’s car technologies are very dissimilar, the companies have something in common. They are both “disruptive” companies with mavericks at the helm who see their fates hinging on getting to a widespread deployment of self-driving technology. But what differentiates Uber and Tesla from Google and GM most is, ironically, their use of essentially untrained test pilots in their vehicles: Tesla’s in the form of consumers, and Uber’s in the form of taxi drivers with very little specific autonomous-vehicle training. What caused the Tesla and Uber accidents may have a lot more to do with human factors than with self-driving technology per se.
You can see we’ve got a lot of ground to cover. Read on!
Scotty Allen has a YouTube channel called Strange Parts; maybe you’ve seen his super-popular video about building his own iPhone “from scratch”. It’s a great story, and it’s also a pretext for a slightly deeper dive into the electronics hardware manufacturing, assembly, and repair capital of the world: Shenzhen, China. After his talk at the 2017 Superconference, we got a chance to sit down with Scotty and ask about cellphones and his other travels. Check it out:
The Story of the Phone
Scotty was sitting around with friends, drinking in one of Shenzhen’s night markets, and talking about how bizarre some things there seem to outsiders. There are people sitting on street corners, shucking cellphones like you’d shuck oysters, and harvesting the good parts inside. Electronic parts, new and used, don’t come from somewhere far away, and there’s no mail-ordering: a ten-minute walk over to the markets will get you everything you need. The desire to explain some small part of this alternate reality to outsiders was what drove Scotty to dig into China’s cellphone ecosystem.
Every once in a while, you get your hands on a cool piece of hardware, and of course, it’s your first instinct to open it up and see how it works, right? Maybe see if it can be coaxed into doing just a little bit more than it says on the box? And so it was last Wednesday at the Embedded World trade fair, when I stumbled on a cool touch display floating, apparently, in mid-air.
The display itself was a sort of focused Pepper’s Ghost illusion, reflected off of an expensive mirror made by Aska3D. I don’t know much more — I didn’t get to bring home one of the fancy glass plates — but it looked pretty good. But this display was interactive: you could touch the floating 2D projection as if it were actually there, and the software would respond. What was doing the touch response in mid-air? I’m a sucker for sensors, so I started asking questions and left with a small box of prototype Neonode zForce AIR sensor sticks to take apart.
The zForce sensors are essentially an array of IR lasers and photodiodes with some lenses that limit their field of view. The IR light hits your finger and bounces back to the photodiodes on the bar. Because the photodiodes have a limited angle over which they respond, they can be used to triangulate the distance of the finger above the display. Scanning quickly among the IR lasers and noting which photodiodes receive a reflection can locate a few fingertips in a 2D space, which explained the interactive part of the floating display. With one of these sensors, you can add a 2D touch surface to anything. It’s like an invisible laser harp that can also sense distance.
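To make that geometry concrete, here’s a toy model of the triangulation in Python. The element pitch, the detector acceptance angle, and the assumption of a collimated emitter beam are all ours for illustration, not Neonode’s spec, but the trig is the same idea.

```python
import math

# Toy model of zForce-style triangulation (our reading of the idea, not
# Neonode's actual algorithm). Emitters and detectors sit along the bar
# at y = 0. Each emitter fires a beam straight out along +y; each
# detector's lens only accepts light arriving at a fixed angle THETA
# from the bar's normal.

PITCH = 5.0               # mm between elements on the bar (assumed)
THETA = math.radians(30)  # detector acceptance angle (assumed)

def locate(emitter_idx, detector_idx):
    """Return (x, y) of the reflecting fingertip, in mm.

    The emitter that fired pins down x directly; the detector that saw
    the bounce pins down the return angle, and simple trig gives the
    height y above the bar.
    """
    x_e = emitter_idx * PITCH
    x_d = detector_idx * PITCH
    y = abs(x_e - x_d) / math.tan(THETA)
    return (x_e, y)

# Emitter 10 fired and detector 7 saw the reflection:
print(locate(10, 7))   # -> (50.0, 25.98...) with the numbers above
```

Scanning through all the emitter/detector pairs and collecting the hits is what turns this single-point trig into a full 2D touch map.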
The intended purpose is fingertip detection, and that’s what the firmware is good at, but it must also be the case that it could detect the shape of arbitrary (concave) objects within its range, and that was going to be my hack. I got 90% of the way there in one night, thanks to affordable tools and free software that every hardware hacker should have in their toolbox. So read on for the unfortunate destruction of nice hardware, a tour through some useful command-line hardware-hacking tools, and gratuitous creation of animations from sniffed SPI-like data pulled off of some test points.
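As a taste of the animation step at the end: once you’ve got decoded bytes off the bus, it’s mostly a matter of reshaping them into per-scan frames and plotting. Here’s a quick Python sketch, where the frame length and the meaning of the bytes are guesses standing in for whatever your own capture looks like.

```python
# Rough cut of the "gratuitous animations" step: treat the sniffed
# SPI-like dump as fixed-length scans and render each one as a frame.
# FRAME_LEN and the byte interpretation are guesses for illustration.
import numpy as np
import matplotlib.pyplot as plt

FRAME_LEN = 16   # bytes per scan: read this off your own capture

# Stand-in for the real logic-analyzer dump: random "amplitude" bytes.
raw = np.random.randint(0, 256, size=FRAME_LEN * 50, dtype=np.uint8)

frames = raw.reshape(-1, FRAME_LEN)   # one row per sensor scan

fig, ax = plt.subplots()
img = ax.imshow(frames[:1], aspect="auto", vmin=0, vmax=255)
for i, frame in enumerate(frames):
    img.set_data(frame[np.newaxis, :])
    fig.savefig(f"frame_{i:03d}.png")   # stitch into a GIF afterwards
plt.close(fig)
```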
Hardware and software are certainly different beasts. Software is really just information, and the storing, modification, duplication, and transmission of information is essentially free. Hardware is expensive, or so we think, because it’s made out of physical stuff which is costly to ship or copy. So when we talk about open-source software (OSS) or open-source hardware (OSHW), we’re talking about different things — OSS is itself the end product, while OSHW is just the information to fabricate the end product, or have it fabricated.
The fabrication step makes OSHW essentially different from OSS, at least for now, but I think there’s something even more fundamentally different between the current state of OSHW and OSS: the pull request and the community. The success or failure of an OSS project depends on the community of people developing it, and for smaller projects that can hinge on the ease of a motivated individual digging in and contributing. This is the main virtue of OSS in my opinion: open-source software is most interesting when people are reading and writing that source.
With pure information, it’s essentially free to copy, modify, and push your changes upstream so that others can benefit. The open hardware world is just finding its feet in this respect, but that’s changing as we speak, and I have great hopes. Costs of fabrication are falling all around, and open, useful tools are being actively developed to facilitate the interchange of design information. I think there are lessons that OSHW can learn from the OSS community’s pull-request culture, and that they will help push the hardware hacker’s art forward.
What would it take to get you to build someone else’s OSHW project, improve on it, and contribute back? That’s a question worth a thoughtful deep dive.
What is it about mechanical clocks? Maybe it’s the gears, or the soft tick-tocking that they make? Or maybe it’s the pursuit of implausible mechanical perfection. Combine mechanical clocks with “free” energy harvested from daily temperature and pressure variation, and we’re hooked.
Both the Beverly Clock, built by Arthur Beverly in 1864, and the Atmos series of clocks built between 1929 and 1939, run exclusively on the expansion and contraction of a volume of air (Beverly) or ethyl chloride (Atmos) over the day to wind up the clock via a ratchet. The Beverly Clock was apparently a one-off, and it’s still running today. And with over 500,000 Atmos clocks produced, there must be some out there.
Although we had never heard of it, this basic idea is really old. Clicking through Wikipedia (like you do!) got us to Cox’s Timepiece, which is powered by the movement of 68 kg of mercury under atmospheric pressure. It is currently not running, but housed in the Victoria and Albert Museum in London. Even older is a clock invented by Cornelius Drebbel in 1620, which we couldn’t find any info on. Anyone know anything?
We’ve had energy harvesting on our mind lately, and the article on the Beverly Clock says that it harvests 31 μWh over a day when the temperature swings by 3.3 °C. Put into microcontroller perspective, that’s an average of 0.39 μA at 3.3 V, so you’d have to be pretty careful about your sleep modes, and an LED is out of the question. How amazing is it, then, that this can power a mechanical clock?
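That back-of-the-envelope conversion is worth seeing once: average the day’s harvested energy into power, then divide by the rail voltage. In Python:

```python
# Sanity check on the Beverly Clock energy-harvesting numbers.
energy_uWh = 31.0   # energy harvested over one day
hours = 24.0
voltage = 3.3       # a typical microcontroller supply rail

avg_power_uW = energy_uWh / hours        # ~1.29 uW average
avg_current_uA = avg_power_uW / voltage  # ~0.39 uA average
print(f"{avg_power_uW:.2f} uW -> {avg_current_uA:.2f} uA at {voltage} V")
```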
Thanks [Luke], [hex4def6], and [Wallace Owen] for tipping us off to these in the comment section!