There’s a newish development in the world of keyboards: the optical switch. It’s been around for a couple of years in desktop keyboards, and recently became available on a laptop keyboard as well. These are not replacements for your standard $7 keyboard with rubber membrane switches, intended for puttering around on your Raspberry Pi. They’re aimed squarely at the gamer market.
The question, though, is whether these are the keyboard equivalent of Monster Cables for audiophiles: overpriced status symbols? Betteridge would be proud; the short answer is that no, there is a legitimate advantage, and for certain types of use, it makes a lot of sense.
If you count yourself among the several hundred of our closest friends who have joined us at Supplyframe HQ for the 2019 Hackaday Superconference, then by now you’ll have your hands on one of this year’s incredible FPGA badges. It should come as no surprise that an enormous amount of time and effort went into developing and manufacturing this exceptional piece of hardware; the slick gadget in your hands today took nearly an entire year to develop, and work continued on it until quite literally the last possible moment.
Badge designer Jeroen Domburg (aka Sprite_TM), Hackaday staff, and a team of dedicated volunteers were still putting the final touches on these ambitious devices less than 24 hours before they were distributed to the first wave of Superconference attendees. Naturally, that’s not exactly how things were supposed to go. But when you’ve got a group of people that want to push the envelope and build something truly incredible, convincing them to actually stop working can be a challenge in itself.
In fact, development of the badge is still ongoing. Fixes and improvements are being made to the software even as you read this, and if you haven’t already, you should upgrade your badge to make sure you’ve got the latest and greatest from our international team of wizards. We all know that conference badges have an unfortunate habit of languishing on the shelf and collecting dust, but the 2019 Superconference badge was built to challenge you for longer than just one weekend. Consider yourself warned: for every Supercon badge that gets tossed in a drawer come Monday, Sprite_TM will shed a single tear.
After the break, come along as we turn back the clock and take a look at the last minute dash to get 500+ badges programmed and ready to go before the doors opened for the 2019 Hackaday Superconference.
If you came here from an internet search because your battery just blew up and you don’t know how to put out the fire, then use a regular fire extinguisher if the device is plugged into an outlet, or a fire extinguisher or water if it is not plugged in. Get out if there is a lot of smoke. For everyone else, keep reading.
I recently developed a product that used three 18650 cells. This battery pack had its own overvoltage, undervoltage, and overcurrent protection circuitry. On top of that my design incorporated a PTC fuse, and on top of that I had a current-sensing circuit monitored by the microcontroller that controlled the board. When it comes to Li-Ion batteries, you don’t want to mess around. They pack a lot of energy, and if something goes wrong, they can experience thermal runaway, which is another way of saying blowing up and spreading fire and toxic gases all over. So how do you take care of them, and what do you do when things go poorly?
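To make that last layer of defense concrete, here is a minimal sketch of the kind of firmware loop involved. It assumes a MicroPython-capable microcontroller (an ESP32 in this case) with a current-sense amplifier feeding an ADC pin and a MOSFET load switch on a GPIO; the pin numbers, sense gain, and trip threshold are illustrative placeholders rather than the firmware from the actual product.

```python
# Hypothetical wiring: a current-sense amplifier drives ADC pin 34, and a
# MOSFET load switch is controlled from pin 25. If the measured pack current
# exceeds the limit, the firmware opens the load switch long before the PTC
# fuse or the pack's own protection board has to act.
from machine import ADC, Pin
import time

SENSE_V_PER_A = 0.1     # volts per amp out of the sense amplifier (assumed)
CURRENT_LIMIT_A = 6.0   # trip point in amps (assumed)

adc = ADC(Pin(34))                       # current-sense amplifier output
adc.atten(ADC.ATTN_11DB)                 # allow readings up to ~3.3 V on ESP32
load_enable = Pin(25, Pin.OUT, value=1)  # 1 = load connected

def pack_current():
    # 12-bit reading scaled to volts, then to amps via the sense gain
    volts = adc.read() * 3.3 / 4095
    return volts / SENSE_V_PER_A

while True:
    amps = pack_current()
    if amps > CURRENT_LIMIT_A:
        load_enable.value(0)  # disconnect the load and latch off
        print("Overcurrent: {:.1f} A, load disconnected".format(amps))
        break
    time.sleep_ms(10)
```

A real design would filter the readings, log the event, and require a deliberate reset before reconnecting the load, but the principle stands: the microcontroller is the first layer to trip, and the PTC fuse and the pack’s own protection circuitry only have to act if the firmware fails.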
For all the successes of modern weather forecasting, where hurricanes, blizzards, and even notoriously unpredictable tornadoes are routinely detected before they strike, reliably predicting one aspect of nature’s fury has eluded us: earthquakes. The development of plate tectonic theory in the middle of the 20th century and the construction of a worldwide network of seismic sensors gave geologists the tools to understand how earthquakes happened, and even provided the tantalizing possibility of an accurate predictor of a coming quake. Such efforts met with only limited success, though, and produced enough false alarms that most attempts to predict earthquakes were abandoned by the late 1990s or so.
It may turn out that scientists were looking in the wrong place for a reliable predictor of coming earthquakes. Some geologists and geophysicists have become convinced that instead of watching the twitches and spasms of the earth, the state of the skies above might be more fruitful. And they’re using the propagation of radio waves from both space and the ground to prove their point that the ionosphere does some interesting things before and after an earthquake strikes.
We’ve talked about PXE booting the Raspberry Pi 3B+, and then looked at the Raspberry Pi 4 as a desktop replacement. But there’s more! The Pi 4 sports a very useful new feature: a flashable bootloader. Just recently, a beta version of that bootloader was released that supports PXE, that is, booting over the network, which has become a must-have for those of us who have had consistently bad experiences with root filesystems on SD cards.
What are the downsides, I hear you ask? You might see slower speeds going across the network compared to a high-quality SD card, particularly with the Pi 4 and its improved SD card slot. PXE does require an Ethernet cable; WiFi is not enough, so you have that restriction to contend with. And finally, this isn’t a portable option: you are tethered to that network cable while running, and tethered to your network to boot at all.
On the other hand, if you’re doing a permanent or semi-permanent install of a Pi, PXE is absolutely a winner. There are few things worse than dragging a ladder out to access a Pi that’s cooked its SD card, not to mention the possibility that you firewalled yourself out of it. Need to start over with a fresh Raspbian image? Easy, just rebuild it on the PXE server and reboot the Pi remotely.
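For a flavor of what that rebuild step can look like, here’s a rough Python sketch for the server side. It assumes the common setup where each Pi pulls its boot files from a TFTP directory named after its serial number and mounts its root filesystem over NFS; all of the paths, and the notion of a pristine “master” image directory, are placeholder assumptions rather than a prescribed layout.

```python
#!/usr/bin/env python3
"""Refresh a netbooted Pi's filesystems from a known-good master copy.

Hypothetical server layout, adjust to match your own:
  /srv/masters/raspbian/boot/   pristine boot files (kernel, start4.elf, ...)
  /srv/masters/raspbian/root/   pristine root filesystem
  /srv/tftp/<serial>/           per-Pi TFTP directory the bootloader reads
  /srv/nfs/<serial>/            per-Pi NFS root the kernel mounts
"""
import subprocess
import sys

MASTER_BOOT = "/srv/masters/raspbian/boot/"
MASTER_ROOT = "/srv/masters/raspbian/root/"
TFTP_BASE = "/srv/tftp"
NFS_BASE = "/srv/nfs"

def refresh(serial: str) -> None:
    boot_dir = f"{TFTP_BASE}/{serial}/"
    root_dir = f"{NFS_BASE}/{serial}/"
    # --delete mirrors the master exactly, wiping whatever the old install
    # left behind; functionally this is the same as re-flashing an SD card.
    for src, dst in [(MASTER_BOOT, boot_dir), (MASTER_ROOT, root_dir)]:
        subprocess.run(["rsync", "-a", "--delete", src, dst], check=True)
    print(f"{serial}: fresh image staged, reboot the Pi to pick it up")

if __name__ == "__main__":
    refresh(sys.argv[1])  # e.g. ./refresh_pi.py 1a2b3c4d
```

The --delete flag is what makes this the moral equivalent of writing a fresh card: the Pi comes back up from a pristine copy on its next reboot, which you can trigger remotely.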
Tesla have always aimed to position themselves as part automaker, part tech company. Their unique offering is that their vehicles feature cutting-edge technology not available from their market rivals. The company has long touted its “full self-driving” technology, and regular software updates have progressively unlocked new functionality in their cars over the years.
The latest “V10” update brought a new feature to the fore, known as Smart Summon. Allowing the driver to summon their car remotely from across a carpark, this feature promises to be of great help on rainy days and when carrying heavy loads. Of course, the gulf between promises and reality can sometimes be a yawning chasm.
How Does It Work?
Holding the “Come To Me” button summons the vehicle to the user’s location. Releasing the button stops the car immediately.
Smart Summon is activated through the Tesla smartphone app. Users are instructed to check the vehicle’s surroundings and ensure they have line of sight to the vehicle when using the feature. This is combined with a 200 foot (61 m) hard limit, meaning that Smart Summon won’t deliver your car from the back end of a crowded mall carpark. Instead, it’s more suited to smaller parking areas with clear sightlines.
Once activated, the car will back out of its parking space, and begin to crawl towards the user. As the user holds down the button, the car moves, and will stop instantly when let go. Using its suite of sensors to detect pedestrians and other obstacles, the vehicle is touted to be able to navigate the average parking environment and pick up its owners with ease.
No Plan Survives First Contact With The Enemy
With updates rolled out over the air, Tesla owners jumped at the chance to try out this new functionality. Almost immediately, a cavalcade of videos of the technology in action began appearing online. Many of these show that things rarely work as well in the field as they do in the lab.
As any driver knows, body language and communication are key to navigating a busy parking area. Whether it’s a polite nod, an instructional wave, or simply direct eye contact, humans have become well-rehearsed at self-managing the flow of traffic in parking areas. When several cars are trying to navigate the area at once, a confused human can negotiate with others to take turns to exit the jam. Unfortunately, a driverless car lacks all of these abilities.
This situation proved all too much for the Tesla, and the owner was forced to intervene.
A great example is this drone video of a Model 3 owner attempting a Smart Summon in a small linear carpark. Conditions are close to ideal – a sunny day, with little traffic, and a handful of well-behaved pedestrians. In the first attempt, the hesitation of the vehicle is readily apparent. After backing out of the space, the car simply remains motionless, as two human drivers are also attempting to navigate the area. After backing up further, the Model 3 again begins to inch forward, with seemingly little ability to choose between driving on the left or the right. Spotting the increasing frustration of the other road users, the owner is forced to walk to the car and take over. In a second attempt, the car is again flummoxed by an approaching car, and simply grinds to a halt, unable to continue. Communication between autonomous vehicles and humans is an active topic of research, and likely one that will need to be solved sooner rather than later to truly advance this technology.
Pulling straight out of a wide garage onto an empty driveway is a corner case they haven’t quite mastered yet.
An expensive repair bill, courtesy of Smart Summon.
Other drivers have had worse experiences. One owner had their Tesla drive straight into the wall of their garage, an embarrassing mistake even most learner drivers wouldn’t make. Another had a scary near miss, when the Tesla seemingly failed to understand its lack of right of way. The human operator can be seen to recognise an SUV approaching at speed from the vehicle’s left, but the Tesla fails to yield, only stopping at the very last minute. It’s likely that the Smart Summon software doesn’t have the ability to understand right of way in parking environments, where signage is minimal and it’s largely left up to human intuition to figure out.
This is one reason why the line of sight requirement is key – had the user let go of the button when first noticing the approaching vehicle, the incident would have been avoided entirely. Much like other self-driving technologies, it’s not always clear how much responsibility still lies with the human in the loop, which can have dire results. And more to the point, how much responsibility should the user have, when he or she can’t know what the car is going to decide to do?
More amusingly, an Arizona man was caught chasing down a Tesla Model 3 in Phoenix, after seeing the vehicle rolling through the carpark without a driver behind the wheel. While the embarrassing incident ended without injury, it goes to show that until familiarity with this technology spreads, there’s scope for misunderstandings to cause problems.
It’s Not All Bad, Though
Some users have had more luck with the feature. While it’s primarily intended to summon the car to the user’s GPS location, it can also be used to direct the car to a point within a 200 foot radius. In this video, a Tesla can be seen successfully navigating around a sparsely populated carpark, albeit with some trepidation. The vehicle appears to have difficulty initially understanding the structure of the area, first attempting a direct route before properly making its way around the curbed grass area. The progress is more akin to a basic line-following robot than an advanced robotic vehicle. However, it does successfully avoid running down its owner, who attempts walking in front of the moving vehicle to test its collision avoidance abilities. If you value your limbs, probably don’t try this at home.
No, not like that!
Wanting to explore a variety of straightforward and oddball situations, [DirtyTesla] decided to give the tech a rundown himself. The first run in a quiet carpark is successful, albeit with the car weaving, reversing unnecessarily, and ignoring a stop sign. Later runs are more confident, with the car clearly choosing the correct lane to drive in, and stopping to check for cross traffic. Testing on a gravel driveway was also positive, with the car properly recognising the grass boundaries and driving around them. That is, until the fourth attempt, when the car gently runs off the road and comes to a stop in the weeds. Further tests show that dark conditions and heavy rain aren’t a showstopper for the system, but it’s still definitely imperfect in operation.
Reality Check
Fundamentally, there are plenty of examples out there suggesting this technology isn’t ready for prime time. Unlike other driver-in-the-loop aids, like parallel parking assists, users appear to place far more confidence in Smart Summon’s ability to detect obstacles on its own, leading to many near misses and collisions.
If all it takes is a user holding a button down to drive a 4,000 pound vehicle into a wall, perhaps this isn’t the way to go. It draws parallels to users falling asleep on the highway when using Tesla’s Autopilot: drivers are putting ultimate trust in a system that is, at best, only capable when used in combination with a human’s careful oversight. But even then, how is the user supposed to know what the car sees? Tesla’s tools seem to have a way of lulling users into a false sense of confidence, only for that confidence to be betrayed almost instantly, to the delight of YouTube viewers around the world.
While it’s impossible to make anything truly foolproof, it would appear that Tesla has a ways to go to get Smart Summon up to scratch. Combine this with the fact that in 90% of videos, it would have been far quicker for an able-bodied driver to simply walk to the vehicle and drive themselves, and it definitely appears to be more of a gimmick than a useful feature. If it can be improved, and limitations such as the line-of-sight and distance requirements can be overcome, it will quickly become a must-have item on luxury vehicles. That may yet be some years away, however. Watch this space, as it’s unlikely other automakers will rest for long!
Journey with me to a time in a faraway internet; a time before we had monetized social media. A time when the page you shared with your friends was your page and not a page on someone’s network. Way back when Visual Basic was what Python is now and JavaScript was a hack mostly used for cool effects. A hero arose. Macromedia Flash opened the gates to the interactive web, and for a chunk of time it consumed more than a decent portion of humanity’s attention and artistic output.
Computer art was growing, but was it public? How many grandmothers would see a demo?
New ground was broken, and anyone who wanted to become an animator or a web designer could manage it in a few tutorials. Only a few years before Flash took off, people had started talking about computers as a source for art in mostly theoretical terms. There were demoscenes, university studies, and professional communities, of course, but were they truly public? Suddenly Flash made computer art an everyday thing. How could computers not be used for art? In schools and offices all over the world, people of varying technical skill would get links to games, animation, and clever sites sent by their friends and colleagues.
For 23 years Flash has had this incredible creative legacy. Yet it’s not perfect by any means. It’s a constant headache for our friendly neighborhood super-conglomerates. Apple hates how it drains the battery on their mobile devices, and that it’s a little village outside of their walled garden. Microsoft sees it as another endless source of security holes. They all see it as a competitor product eating away at their proprietary code bases.