I just had my car in for an inspection and an oil change. The garage I take my car to is generally okay; they’re more honest than a stealership, but they don’t cross all their t’s and dot all their lowercase j’s. A few days after I picked up my car, lo and behold, I noticed the garage didn’t do a complete oil change. The oil life indicator wasn’t reset, which means every time I turn my car on, I’ll have to press a button to clear an ominous glowing warning on my dash.
For my car, resetting the oil life indicator is a simple fix – I just need to push the button on the dash until the oil life indicator starts to blink, release, then hold it again for ten seconds. I’m at least partially competent when it comes to tech and embedded systems, but even for me, resetting the oil life indicator in my car is a bit obtuse. For the majority of the population, I can easily see this being a reason to take a car back to the shop; the mechanic either didn’t know how to do it, or didn’t know how to use Google.
The two most technically complex things I own are my car and my computer, and there is much more information available on how to fix or modify any part of my computer. If I had a desire to modify my car so I could read the value of the tire pressure monitors, instead of only being notified when one of them is too low, there’s nowhere for me to turn.
2015 was the year of the car hack, from ECUs programmed to cheat California emissions standards, to Google and Tesla’s self-driving cars, to infotainment systems exploited to drive a reporter off the road. The lessons learned from these hacks are a hodge-podge of forum threads, conference talks, and articles scattered around the web. While you’ll never find a single volume filled with how to exploit the computers in every make and model of automobile, there is space for a reference guide on how to go about this sort of car hacking.
I was given the opportunity to review The Car Hacker’s Handbook by Craig Smith (259p, No Starch Press). Is it a guide on how to plug a dongle into my car and clear the oil life monitor the hard way? No, but you wouldn’t want that anyway. Instead, it’s a much more informative tome on penetration testing and reverse engineering, using cars as the backdrop, not the focus.
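The bread and butter of this kind of work is the CAN bus, and getting a first look at raw traffic takes surprisingly little code. Here’s a minimal sniffer sketch in Python, assuming a Linux machine with SocketCAN and a USB-CAN adapter already brought up as can0 – the interface name and bitrate are assumptions for a typical setup, not a recipe from the book.

```python
# Minimal CAN bus sniffer using python-can (pip install python-can).
# Assumes a SocketCAN interface already brought up, e.g.:
#   sudo ip link set can0 up type can bitrate 500000
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

try:
    while True:
        msg = bus.recv(timeout=1.0)          # returns None on timeout
        if msg is not None:
            payload = " ".join(f"{b:02X}" for b in msg.data)
            print(f"ID 0x{msg.arbitration_id:03X}  [{msg.dlc}]  {payload}")
except KeyboardInterrupt:
    bus.shutdown()
```

Watching which IDs change when you press a button or turn the wheel is the first step of exactly the kind of reverse engineering the book walks through.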
We know what you’re thinking. There’s no way an 8 watt USB-powered soldering iron could be worth the $5 it commands on eBay. That’s what [BigClive] thought too, so he bought one, put the iron through a test and teardown, and changed his mind. Can he convince you too?
Right up front, [BigClive] finds that the iron is probably not suitable for some jobs. Aside from its obvious unsuitability for connections that take a lot of heat, there’s the problem of leakage current when used with a wall-wart USB power supply. The business end of the iron ends up getting enough AC leakage through the capacitors of the power supply to potentially damage MOSFETs and the like. Then again, if you’re close to an AC outlet, wouldn’t you just use a Hakko? It seems the iron is best powered by a USB battery pack, and [BigClive] was able to solder some surprisingly beefy connections that way. The teardown and analysis reveal a circuit that looks like it came right out of a [Forrest M. Mims III] book. We won’t spoil the surprise for you – just watch the video below.
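For a sense of scale, that leakage comes from the Y-capacitor bridging the primary and secondary sides of a typical switch-mode wall wart, and it’s easy to estimate. The capacitor value and mains figures below are typical assumptions, not measurements from the video:

```python
# Rough estimate of the AC leakage current through the Y-capacitor that
# bridges the primary and secondary sides of a switch-mode supply.
import math

V = 230.0     # mains voltage (RMS); 120 V in North America
f = 50.0      # mains frequency in Hz; 60 Hz in North America
C = 2.2e-9    # assumed typical Y-capacitor value: 2.2 nF

I = V * 2 * math.pi * f * C          # current through the capacitive reactance
print(f"Leakage current: {I * 1e6:.0f} microamps")   # ~160 uA at 230 V / 50 Hz
```

A hundred-odd microamps won’t hurt you, but with the tip floating at a good fraction of mains voltage, it’s more than enough to punch through a sensitive gate oxide.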
While not truly cordless like this USB-rechargeable iron, we’d say that for the price, this is a pretty capable iron for certain use cases. Has anyone else tried one of these? Chime in on the comments and let us know what you think.
Even before the announcement and introduction of the Raspberry Pi 3, word of a few very powerful single-board ARM Linux computers was flowing out of China. The hardware was there; powerful 64-bit ARM chips were available, and all that was needed were a few engineers to put these chips on a board, a few marketing people, and a contract manufacturer.
One of the first of these 64-bit boards is the Pine64. Introduced to the world through a Kickstarter that netted $1.7 million USD from 36,000 backers, the Pine64 is already extremely popular. The boards are beginning to land on the doorsteps and in the mailboxes of backers, and the initial impressions are showing up in the official forums and Kickstarter campaign comments.
I pledged $15 USD to the Pine64 Kickstarter, and received a board with 512MB of RAM, 4K HDMI, 10/100 Ethernet, and a 1.2 GHz ARM Cortex-A53 CPU in return. This post is not a full review, as I can’t yet document the entire Pine64 experience. My initial impression? This is bad. This is pretty bad.
The Flir One thermal camera caused quite a stir when it was launched back in 2014. Both the Flir One and its prime competitor Seek Thermal represented the first “cheap” thermal cameras available to the public. At the heart of the Flir One was the Lepton module, which could be purchased directly from Flir Systems, but only in quantity. [Mike Harrison] jumped on board early, cutting into his Flir One and reverse engineering the Lepton module within, including the SPI data required to talk to it. He even managed to create the world’s smallest thermal imager using the TFT screen from an iPod Nano.
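As a taste of what [Mike] was dealing with, the Lepton’s VoSPI protocol is simple enough to poke at from anything with an SPI peripheral. The sketch below reads a single video packet from the original 80 × 60 module using Python’s spidev on a Raspberry Pi; the bus number, chip select, and clock speed are assumptions for typical wiring, not anything from [Mike]’s teardown.

```python
# Read one VoSPI video packet from an original (80 x 60) Lepton over SPI.
# Each packet is 164 bytes: a 2-byte ID, a 2-byte CRC, then 80 16-bit pixels.
# Packets with 0xF in the low nibble of the ID's high byte are discard packets.
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                   # SPI bus 0, chip select 0 (typical Pi wiring)
spi.max_speed_hz = 10_000_000    # the Lepton tolerates SPI clocks up to ~20 MHz
spi.mode = 0b11                  # VoSPI uses SPI mode 3

packet = spi.readbytes(164)
packet_id = (packet[0] << 8) | packet[1]

if (packet_id & 0x0F00) == 0x0F00:
    print("discard packet, no video data")
else:
    line = packet_id & 0x0FFF    # which of the 60 video lines this packet holds
    pixels = [(packet[i] << 8) | packet[i + 1] for i in range(4, 164, 2)]
    print(f"line {line}: first pixel value {pixels[0]}")

spi.close()
```

Collect 60 valid packets in order and you have a frame; the hard part [Mike] solved was staying in sync with the module’s timing.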
A few things have changed since then. You can buy Lepton modules in single quantity at DigiKey now. Flir also introduced a second generation of the Flir One, which contains an updated version of the Lepton. The new version has a resolution of 160 × 120 pixels, double the linear resolution of the original 80 × 60 module. There are two flavors: the iOS version with a Lightning port, and an Android version with a micro USB connector. I’m an Android user myself, so this review focuses on the Android edition.
The module itself is smaller than I expected. It comes with a snap-on case and a lanyard. While you’ll look a bit like a dork wearing the lanyard, it does come in handy to keep the imager from getting lost or dropped. The Flir One has an internal battery, which of course needs to be topped off before it can be used. Mine charged up in about half an hour.
Just over a year ago, Particle (formerly Spark), makers of the very popular Core and Particle Photon WiFi development kits, released the first juicy tidbits for a very interesting piece of hardware. It was the Electron, a cheap, all-in-one cellular development kit with an even more interesting data plan. Particle would offer their own cellular service, allowing their tiny board to send or receive 1 Megabyte for $3.00 a month, without any contracts.
Thousands of people found this an interesting proposition and the Electron crowdfunding campaign took off like a rocket. Now, after a year of development and manufacturing, these tiny cellular boards are finally shipping out to backers and today the Electron officially launches.
Particle was kind enough to provide Hackaday with an Electron kit for review. The short version of this review is that the Electron is a great development platform, but the bigger story is that Particle has pulled off a small revolution in cellular communications and the Internet of Things.
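One detail worth knowing before diving in: anything the Electron publishes with Particle.publish() lands in the same Particle cloud event stream as the Photon’s, which you can watch from a laptop with a plain HTTP request. Here’s a minimal sketch of that, based on Particle’s documented REST endpoint – treat the URL and response fields as assumptions and check the current docs before relying on them.

```python
# Watch the Particle cloud's server-sent event stream for your devices.
# TOKEN is a hypothetical placeholder for an access token from your account.
import json
import requests

TOKEN = "your-access-token-here"
URL = f"https://api.particle.io/v1/devices/events?access_token={TOKEN}"

with requests.get(URL, stream=True) as response:
    for line in response.iter_lines():
        if line.startswith(b"data:"):        # SSE payload lines carry JSON
            event = json.loads(line[len(b"data:"):])
            print(event.get("published_at"), event.get("data"))
```

With a 1 MB monthly budget, pulling data off the device this way, rather than polling it, is also the frugal option.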
Last week, the Nvidia Jetson TX1 was released. This credit card-sized module is a ‘supercomputer’ advertised as having more processing power than the latest Intel Core i7s, while running at under 10 Watts. This is supposedly the device that will power the next generation of things, using technologies unheard of in the embedded world.
A modern-day smartphone could have been built 10 or 15 years ago. There’s no question the processing power was there with laptop CPUs, and the tiny mechanical hard drive in the original iPod was more than spacious enough to hold a library of Napster’d MP3s and all your phone contacts. The battery for this sesquidecadal smartphone, on the other hand, was impossible. The future depends on batteries, and consequently on low power computing. Is the Jetson TX1 the board that will deliver us into the future? It took a hands-on look to find out.
What is the TX1?
The Jetson TX1 is a tiny module – 50x87mm – encased in a heat sink that brings the volume to about the same size as a pack of cigarettes. Underneath a block of aluminum is an Nvidia Tegra X1, a module that combines a 64-bit quad-core ARM Cortex-A57 CPU with a 256-core Maxwell GPU. The module is equipped with 4GB of LPDDR4-3200, 16GB of eMMC Flash, 802.11ac WiFi, and Bluetooth.
This module connects to the outside world through a 400-pin connector (from Samtec, a company quite liberal with product samples, by the way) that provides six CSI camera inputs for a half-dozen Raspberry Pi-style cameras, two DSI outputs, one eDP 1.4, one DP 1.2, and HDMI 2.0 for displays. Storage is provided through either SD cards or SATA. Other ports include three USB 3.0, three USB 2.0, Gigabit Ethernet, a PCIe x1 and a PCIe x4, and a host of GPIOs, UARTs, SPI and I2C buses.
The only way of getting at all these extra ports is, at the moment, the Jetson TX1 carrier board, a board that is effectively a Mini-ITX motherboard. Mount this carrier board in a case, modify a power supply, and figure out how to wire up the front panel buttons, and you’ll have a respectable desktop computer.
This is not a desktop computer, though, and it’s not a replacement for a Raspberry Pi or Beaglebone. This is an engineering tool – a device built to handle the advanced robotics work of the future.
Benchmarks
No tech review would be complete without benchmarks, and since this is an Nvidia board, that means a deep dive into the graphics performance.
The review unit Nvidia sent over came with an incredible amount of documentation, pointing me towards GFXBench 4.0 Manhattan 3.1 (and the T-Rex test) to test the graphics performance.
In terms of graphics performance, the TX1 isn’t that much different from a run-of-the-mill mobile chipset from a few years ago. This is to be expected; it’s unreasonable to expect Nvidia to put a Titan in a 10 Watt module; the Titan itself sucks up about 250 Watts.
What about CPU performance? The ARM Cortex-A57 isn’t seen very much in tiny credit-card sized dev boards, but there are a few actual products out there with it. The TX1 isn’t a powerhouse by any means, but it does trounce the Raspberry Pi 2 Model B in testing by a factor of about three.
Compared to desktop/x86 performance, the best benchmarks again put the Nvidia TX1 in the same territory as a middling desktop from a few years ago. Still, that desktop probably draws about 300 W total, where the TX1 sips a meager 10 W.
This is not the board you want if you’re mining Bitcoins, and it’s not the board you should use if you need a powerful, portable device that can connect to anything. It’s for custom designs. The Nvidia TX1 is a module that’s meant to be integrated into products. It’s not a board for ‘makers’ and it’s not designed to be. It’s a board for engineers that need enough power in a reasonably small package that doesn’t drain batteries.
With a quad-core ARM Cortex-A57 running at almost 2 GHz, 4 GB of RAM, and a reasonably powerful graphics card for the power budget, the Nvidia TX1 is far beyond the usual tiny Linux boards. It’s far beyond the Raspi and the newest BeagleBoard, and it gives the Intel NUC boards a run for their money.
In terms of absolute power, the TX1 is about as powerful as an entry-level laptop from three or four years ago.
The Jetson TX1 is all about performance per Watt. That’s exceptional, new, and exciting; it’s something that simply hasn’t been done before. If you believe the reams of technical documents Nvidia granted me access to, it’s the first step to a world of truly smart embedded devices that have a grasp on computer vision, machine learning, and a bunch of other stuff that hasn’t really found its way into the embedded world yet.
And here lies the problem with the Jetson TX1: because a platform like this hasn’t been available before, the development stack, examples, and community of users simply aren’t there yet. The number of people contributing to the Nvidia embedded systems forum is tiny – our Hackaday articles get more comments than a thread on the Nvidia forums. Like all new platforms, the only thing missing is the community, putting Nvidia in a chicken-and-egg scenario.
This is a platform for engineers. Specifically, engineers who are building autonomous golf carts and cars, quadcopters that follow you around, and robots that could pass a Turing test for at least 30 seconds. It’s an incredible piece of hardware, but not one designed to be a computer that sits next to a TV. The TX1 is an engineering tool that’s meant to go into other devices.
Alternative Applications, Like GameCube
With that said, there are a few very interesting applications I could see the TX1 being used for. My car needs a new head unit, and building one with the TX1 would future-proof it for at least another 200,000 miles. For the very highly skilled amateur engineers, the TX1 module opens a lot of doors. Six camera inputs are something a lot of artists would probably like to experiment with, and two DSI outputs – and a real graphics card – would allow for some very interesting user interfaces.
That said, the TX1 carrier board is not the breakout board for these applications. I’d like to see something like what Sparkfun put together for the Intel Edison – dozens of breakout boards for every imaginable use case. The PCB files for the TX1 carrier board are available through the Nvidia developer’s portal (hope you like OrCAD), and Samtec, the supplier for the 400-pin connector used for the module, is exceedingly easy to work with. It’s not unreasonable for someone with a reflow toaster oven to create a breakout for the TX1 that’s far more convenient than a Mini-ITX motherboard.
Right now, there aren’t many computers with ARM processors and this much horsepower. Impressively powerful ARM boards exist, such as the new BeagleBoard X15 and those that follow the 96Boards specification, but these do not have a modern graphics card baked into the module.
Without someone out there doing the grunt work of making applications with mass appeal work on the TX1, it’s impossible to say how well this board performs at emulating a GameCube, or at any other general-purpose application. The hardware is probably there, but reviewers of the TX1 were given less than a week to StackOverflow their way through a compatible build of the most demanding applications this board wasn’t designed for.
It’s All About Efficiency
Is the TX1 a ‘supercomputer on a module’? Yes and no. While it does perform reasonably well at machine learning tasks compared to the latest Core i7 CPUs, workloads like AlexNet are best suited to GPUs anyway. It’s like asking which flies better: a Cessna 172 or a Bugatti Veyron? The Cessna is by far the better flying machine, but if you’re looking for a ‘supercomputer’, you might want to look at a 747 or a C-5 Galaxy.
On the other hand, there aren’t many boards or modules out there that combine a high-powered ARM CPU with a capable GPU on a 10 Watt power budget. That combination is exactly what’s needed to build the machines, robots, and autonomous devices of the future. But even then, it’s still a niche product.
I can’t wait to see a community pop up around the TX1. With a few phone calls to Samtec, a few hours in KiCad, and a group buy for the module itself ($299 USD in 1000 unit quantities), this could be the start of something very, very interesting.
On February 25, 1991, in the final days of the Gulf War, a Scud missile fired from Iraq hit a US Army barracks in Dhahran, Saudi Arabia. A defense was available – Patriot missiles had intercepted Iraqi Scuds earlier in the war, but not on this day.
The computer controlling the Patriot battery in Dhahran had been running continuously for over 100 hours. Its internal clock counted time in tenths of a second, and converting that count to seconds meant multiplying by 1/10, stored in a 24-bit register. The binary representation of 1/10 is non-terminating, and chopping it down to 24 bits introduced a small error. This error grew with every tick, and after 100 hours, the system clock of the Patriot missile system was 0.34 seconds off.
A Scud missile travels at about 1,600 meters per second; in a third of a second, it covers over half a kilometer, putting it well outside the “range gate” the Patriot was tracking. On February 25, 1991, a Patriot battery failed to intercept the Scud launched at a US Army barracks, killing 28 and wounding roughly 100 others. It was the first time a floating point error had killed a person, and it certainly won’t be the last.
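Those numbers are easy to check. Here’s the drift calculation in a few lines of Python, reproducing the 24-bit truncation of 1/10 described above:

```python
# Reproduce the Patriot's clock drift: 1/10 truncated to 24 fractional bits.
truncated_tenth = int(0.1 * 2**24) / 2**24     # chop, don't round
error_per_tick = 0.1 - truncated_tenth         # ~9.5e-8 seconds lost per tick

ticks = 100 * 3600 * 10     # 100 hours of uptime, counted in tenths of a second
drift = ticks * error_per_tick
print(f"clock drift after 100 hours: {drift:.2f} seconds")   # ~0.34 s

scud_speed = 1600           # meters per second, approximate
print(f"tracking error: {drift * scud_speed:.0f} meters")    # ~550 m
```

A third of a second of accumulated error, times the speed of an inbound Scud, and the target is simply no longer where the radar is looking.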