On September 21, “Premium” 0day startup Zerodium put out a call for a chain of exploits, starting with a browser, that enables the phone to be remotely jailbroken and arbitrary applications to be installed with root / administrator permissions. In short, a complete remote takeover of the phone. And they offered $1 million. A little over a month later, it looks like they’ve got their first claim. The hack has yet to be verified, and the payout has yet to be made.
But we have little doubt that the hack, if it’s actually been done, is worth the money. The NSA alone has a $25 million annual budget for buying 0days and usually spends that money on much smaller bits and bobs. This hack, if it works, is huge. And the NSA isn’t the only agency that’s interested in spying on folks with iPhones.
Indeed, by bringing something like this out into the open, Zerodium is creating a bidding war among (presumably) adversarial parties. We’re not sure about the ethics of all this (OK, it’s downright shady) but it’s not currently illegal and by pitting various spy agencies (presumably) against each other, they’re almost sure to get their $1 million back with some cream on top.
We’ve seen a lot of bug bounty programs out there. Tossing “firmname bug bounty” into a search engine of your choice will probably come up with a hit for most firmnames. A notable exception in Silicon Valley? Apple. They let you do their debugging work for free. How long this will last is anyone’s guess, but if this Zerodium deal ends up being for real, it looks like they’re severely underpaying.
And if you’re working on your own iPhone remote exploits, don’t be discouraged. Zerodium still claims to have money for two more $1 million payouts. (And with that your humble author shrugs his shoulders and turns the soldering iron back on.)
My article on Fortran, This is Not Your Father’s FORTRAN, brought back a lot of memories about the language. It also reminded me of other languages from my time at college and shortly thereafter, say pre-1978.
At that time there were the three original languages – FORTRAN, LISP, and COBOL. These originals are still used, although none make the lists of popular languages. I never did any COBOL, but I did some work with Pascal, Forth, and SNOBOL, which are from that era. Of those, SNOBOL quickly faded but the others are still around. SNOBOL was a text processing language that basically lost out to AWK, PERL, and regular expressions. Given how cryptic regular expressions are, it’s amazing another language from that time, APL (A Programming Language), didn’t survive. APL was referred to as a ‘write-only language’ because it was often easier to simply rewrite a piece of code than to debug it.
Another language deserving mention is Algol, if only because Pascal is a descendant, along with many modern languages. Algol was always more popular outside the US, probably because everyone in the US stuck with FORTRAN.
Back then certain books held iconic status, much like [McCracken’s] black FORTRAN IV. In the early 70s, mentioning [Niklaus Wirth] or the yellow book brought to mind Pascal. Similarly, [R. E. Griswold] was SNOBOL and a green book. For some reason, [Griswold’s] two co-authors were never mentioned, unlike the later duo of [Kernighan] & [Ritchie] with their white “The C Programming Language”. Seeing an Italian translation of that book years later on a coworker’s bookshelf gave my mind a minor boggling. Join me for a walk down the memory lane that got our programming world to where it is today.
The Hackaday SuperConference is just eleven short days from now! We’ve put together a conference that is all about hardware creation with a side of science and art. Join hundreds of amazing people along with Hackaday crew for a weekend of talks, workshops, and socializing.
Below you will find the full slate of talks, and last week we revealed the lineup of hands-on workshops. We’ve expanded a few of the more popular workshops. If you previously tried to get a ticket and found they were sold out, please check again. We know many of you are working on impressive projects in your workshops, so bring them and sign up for a lightning talk at registration.
The laptop I’m using, found for 50 bucks in the junk bins of Akihabara, has a CPU that runs at 2.53 GHz. Two billion, five hundred and thirty million times every second, electrons pulse through its circuits. To the human mind this is unimaginable, yet two hundred years ago humanity had no knowledge of electrical oscillations at all.
There were clear natural sources of oscillation of course, the sun perhaps the clearest of all. The Pythagoreans first proposed that the earth’s rotation caused the sun’s daily cycle. Their system was more esoteric and complex than the truth as we now know it, and included a postulated Counter-Earth lying unseen behind a central fire. Regardless of the errors their theory contained, a central link was made between rotation and oscillation.
Rotational motion was exploited in early electrical oscillators, both in alternators similar to those in use today and in more esoteric devices like the interrupter. Developed by Charles Page in 1838, the interrupter used rocking or rotational motion to dip a wire into a mercury bath, periodically breaking a circuit to produce a simple oscillation.
As we progressed toward industrial electrical generators, alternating current became common. But higher and higher frequencies were also required for radio transmitters. The first transmitters used spark gaps. These simple transmitters used a DC supply to charge a capacitor until it reached the breakdown voltage of the gap between two pieces of wire. The electricity then ionized the air molecules in the gap, allowing current to flow and quickly discharging the capacitor. The capacitor then charged again, and the process repeated.
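If you want a feel for how quickly such a circuit cycles, a back-of-the-envelope calculation does the trick. The component values in this sketch are entirely made up for illustration; the math is just the standard RC charging curve solved for the moment the capacitor reaches the gap’s breakdown voltage:

```python
import math

# Back-of-the-envelope model of a spark-gap charge/discharge cycle.
# All component values below are made up for illustration.
V_SUPPLY = 10e3      # DC supply voltage (V)
V_BREAKDOWN = 3e3    # breakdown voltage of the gap (V)
R = 100e3            # charging resistance (ohms)
C = 10e-9            # capacitance (farads)

# Time for the RC circuit to charge from 0 V up to the breakdown voltage:
# v(t) = V_SUPPLY * (1 - exp(-t / (R*C)))  ->  solve for v(t) = V_BREAKDOWN
t_charge = R * C * math.log(V_SUPPLY / (V_SUPPLY - V_BREAKDOWN))

# Treating the discharge through the ionized gap as near-instantaneous,
# the repetition rate is roughly 1 / t_charge.
print(f"charge time: {t_charge * 1e3:.3f} ms")
print(f"spark rate : {1 / t_charge:.0f} cycles per second")
```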
As you can see and hear in the video above, spark gaps produce a noisy, far-from-sinusoidal output. So for more efficient oscillations, engineers again resorted to rotation.
The Alexanderson alternator uses a wheel on which hundreds of slots are cut. This wheel is placed between two coils. One coil, powered by a direct current, produces a magnetic field, inducing a current in the second. The slotted disc, periodically cutting this field, produces an alternating current. Alexanderson alternators were used to generate frequencies of 15 to 30 kHz, mostly for naval applications. Amazingly, one Alexanderson alternator remained in service until 1996, and it is still kept in working condition.
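The arithmetic behind that frequency range is pleasingly simple: the output frequency is just the number of slots passing the coil each second. A quick sketch, with numbers that are purely illustrative rather than the specs of any real machine:

```python
# Output frequency of a slotted-wheel alternator: slots passing the coil per second.
# These figures are illustrative only, not the specs of a real Alexanderson machine.
slots = 300     # slots cut into the wheel
rpm = 3000      # wheel speed, revolutions per minute

freq_hz = slots * rpm / 60
print(f"output frequency: {freq_hz / 1e3:.1f} kHz")   # 15.0 kHz
```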
A similar principle was used in the Hammond organ. You may not know the name, but you’ll recognize the sound of this early electronic instrument:
The Hammond organ used a series of tone wheels and pickups. The pickups consist of a coil and a magnet. To produce a tone, the pickup is pushed toward a rotating wheel which has bumps on its surface. These are similar to the slots of the Alexanderson alternator, and effectively modulate the field between the magnet and the coil to produce a tone.
Amplifying the Oscillation
The operation of a tank circuit (from Wikipedia)
So far we have relied purely on electromechanical techniques, but amplification, which of course requires active devices, is key to all modern oscillators. The simplest of these circuits uses an inductor and a capacitor to form a tank circuit, in which energy sloshes back and forth between the two. Without amplification, losses cause the oscillation to quickly die out. By introducing amplification (such as in the Colpitts oscillator), however, the process can be kept going indefinitely.
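The rate at which that energy sloshes back and forth is the tank’s resonant frequency, f = 1/(2π√(LC)). A quick sketch with arbitrary example values shows the arithmetic:

```python
import math

# Resonant frequency of an LC tank: f = 1 / (2 * pi * sqrt(L * C)).
# Example values are arbitrary, chosen only to show the arithmetic.
L = 100e-6    # 100 microhenry inductor
C = 100e-12   # 100 picofarad capacitor

f = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"resonant frequency: {f / 1e6:.2f} MHz")   # about 1.59 MHz
```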
Oscillator stability is important in many applications, such as radio transmission. Better oscillators allow transmissions to be packed more closely on the spectrum without fear that they might drift and overlap. So the quest for better, more stable oscillators continued. Thus the crystal oscillator was discovered and brought into production, a monumental effort.
Producing Crystal Oscillators
The video below shows a typical process used in the 1940s for the production of crystal oscillators:
Natural quartz crystals mined in Brazil were shipped to the US and processed. I counted a total of 13 non-trivial machining/etching steps and 16 measurement steps (including rigorous quality control). Many of these steps are quite advanced, such as aligning the crystal under an X-ray using a technique similar to X-ray crystallography.
These days our crystal oscillator production process is more advanced. Since the 1970s, crystal oscillators have been fabricated in a photolithographic process. To further stabilize the crystal, additional techniques have been employed, such as temperature compensation (TCXO) or holding the crystal at a controlled temperature with a heating element (OCXO). For most applications this has proved accurate enough… but not accurate enough for the timenuts.
Timenuts Use Atoms
Typical timenut wearing atomic wristwatch
For timenuts there is no “accurate enough”. These hackers strive to create the most accurate timing systems they can, which of course all rely on the most accurate oscillators they can devise.
Many timenuts rely on atomic clocks to make their measurements. Atomic clocks are orders of magnitude more precise than even the best temperature-controlled crystal oscillators.
Bill Hammack has a great video describing the operation of a cesium beam oscillator. The fundamental process is shown in the image below. The crux is that cesium gas exists in two energy states, which can be separated under a magnetic field. The low energy atoms are exposed to a radiation source whose frequency is determined by a crystal oscillator. Only a frequency of exactly 9,192,631,770 Hz will convert the low energy cesium atoms to the high energy form. The high energy atoms are directed toward a detector, whose output is used to discipline the crystal oscillator: if the oscillator drifts and the cesium atoms are no longer directed toward the detector, the oscillator is nudged back toward the correct value. Thus a basic physical constant is used to calibrate the atomic clock.
The basic operating principle of a cesium atomic clock
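If that disciplining idea sounds abstract, here is a deliberately simplified toy model of it. Real cesium standards modulate the interrogation frequency and use far more sophisticated servo loops; this cartoon only shows how an error signal keeps nudging a drifting oscillator back onto the cesium line:

```python
import random

# Toy model of 'disciplining' a drifting oscillator against the cesium line.
# A cartoon of the feedback idea only, not how a real standard is implemented.
CESIUM_HZ = 9_192_631_770        # the hyperfine transition frequency
freq = CESIUM_HZ + 50.0          # crystal-derived frequency, starting 50 Hz off
GAIN = 0.2                       # servo gain (arbitrary)

for step in range(30):
    # A real clock infers the error from the detector current; here we cheat.
    error = freq - CESIUM_HZ
    freq -= GAIN * error                 # nudge the oscillator back toward the line
    freq += random.gauss(0.0, 0.5)       # random crystal drift between corrections

print(f"offset after the servo settles: {freq - CESIUM_HZ:+.1f} Hz")
```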
While cesium standards are the most accurate oscillators known, rubidium oscillators (another “atomic” clock) also provide an accurate and relatively cheap option for many timenuts. The price of these oscillators has been driven down by volume production for the telecoms industry (they are key to GSM and other mobile radio systems), and they are now readily available on eBay.
With accurate timepieces in hand, timenuts have performed a number of interesting experiments. To my mind the most interesting of these is measuring time differences due to relativistic effects, as one timenut did when he took his family and a car full of atomic clocks up Mt. Rainier for the weekend. When he returned, he was able to measure a 20 nanosecond difference between the clocks he took on the trip and those he left at home. This time dilation effect was almost exactly as predicted by the theory of relativity. An impressive result and an amazing family outing!
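You can sanity-check that figure on the back of an envelope. The fractional rate difference between two clocks separated by a height h in Earth’s gravity is roughly gh/c². The elevation gain and trip length below are my own guesses rather than figures from the actual trip, but the result lands in the same tens-of-nanoseconds ballpark as the measurement:

```python
# Rough estimate of gravitational time dilation for a Mt. Rainier-style trip.
# Elevation gain and duration are guesses, used only to show the scale of the effect.
g = 9.81             # m/s^2
c = 299_792_458      # m/s
h = 1500             # assumed elevation gain, metres
t = 2 * 24 * 3600    # assumed time at altitude: about two days, in seconds

rate_difference = g * h / c**2
offset = rate_difference * t
print(f"fractional rate difference: {rate_difference:.2e}")
print(f"accumulated offset: {offset * 1e9:.0f} ns")   # roughly 28 ns with these guesses
```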
It’s amazing to think that when Einstein proposed the theory of special relativity in 1905, even primitive crystal oscillators would not have been available. Spark gaps and Alexanderson alternators would still have been in everyday use. I doubt he could have imagined that one day the fruits of his theory would be confirmed by one man, on a road trip with his kids, as a weekend hobby project. Hackers of the world, rejoice.
Most electronic components available today are just improved versions of what was available a few years ago. Microcontrollers get faster, memories get larger, and sensors get smaller, but we haven’t seen a truly novel component for years or even decades. There is no electronic component more interesting, or with more novel applications, than the memristor, and now they’re available commercially from Knowm, a company on the bleeding edge of putting machine learning directly onto silicon.
The entire point of digital circuits is to store information as a series of ones and zeros. Memristors also store information, but they do so in a completely analog way. Each memristor changes its own resistance in response to the current going through it: ‘writing’ a positive voltage lowers the resistance, and ‘writing’ a negative voltage puts the device back into a high resistance state.
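To get an intuition for that behavior, here is a toy state-variable model. It is a cartoon for illustration only, not a model of Knowm’s actual device physics: a single state variable slides between a low-resistance and a high-resistance limit depending on the polarity of the applied voltage.

```python
# Toy 'pinched hysteresis' memristor: resistance moves between a low and a high
# limit depending on the sign of the applied voltage. Illustration only; this is
# not a model of Knowm's actual device physics.

R_ON, R_OFF = 1_000.0, 100_000.0   # low/high resistance limits (ohms)

class ToyMemristor:
    def __init__(self):
        self.state = 0.0           # 0 = fully high-resistance, 1 = fully low

    def resistance(self):
        return R_OFF + self.state * (R_ON - R_OFF)

    def apply(self, volts, dt=1e-3):
        # Positive voltage drives the state up (lower R), negative drives it
        # back down (higher R). The rate constant is arbitrary.
        self.state = min(1.0, max(0.0, self.state + 50.0 * volts * dt))

m = ToyMemristor()
for v in [+1.0] * 10 + [-1.0] * 5:         # a burst of write pulses each way
    m.apply(v)
    print(f"V = {v:+.1f} V  ->  R = {m.resistance():8.0f} ohms")
```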
Cross section of the metal chalcogenide memristor. Source: Knowm.org
This new memristor is based on research done by [Dr. Kris Campbell] of Boise State University – the same researcher responsible for the silver chalcogenide memristors we saw earlier this year. Like those earlier devices, the Knowm memristor is built using silver chalcogenide molecules. To lower the resistance of the memristor, a positive voltage ‘pulls’ silver ions into the metal chalcogenide layer. The silver ions stay in this chalcogenide layer until they are ‘pushed’ back with the application of a negative voltage. This gives the memristor its core functionality: being able to remember how much current has gone through it.
This technology is different from the first memristors made by HP in 2008, and has allowed Knowm to create functional memristors on silicon with a relatively high yield. Knowm is currently selling a ‘tier 3’ memristor part that only has two out of eight devices failing QC testing. A ‘tier 1’ part, with all eight memristors working, is available for $220 USD.
As for applications for this memristor, Knowm is using the technology in something they call Thermodynamic RAM, or kT-RAM. This is a small coprocessor that allows for faster machine learning than would be possible with a much more traditional computer architecture. The kT-RAM uses a binary tree layout with memristors serving as the links between nodes.
While it’s much too soon to say if a kT-RAM processor will be better or more efficient at performing machine learning tasks in real life, a machine learning coprocessor does have a faint echo of the machine learning silicon developed during the 80s AI renaissance. Thirty years ago, neural nets on a chip were created by a few companies around Boston, until someone realized these neural nets could be simulated on a desktop PC much more efficiently. The kT-RAM is somewhat novel and highly parallel, though, and with a new electronic component it could be just what is needed to push machine learning directly into silicon.
[Furrtek] is a person of odd pursuits, which mainly involve making old pieces of technology do strange things. That makes him a hero to us, and his latest project elevates this status: he built a device that turns the Nintendo Gameboy camera cartridge into a camcorder. His device replaces the Gameboy, capturing the images from the camera, displaying them on the screen and saving them to a micro SD card.
Before you throw out your cellphone or your 4K camcorder, bear in mind that the captured video is monochrome (with only four levels from white to black), at a resolution of 128 by 112 pixels and at about 14 frames per second. Sound is captured at 8192 Hz, producing the same buzzy, grainy sound that the Gameboy is famous for. Although it isn’t particularly practical, [Furrtek]’s build is extremely impressive, built around an NXP LPC1343 ARM Cortex-M3 microcontroller. The microcontroller repeatedly requests an image from the camera, receives it, then collects the frames and sound together to form the video and saves it to the micro SD card. As always, [Furrtek] has made all of the source code and other files available for anyone who wants to try it out.
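A back-of-the-envelope calculation shows why a small Cortex-M3 and a micro SD card can keep up with this stream. The storage assumptions below (packed 2-bit pixels, 8-bit audio samples) are mine rather than [Furrtek]’s, but the ballpark is clear:

```python
# Rough data rate for a Gameboy-camera camcorder, assuming packed 2-bit pixels
# and 8-bit audio samples (format assumptions are mine, not the project's).
width, height = 128, 112
bits_per_pixel = 2        # four grey levels
fps = 14
audio_rate = 8192         # samples per second, one byte each

video_bytes_per_sec = width * height * bits_per_pixel / 8 * fps
audio_bytes_per_sec = audio_rate
total = video_bytes_per_sec + audio_bytes_per_sec

print(f"video: {video_bytes_per_sec / 1024:.1f} KiB/s")
print(f"audio: {audio_bytes_per_sec / 1024:.1f} KiB/s")
print(f"total: {total / 1024:.1f} KiB/s")   # easily within micro SD write speeds
```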
If you ever find yourself swapping between a mix of audio inputs and outputs and get tired of plugging cables all the time, check out [winslomb]’s audio multiplexer with integrated amplifier. The device can take any one of four audio inputs, pass the signal through an amplifier, and send it to any one of four outputs.
The audio amplifier has a volume control, and the inputs and outputs can be selected via button presses. An Arduino Pro Mini takes care of switching the relays based on the button presses. On the input side, you can plug in devices like a phone, TV, digital audio player or a computer. The output can be fed to speakers, headsets or earphones.
At the center of the build lies a TI TPA152, a 75-mW stereo audio power amplifier. This audio op-amp is designed to drive 32 ohm loads, so performance might suffer when connecting it to lower impedance devices, but it seems to work fine for headphones and small computer speakers. The dual-gang potentiometer controls the volume, and the chip has a useful de-pop feature. The circuit is pretty much a copy of the reference design shown in the data sheet. Switching between inputs or outputs is handled by a bank of TLP172A solid state relays with MOSFET outputs, and it’s all tied together with a microcontroller, allowing for WiFi or BLE functionality to be added later.
[winslomb] laid out the design using Eagle, and he made a couple of footprint mistakes for the large capacitors and the opto-relays. (As he says, always double-check part footprints!) In the end, he solder-bridged them onto the board, but they should probably be fixed for the next revision.
[winslomb] built the switch as his capstone project on the way to a Master’s in EE, and although the device did function as required, there is still room for improvement. The GitHub repository contains all the hardware and software sources. Check out the video below, where he walks through a demo of the device in action. If you are looking for something simpler, here is a two-input, one-output audio switcher with USB control, and on the other end of the spectrum, here’s an audio switch that connects to the Internet.