It is not often that you look for one of your heroes on the Internet and by chance encounter another from a completely different field. But if you are a fan of the inimitable silent movie star [Buster Keaton] as well as being the kind of person who reads Hackaday then that could have happened to you just as it did here.
Our subject today is a 1957 episode of CBS’s TV game show I’ve Got a Secret, in which [Keaton] judges a pie-eating contest and is preceded first by a young man with a penchant for snakes and then rather unexpectedly by a true giant of twentieth century technology.
[Philo T Farnsworth] was a prolific engineer who is probably best known as the inventor of electronic television, but whose work touched numerous other fields. Surprisingly this short segment on an entertainment show was his only appearance on the medium to which his invention helped give birth. In it he baffles the panel, who fail to guess his claim to fame, before discussing his inventions for a few minutes. He is very self-effacing about his achievement, making the point that the development of television had been a cumulative effort born of many contributors. He then goes on to discuss the future of television, and talks about 2000-line high-definition TV with a reduced transmission bandwidth, and TV sets like picture frames. All of which look very familiar to us nearly sixty years later in the early 21st century.
The full show is below the break, though [Farnsworth]’s segment is only from 13:24 to 21:24. It’s very much a show of its time with its cigarette product placement and United Airlines boasting about their piston-engined DC-7 fleet, but it’s entertaining enough.
As technology advances, finding the culprit in a malfunctioning device has become somewhat more difficult. As an example, troubleshooting an AM radio is pretty straightforward. There are two basic strategies. First, you can inject a signal at successive stages until you can hear it, then work backwards to find the stage that is bad. The other way is to trace a signal using a signal tracer or an oscilloscope. When the signal is gone, you’ve found the bad stage. Of course, you still need to figure out what’s wrong with the stage, but that’s usually one or two transistors (or tubes) and a handful of components.
A common signal injector was often a square wave generator that would generate audio frequencies and radio frequency harmonics. It was common to inject at the volume control (easy to find) to determine if the problem was in the RF or audio sections first. If you heard a buzz, you worked backwards into the RF stages. No buzz indicated an audio section problem.
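The inject-and-work-backwards procedure is essentially a linear search for the first silent stage. A minimal sketch of that logic in Python (the stage names and the simulated fault are invented for the example, not taken from any particular radio):

```python
def find_bad_stage(stages, inject_and_listen):
    """Work backwards from the speaker: inject a test signal at each stage's
    input and return the first stage (nearest the antenna) that stays silent."""
    # stages[0] is nearest the antenna, stages[-1] drives the speaker.
    for i in range(len(stages) - 1, -1, -1):
        if not inject_and_listen(stages[i]):
            return stages[i]      # injection here was inaudible: fault found
    return None                   # every injection was audible: radio is fine

# Toy radio: the IF amplifier is dead, so injecting at it, or anywhere
# before it, never reaches the speaker.
stages = ["RF amp", "mixer", "IF amp", "detector", "audio amp"]
dead = "IF amp"

def inject_and_listen(stage):
    # Sound reaches the speaker only if every stage from here onward works.
    idx = stages.index(stage)
    return dead not in stages[idx:]

print(find_bad_stage(stages, inject_and_listen))  # → IF amp
```

The same search works with a tracer instead of an injector; you just walk forward from the antenna and stop where the signal disappears.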
A signal tracer was nothing more than an audio amplifier with a diode demodulator. Starting at the volume control was still a good idea. If you heard radio stations through the signal tracer, the RF section was fine. Television knocked radio off its pedestal as the primary form of information and entertainment in most households, and thus the TV repair industry was created.
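The diode-plus-filter demodulator at the heart of such a tracer is easy to model: rectify the RF, then low-pass filter to recover the audio envelope. A minimal sketch in Python (all signal parameters here are illustrative, not from the article):

```python
import math

def am_signal(t, fc=455e3, fm=1e3, depth=0.5):
    """AM waveform: a carrier at fc modulated by a tone at fm."""
    return (1 + depth * math.sin(2 * math.pi * fm * t)) * math.sin(2 * math.pi * fc * t)

def envelope_detect(samples, dt, rc=50e-6):
    """Half-wave rectifier followed by a simple one-pole RC low-pass filter."""
    alpha = dt / (rc + dt)          # one-pole IIR coefficient
    out, y = [], 0.0
    for s in samples:
        rectified = max(s, 0.0)     # the diode passes only positive half-cycles
        y += alpha * (rectified - y)
        out.append(y)
    return out

fs = 10e6                            # 10 MHz sample rate for the simulation
t = [i / fs for i in range(20000)]   # 2 ms of signal
audio = envelope_detect([am_signal(x) for x in t], 1 / fs)
```

The `audio` list rises and falls at the 1 kHz modulation rate, which is exactly what the tracer's amplifier turns into sound.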
The ESP8266 is well known as an incredibly small and cheap WiFi module. But the silicon behind that functionality is very powerful, far beyond its intended purpose. I’ve been hacking different uses for the board and my most recent adventure involves generating color video from the chip. This generated video may be wired to your TV, or you can broadcast it over the air!
I’ve been tinkering with NTSC, the North American video standard that has fairly recently been superseded by digital standards like ATSC. Originally I explored pumping out NTSC with AVRs, which led to an entire let’s learn, let’s code series. But for a while, this was on the back-burner, until I decided to see how fast I could run the ESP8266’s I2S bus (a glorified shift register) and the answer was 80 MHz. This is much faster than I expected. Faster than the 1.41 MHz used for audio (its intended purpose), the 2.35 MHz used for controlling WS2812B LEDs, or the 4 MHz used to hopefully operate a RepRap. It occasionally glitches at 80 MHz, however, it still works surprisingly well!
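Some quick arithmetic shows why 80 MHz is such a comfortable rate for NTSC: each 63.6 µs scanline works out to roughly 5,084 bit periods, with over 22 bits per color subcarrier cycle. The numbers below come from the NTSC standard itself, not from the project's source code:

```python
# Back-of-the-envelope NTSC timing at an 80 MHz I2S bit clock.
BIT_CLOCK  = 80e6          # I2S shift rate in Hz
LINE_RATE  = 15734.264     # NTSC horizontal line rate in Hz
COLORBURST = 3579545.0     # NTSC color subcarrier frequency in Hz

bits_per_line = BIT_CLOCK / LINE_RATE          # ≈ 5084 bit periods per scanline
bits_per_burst_cycle = BIT_CLOCK / COLORBURST  # ≈ 22.3 bits per subcarrier cycle
lines_per_frame = 525                          # interlaced, two fields
frame_bits = bits_per_line * lines_per_frame   # ≈ 2.67 Mbit per full frame

print(round(bits_per_line))         # → 5084
print(round(bits_per_burst_cycle))  # → 22
```

That last figure also hints at the memory pressure: a full frame of 1-bit video at this rate is around 330 KB, far more RAM than the ESP8266 has, which is why streaming the bitstream out of small DMA buffers matters so much.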
The coolest part of using the chip’s I2S bus is the versatile DMA engine connected to it. Data blocks can be chained together to seamlessly shift the data out, and interrupts can be generated upon a block’s completion to fill it in with new data. This allows the creation of a software defined bitstream in an interrupt.
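That refill pattern can be modeled in software: a ring of buffers is shifted out in order, and a completion callback refills each block just after the engine moves past it. A toy Python model of the scheme (buffer count and size are made up for illustration; the real hardware uses linked DMA descriptors):

```python
class DmaRing:
    """Toy model of chained DMA blocks with a completion interrupt."""
    def __init__(self, n_blocks, block_len, refill):
        self.blocks = [bytearray(block_len) for _ in range(n_blocks)]
        self.refill = refill          # callback: fills a block with fresh data
        self.current = 0
        for b in self.blocks:         # prime every block before starting
            self.refill(b)

    def shift_one_block(self, output):
        """Hardware 'sends' the current block, then raises the interrupt."""
        output.extend(self.blocks[self.current])
        self.refill(self.blocks[self.current])   # ISR refills the finished block
        self.current = (self.current + 1) % len(self.blocks)

# Fill each block with an incrementing value, like successive scanlines.
counter = 0
def fill_with_line(block):
    global counter
    block[:] = bytes([counter % 256]) * len(block)
    counter += 1

sent = bytearray()
ring = DmaRing(n_blocks=2, block_len=4, refill=fill_with_line)
for _ in range(4):
    ring.shift_one_block(sent)
```

Because the interrupt only ever touches the block the hardware just finished, the output stream is seamless even though only two small buffers exist, which is the whole trick behind generating a software-defined bitstream on a chip with limited RAM.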
Why NTSC? If I lived in Europe, it would have been PAL. The question you’re probably thinking is: “Why a dead standard?” And there are really three reasons.
Showing any sort of ‘hacking’ on either the big screen or the small often ends in complete, abject failure. You only need to look at Hackers with its rollerblading PowerBooks, Independence Day where the aliens are also inexplicably using PowerBooks, or even the likes of Lawnmower Man with a VR sex scene we keep waiting for Oculus to introduce. By design, Mr Robot, a series that ended its first season on USA a month ago, bucks this trend. It does depressed, hoodie-wearing, opioid-dependent hackers right, while still managing to incorporate some interesting tidbits from the world of people who call themselves hackers.
In episode 0 of Mr Robot, we’re introduced to our hiro protagonist [Elliot], played by [Rami Malek], a tech at the security firm AllSafe. We are also introduced to the show’s Macbeth, [Tyrell Wellick], played by [Martin Wallström]. When these characters are introduced to each other, [Tyrell] notices [Elliot] is using the Gnome desktop on his work computer, and [Tyrell] says he’s, “actually on KDE myself. I know [Gnome] is supposed to be better, but you know what they say, old habits, they die hard.”
While this short exchange would appear to most as two techies talking shop, this is a scene with a surprisingly deep interpretation. Back in the 90s, when I didn’t care if kids stayed off my lawn or not, there was a great desktop environment war in the land of Linux. KDE, the knights of GNU claimed, was not truly free, and this resulted in the creation of Gnome.
Subtle, yes, but in one short interaction between [Elliot] and [Tyrell], we see exactly where each is coming from. [Elliot] stands for freedom of software and of mind, while [Tyrell] is simply toeing the company line. It’s been fifteen years since message boards blew up over the Free Software Foundation’s concerns over KDE, but the sentiment is still there.
There’s far more to a hacker ethos than a preferred Linux desktop environment. Hacking is everywhere, and this also includes biohacking. In the case of one Mr Robot character, this means genetic engineering.
In one episode of Mr Robot, the character Romero temporarily steps away from the keyboard and turns his mind to genetics. He “…figured out how to insert THC’s genetic information code into yeast cells.” Purely from a legal standpoint, this is an interesting situation; weed is illegal, yeast is not, and the possibilities for production are enormous. Yeast only requires simple sugars to divide and grow in a test tube, while marijuana requires a lot of resources and an experienced staff to produce a good crop.
The promise of simply genetically modifying yeast to produce THC is intriguing; a successful yeast-based grow room could outproduce any plant-based operation, with the only input being sugar. Alas, the reality of the situation isn’t quite that simple. Researchers at Hyasynth Bio have only engineered yeast to turn certain chemical precursors into THC. Making THC from yeast isn’t yet as simple as home brewing an IPA, but it’s getting close, and a great example of how Mr Robot is tapping into hacking, both new and old.
Why Aren’t We Arguing More About This?
The more we ruminate on this show, the more there is to enjoy about it. It’s the subtle background that’s the most fun; the ceiling of the chapel as it were. We’re thinking of turning out a series of posts that works through all the little delights that you might have missed. For those who watched and love the series, what do you think? Perhaps there are other shows worthy of this hacker drill-down, but we haven’t found them yet.
Satellite television is prevalent in Europe and Northern Africa. This is delivered through a Set Top Box (STB) which uses a card reader to decode the scrambled satellite signals. You need to buy a card if you want to watch. But you know how people like to get something for nothing. This is being exploited by hackers and the result is millions of these Set Top Boxes just waiting to form into botnets.
This was the topic of [Sofiane Talmat’s] talk at DEF CON 23. He also gave this talk earlier in the week at BlackHat and has published his slides (PDF).
The hardware in these satellite receivers runs Linux. They use a card reader to pull in a Code Word (CW) which decodes the scrambled signal coming in from the satellite.
An entire black market has grown up around these Code Words. Instead of purchasing a valid card, people are installing plugins from the Internet which cause the system to phone into a server which will supply valid Code Words. This is known as “card sharing”.
On the user side of things this just works; the user watches TV for free. It might cause more crashes than normal, but the stock software is buggy anyway so this isn’t a major regression. The problem is that now these people have exposed a network-connected Linux box to the Internet and installed non-verified code from disreputable sources to run on the thing.
[Sofiane] demonstrated how little you need to know about this system to create a botnet:
Build a plugin in C/C++
Host a card-sharing server
Botnet victims come to you (profit)
It is literally that easy. The toolchain to compile the STLinux binaries (gcc) is available in the Linux repos. The STB will look for a “bin” directory on a USB thumb drive at boot time, and any binary in that folder will be automatically installed. Since the user is getting free TV, they voluntarily install this malware.
Every now and then a remote control acts up. Maybe you are trying to change the channel on your television and it’s just not working. A quick way to determine if the remote control is still working is to use a cell phone camera to see if the IR LED is still lighting up. That can work sometimes but not always. [Rui] had this problem and he decided to build his own circuit to make it easier to tell if a remote control was having problems.
The circuit uses a Vishay V34836 infrared receiver to pick up the invisible signals that are sent from a remote control. A Microchip 12F683 processes the data and has two main output modes. If the remote control is receiving data continuously, then a green LED lights up to indicate that the remote is functioning properly. If some data is received but not in a continuous stream, then a yellow LED lights up instead. This indicates that the batteries on the remote need to be replaced.
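The green/yellow decision amounts to classifying the gaps between received IR bursts: a healthy remote delivers an unbroken stream, while a weak one drops out intermittently. A rough Python sketch of that logic (the 50 ms threshold is an assumption for illustration, not a value from [Rui]’s firmware):

```python
def classify_remote(burst_times_ms, gap_limit_ms=50):
    """Return 'green' for a continuous stream of IR bursts, 'yellow' for a
    broken one, and 'none' when no IR data was received at all."""
    if not burst_times_ms:
        return "none"
    gaps = [b - a for a, b in zip(burst_times_ms, burst_times_ms[1:])]
    if all(g <= gap_limit_ms for g in gaps):
        return "green"    # data received continuously: remote is healthy
    return "yellow"       # intermittent data: probably weak batteries

print(classify_remote([0, 20, 40, 60]))    # → green
print(classify_remote([0, 20, 200, 220]))  # → yellow
```

On the actual PIC the same idea would run against timestamps captured from the IR receiver's output pin rather than a prebuilt list.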
The circuit also includes a red LED as a power indicator as well as RS232 output of the actual received data. The PCB was cut using a milling machine. It’s glued to the top of a dual AAA battery holder, which provides plenty of current to run the circuit.
We all know what Computer-Generated Imagery (CGI) is nowadays. It’s almost impossible to get away from it in any television show or movie. It’s gotten so good, that sometimes it can be difficult to tell the difference between the real world and the computer generated world when they are mixed together on-screen. Of course, it wasn’t always like this. This 1982 clip from BBC’s Tomorrow’s World shows what the wonders of CGI were capable of in a simpler time.
In the earliest days of CGI, digital computers weren’t even really a thing. [John Whitney] was an American animator and is widely considered to be the father of computer animation. In the 1940s, he and his brother [James] started to experiment with what they called “abstract animation”. They pieced together old analog computers and servos to make their own devices that were capable of controlling the motion of lights and lit objects. While this process may be a far cry from the CGI of today, it is still animation performed by a computer. One of [Whitney’s] best known works is the opening title sequence to [Alfred Hitchcock’s] 1958 film, Vertigo.
Later, in 1973, Westworld became the very first feature film to use CGI. The film was a science fiction western-thriller about amusement park robots that become evil. The studio wanted footage of the robot’s “computer vision” but they would need an expert to get the job done right. They ultimately hired [John Whitney’s] son, [John Whitney Jr] to lead the project. The process first required color separating each frame of the 70mm film because [John Jr] did not have a color scanner. He then used a computer to digitally modify each image to create what we would now recognize as a “pixelated” effect. The computer processing took approximately eight hours for every ten seconds of footage.