Gertrude Elion, DNA Hacker

Some people become scientists because they have an insatiable sense of curiosity. For others, the interest is born of tragedy—they lose a loved one to disease and are driven to find a cure. In the case of Gertrude Elion, both were true. Gertrude was a brilliant and curious student who could have done anything given her aptitude. But when she lost her grandfather to cancer, her path became clear.

As a biochemist and pharmacologist at what is now GlaxoSmithKline, Gertrude worked with Dr. George Hitchings to create many different types of drugs by synthesizing compounds that mimicked natural nucleic acids, baiting pathogens into taking them up and killing them. Their unorthodox, designer-drug method led them to create the first successful anti-cancer drugs and won them the Nobel Prize in Physiology or Medicine in 1988.

Continue reading “Gertrude Elion, DNA Hacker”

Quantum Weirdness In Your Browser

I’ll be brutally honest. When I set out to write this post, I was going to talk about IBM’s Q Experience — the website where you can run real code on some older IBM quantum computing hardware. I am going to get to that — I promise — but that’s going to have to wait for another time. It turns out that quantum computing is mind-bending and — to make matters worse — there are a lot of oversimplifications floating around that make it even harder to understand than it ought to be. Because the IBM system matches up with real hardware, it has a lot more limitations than a simulator — think of programming a microcontroller with no debugger versus using a software emulator. You can zoom into any level of detail with the emulator, but with the bare micro you can toggle a line, use a scope, and hope things don’t go too far wrong.

So before we get to the real quantum hardware, I am going to show you a simulator written by [Craig Gidney]. He wrote it and promptly got a job with Google, who took over the project. Sort of. Even if you don’t like working in a browser, [Craig’s] simulator is easy enough, you don’t need an account, and a bookmark will save your work.

It isn’t the only available simulator, but as [Craig] immodestly (but correctly) points out, his simulator is much better than IBM’s. Starting with the simulator avoids tripping on the hardware limitations. For example, IBM’s devices are not fully connected, like a CPU where only some registers can get to other registers. In addition, real devices have to deal with noise, and their quantum states don’t last very long. If your algorithm takes too long, the quantum state will collapse and invalidate your results. These aren’t issues on a simulator. You can find a list of other simulators, but I’m focusing on Quirk.

What Quantum Computing Is

As I mentioned, there is a lot of misinformation about quantum computing (QC) floating around. I think part of it revolves around the word computing. If you are old enough to remember analog computers, QC is much more like that. You build “circuits” to create results. There’s also a lot of difficult math — mostly linear algebra — that I’m going to try to avoid as much as possible. If you can dig into the math, though, it is worth your time to do so. However, just like you can design a resonant circuit without solving differential equations about inductors, I think you can do QC without some of the bigger math by just using results. We’ll see how well that holds up in practice.
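
To make that concrete, here’s a minimal sketch of the underlying linear algebra in Python with NumPy. This is my own illustration, not anything lifted from Quirk or IBM: a qubit’s state is a two-element complex vector, a gate is a small unitary matrix, and “running a circuit” is just matrix multiplication.

```python
import numpy as np

# A qubit state is a 2-element complex vector; gates are 2x2 unitary matrices.
ket0 = np.array([1, 0], dtype=complex)        # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ ket0              # put the qubit into an equal superposition
probs = np.abs(state) ** 2    # Born rule: probabilities of measuring 0 or 1
print(probs)                  # -> [0.5 0.5]
```

That is essentially all a state-vector simulator like Quirk does under the hood, just with more qubits and a far nicer interface.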

Continue reading “Quantum Weirdness In Your Browser”

Quantum Computing Hardware Teardown

Although quantum computing is still in its infancy, enough progress is being made for it to look a little more promising than other “revolutionary” technologies, like fusion power or flying cars. IBM, Intel, and Google all either operate or are producing double-digit-qubit computers right now, and there are plans for even larger quantum computers in the future. With this amount of momentum, our quantum computing revolution seems almost certain.

There’s still a lot of work to be done, though, before all of our encryption is rendered moot by these new devices. Since nothing is easy (or intuitive) at the quantum level, progress has been considerably slower than it was during the transistor revolution of the previous century. These computers work because of two phenomena: superposition and entanglement. A quantum bit, or qubit, can exist in multiple states at once, rather than just the “zero” or “one” of a transistor. These states are difficult to determine, because a qubit is generally built from a single atom. Adding to the complexity, quantum computers must also make use of quantum entanglement, whereby a pair of particles is linked. This is the only way for any hardware to “observe” the state of the computer without disturbing the qubits themselves, and even then the observations are not yet especially accurate.
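
To see what that linkage looks like in the math, here’s a toy NumPy sketch of my own (it has nothing to do with any particular vendor’s hardware) that builds the classic Bell state by applying a Hadamard gate to one qubit and then a CNOT. The two qubits’ measurement outcomes come out perfectly correlated:

```python
import numpy as np

ket00 = np.array([1, 0, 0, 0], dtype=complex)     # two qubits, both |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                    # flips qubit 1 when qubit 0 is 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = CNOT @ np.kron(H, I) @ ket00              # Bell state (|00> + |11>)/sqrt(2)
probs = np.abs(state) ** 2
print(dict(zip(["00", "01", "10", "11"], probs.round(3))))
# -> {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}: measure one qubit and you
#    instantly know the other, even though neither outcome was determined.
```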

There are some other challenges with the hardware as well. All quantum computers that exist today must be cooled to a temperature very close to absolute zero. Whether that is to reduce thermal noise, as with universal quantum computers based on ion traps or other technologies, or to take advantage of other interesting characteristics of superconductivity, as the D-Wave computers do, all of them must be held below a critical temperature. A further challenge is that even at these low temperatures, the qubits still interact with each other and with their read/write devices in unpredictable ways, and those interactions only get more unpredictable as the number of qubits scales up.

So, once the physics and the refrigeration are sorted out, let’s take a look at how a few of the quantum computing technologies actually manipulate these quantum curiosities to come up with working, programmable computers.

Continue reading “Quantum Computing Hardware Teardown”

Space Escape: Flying A Chair To Lunar Orbit

In the coming decades, mankind will walk on the moon once again. Right now, plans are being formulated for space stations orbiting around Lagrange points, surveys of lava tubes are being conducted, and slowly but surely the hardware that will become a small scientific outpost on our closest celestial neighbor is being planned.

This has all happened before, of course. In the early days of the Apollo program, there were plans to launch two Saturn V rockets for every moon landing: one topped with a command module and three astronauts, the other containing an unmanned ‘LM Truck’. This second vehicle would land on the moon with all the supplies and shelter for a 14-day mission. There would be a pressurized lunar rover weighing thousands of pounds. This wouldn’t exactly be a lunar colony; instead, it would be more like a small cabin in the Arctic used as a scientific outpost. Astronauts and scientists would land, spend two weeks researching and exploring, and return to Earth with hundreds of pounds of samples.

With this, as with all Apollo landings, came a risk. What would happen if the ascent engine didn’t light? Apart from a beautiful speech written by William Safire, there was nothing concrete for astronauts consigned to the deepest of the deep. Later in the Apollo program, there was a plan for real hardware to bring stranded astronauts home. This was the Lunar Escape System (LESS), basically two chairs mounted to a rocket engine.

While the LESS was never built, several studies were completed in late 1970 by North American Rockwell detailing the hardware that would return two astronauts from the surface of the moon. It involved siphoning fuel from a stricken Lunar Module, flying to orbit with no computer or really any instrumentation at all, and performing a rendezvous with an orbiting Command Module in less than one lunar orbit.
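
For a sense of scale, here’s a rough back-of-the-envelope calculation in Python using standard values for the Moon. This is my own ballpark, not a figure from the Rockwell studies, but it shows roughly how much velocity any lunar ascent vehicle, flying chairs included, has to gain just to reach a low parking orbit:

```python
import math

GM_MOON = 4.9048e12   # m^3/s^2, the Moon's standard gravitational parameter
R_MOON = 1.7374e6     # m, mean lunar radius

# Circular orbital velocity at an assumed 100 km parking orbit
r = R_MOON + 100e3
v_circ = math.sqrt(GM_MOON / r)
print(f"~{v_circ:.0f} m/s")   # roughly 1630 m/s, before gravity and steering losses
```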

Continue reading “Space Escape: Flying A Chair To Lunar Orbit”

Know Your Video Waveform

When you acquired your first oscilloscope, what were the first waveforms you had a look at with it? The calibration output, and maybe your signal generator. Then if you are like me, you probably went hunting round your bench to find a more interesting waveform or two. In my case that led me to a TV tuner and IF strip, and my first glimpse of a video signal.

An analogue video signal may be a little less ubiquitous in these days of LCD screens and HDMI connectors, but it remains a fascinating subject and one whose intricacies are still worth knowing. Perhaps your desktop computer no longer drives a composite monitor, but a video signal is still a handy way to add a display to many low-powered microcontroller boards. When you see Arduinos and ESP8266s producing colour composite video on hardware never intended for the purpose, you may begin to understand why an in-depth knowledge of a video waveform can be useful to have.

The purpose of a video signal is to convey both the picture information in the form of luminance and chrominance (light & dark, and colour), and all the information required to keep the display in complete synchronisation with the source. It must do this with accurate and consistent timing, and because it is a technology with roots in the early 20th century, all the information it contains must be retrievable with the consumer electronic components of that time.

We’ll now take a look at the waveform, and in particular its timing, in detail, and try to convey some of its ways. You will be aware that there are different TV systems such as PAL and NTSC, each with its own tightly-defined timings; however, for most of this article we will treat all systems as more-or-less identical, because they work in a sufficiently similar manner.
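
As a first taste of that timing, here’s a short Python snippet with the nominal horizontal timing figures for 625-line PAL. NTSC’s numbers differ slightly, and the real standards allow a tolerance on each of these values:

```python
# Nominal 625-line PAL horizontal timing, in microseconds.
LINE = 64.0         # one complete scan line: 1 / 15.625 kHz
FRONT_PORCH = 1.65  # blanking before the sync pulse
SYNC = 4.7          # horizontal sync pulse, held below black level
BACK_PORCH = 5.7    # blanking after sync; also carries the colour burst

active = LINE - (FRONT_PORCH + SYNC + BACK_PORCH)
print(f"active picture time per line: {active:.2f} us")  # ~51.95 us
```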

Continue reading “Know Your Video Waveform”

Neural Networking: Robots Learning From Video

Humans are very good at watching others and imitating what they do. Show someone a video of flipping a switch to turn on a CNC machine and after a single viewing they’ll be able to do it themselves. But can a robot do the same?

Bear in mind that we want the demonstration video to be of a human arm and hand flipping the switch. When the robot does it, the camera that is its eye will be seeing its robot arm and gripper. So somehow it’ll have to know that its robot parts are equivalent to the human parts in the demonstration video. Oh, and the switch in the demonstration video may be a different model and make, and the CNC machine may be a different one, though we’ll at least put the robot within reach of its switch.

Sound difficult?

Researchers from Google Brain and the University of Southern California have done it. In their paper describing how, they talk about a few different experiments, but we’ll focus on just one: getting a robot to imitate pouring a liquid from a container into a cup.
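
To give a flavor of the general idea, here’s a deliberately simplified toy in Python, and emphatically not the architecture from the paper itself: imitation-from-video methods of this kind learn a reward signal by comparing visual features of the robot’s camera view against the demonstration, so the robot is rewarded for making the scene look like the demo rather than for copying joint angles. The embed() function below is a hypothetical stand-in for a trained feature network.

```python
import numpy as np

def embed(frame):
    # Hypothetical placeholder for a learned visual feature network; the real
    # thing is trained so that human and robot arms map to similar features.
    return frame.reshape(-1) / (np.linalg.norm(frame) + 1e-8)

def imitation_reward(robot_frame, demo_frame):
    # Reward the robot for making its camera view resemble the demonstration
    # at the same point in time, whatever kind of arm is doing the pouring.
    return -np.linalg.norm(embed(robot_frame) - embed(demo_frame))

# Toy usage with random "images" standing in for camera frames
demo_frame = np.random.rand(64, 64, 3)
robot_frame = np.random.rand(64, 64, 3)
print(imitation_reward(robot_frame, demo_frame))
```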

Continue reading “Neural Networking: Robots Learning From Video”

Confessions Of A Reformed Frequency Standard Nut

Do you remember your first instrument, the first device you used to measure something? Perhaps it was a ruler at primary school, and you were taught to see distance in terms of centimetres or inches. Before too long you learned that these units are only useful for the roughest of jobs, and graduated to millimetres, or sixteenths of an inch. Eventually, as you grew older, you would have been introduced to the Vernier caliper and the micrometer screw gauge, and suddenly fractions of a millimetre, or thousandths of an inch, became your currency. There is a seduction to measurement, something that draws you in until it becomes an obsession.

Every field has its obsessives, and maybe there are bakers seeking the perfect cup of flour somewhere out there, but those in our community will probably focus on quantities like time and frequency. You will know them by their benches surrounded by frequency standards and atomic clocks, and by their constant talk of parts per billion and of calibration. I can speak with authority on this matter, for I used to be one of them in a small way; I am a reformed frequency standard nut.

Continue reading “Confessions Of A Reformed Frequency Standard Nut”