Mechanisms: Ode To The Zipper

Look around yourself right now and chances are pretty good that you’ll quickly lay eyes on a zipper. Zippers are incredibly commonplace artifacts, a commodity item produced by the mile that we rarely give a second thought to until they break or get stuck. But zippers are a fairly modern convenience, and the story of their invention shows that even the best ideas can be delayed by overly complicated designs and the lack of a practical method for manufacturing.

Try and Try Again

US Patent #504,307. One of the many iterations of Judson’s design. Like the others, it didn’t work.

Ideas for fasteners to replace buttons and laces have been kicking around since the mid-19th century. The first patent for a zipper-like fastener was issued to Elias Howe, inventor of the sewing machine. Though he was no slouch at engineering intricate mechanisms, Howe was never able to make his “Automatic, Continuous Clothing Closure” a workable product, and he shifted his inventive energies to other projects.

The world would wait another forty years for further development of a hookless fastener, when a Chicago-born inventor of little prior success named Whitcomb Judson began work on a “Clasp Locker or Unlocker.” Intended for the shoe and boot market, Judson’s device had all the recognizable parts of a modern zipper — rows of interlocking teeth with a slide mechanism to mesh and unmesh the two sides. The device made its debut at the Chicago World’s Fair in 1893 and was met with almost no commercial interest.

Judson went through several iterations of designs for his clasp locker, looking for the right combination of ideas that would result in a workable fastener that was easy enough to manufacture profitably. He lined up backers, formed a company, and marketed various versions of his improved products. But everything he tried seemed to have one or more serious drawbacks. When his fasteners were used in shoes, unexpected failure was a mere inconvenience. If a fastener on a lady’s dress opened unexpectedly, it could have been a social catastrophe. Coupled with a price tag that was exorbitantly high to cover the manual labor needed to assemble them, almost every version of Judson’s invention flopped.

Zipping up. Source: Dominique Toussaint (Wikipedia)

It would take another decade, a change of company name, a cross-country move, and the hiring of a bright young engineer before the world would have what we would recognize as the first modern zipper. Judson hired Gideon Sundback in 1901, and by 1913 he was head designer at the Fastener Manufacturing and Machine Company, newly relocated to Meadville, Pennsylvania after a stop in Hoboken, New Jersey. Sundback’s design called for rows of identical teeth with cups on the underside and nibs on the upper, set on fabric tapes. A slide with a Y-shaped channel bent the tapes to open the gap between teeth, allowing the cups to nest on the nibs and mesh the teeth together strongly.

Sundback’s design had significant advantages over any of Judson’s attempts. First, it worked, and it was reliable enough to start quickly making inroads into fashionable apparel beyond its initial marketing toward more utilitarian products like tobacco pouches. Secondly, and perhaps more importantly, Sundback invented machinery that could make hundreds of feet of the fasteners in a day. This gave the invention an economy of scale that none of Judson’s fasteners could ever have achieved.

Putting Some Teeth into It

Continuous process for forming metal zippers. Source: How Products Are Made

The machinery that Sundback invented to make his “Separable Fastener” has been much improved since the early 1900s, but the current process still looks similar, at least for metal zippers. Stringers, which are the fabric tapes with teeth attached, are formed in a continuous process by a multi-step punching and crimping machine. For metal stringers, a coil of flat metal is fed into a punch and die to form hollow scoops. The strip is then punched again to form a Y-shape around the scoop and cut it free from the web. The legs of the Y straddle the edge of the fabric tape, and a set of dies then crimps the legs to the tape. A modern zipper machine can make stringers at a rate of 2000 teeth per minute.
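To put that 2,000 teeth per minute in perspective, a little back-of-the-envelope arithmetic shows how Sundback-style machinery achieves its economy of scale. The tooth pitch used here is an assumption — it varies with zipper gauge — but roughly ten teeth per inch is plausible for a medium metal zipper:

```python
# Back-of-the-envelope: how much stringer does 2000 teeth/minute produce?
# The teeth-per-inch pitch is an assumption; it varies with zipper gauge.
teeth_per_minute = 2000
teeth_per_inch = 10          # assumed medium-gauge pitch

inches_per_minute = teeth_per_minute / teeth_per_inch
feet_per_hour = inches_per_minute * 60 / 12

print(f"{feet_per_hour:.0f} feet of stringer per hour")
```

At that assumed pitch, a single machine turns out on the order of a thousand feet of stringer per hour — no wonder hand-assembled fasteners couldn’t compete.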

Plastic zippers are common these days, too, and manufacturing methods vary by zipper style. One method has the fabric tapes squeezed between the halves of a die while teeth are injection molded around the tape to form two parallel stringers. A sprue connecting the stringers at the teeth breaks free after molding, and the completed stringers are assembled later.

Zippers have come a long way since Sundback’s first successful design, with manufacturing improvements that have eliminated many of the manual operations once required. Specialized zippers have made it from the depths of the oceans to the surface of the Moon, and chances are pretty good that if we ever get to Mars, one way or another, zippers will go with us.

Gertrude Elion, DNA Hacker

Some people become scientists because they have an insatiable sense of curiosity. For others, the interest is born of tragedy—they lose a loved one to disease and are driven to find a cure. In the case of Gertrude Elion, both are true. Gertrude was a brilliant and curious student who could have done anything given her aptitude. But when she lost her grandfather to cancer, her path became clear.

As a biochemist and pharmacologist for what is now GlaxoSmithKline, Gertrude and Dr. George Hitchings created many different types of drugs by synthesizing natural nucleic compounds in order to bait pathogens and kill them. Their unorthodox, designer drug method led them to create the first successful anti-cancer drugs and won them a Nobel Prize in 1988.

Continue reading “Gertrude Elion, DNA Hacker”

Quantum Weirdness In Your Browser

I’ll be brutally honest. When I set out to write this post, I was going to talk about IBM’s Q Experience — the website where you can run real code on some older IBM quantum computing hardware. I am going to get to that — I promise — but that’s going to have to wait for another time. It turns out that quantum computing is mindbending and — to make matters worse — there are a lot of oversimplifications floating around that make it even harder to understand than it ought to be. Because the IBM system matches up with real hardware, it has a lot more limitations than a simulator — think of programming a microcontroller with no debugger versus using a software emulator. You can zoom into any level of detail with the emulator, but with the bare micro you can toggle a line, use a scope, and hope things don’t go too far wrong.

So before we get to the real quantum hardware, I am going to show you a simulator written by [Craig Gidney]. He wrote it and promptly got a job with Google, who took over the project. Sort of. Even if you don’t like working in a browser, [Craig’s] simulator is easy to use: you don’t need an account, and a bookmark will save your work.

It isn’t the only available simulator, but as [Craig] immodestly (but correctly) points out, his simulator is much better than IBM’s. Starting with the simulator avoids tripping on the hardware limitations. For example, IBM’s devices are not fully connected, like a CPU where only some registers can get to other registers. In addition, real devices have to deal with noise and with quantum states that don’t last very long. If your algorithm runs too slowly, the quantum state will collapse before it finishes and invalidate your results. These aren’t issues on a simulator. You can find a list of other simulators, but I’m focusing on Quirk.

What Quantum Computing Is

As I mentioned, there is a lot of misinformation about quantum computing (QC) floating around. I think part of it revolves around the word computing. If you are old enough to remember analog computers, QC is much more like that. You build “circuits” to create results. There’s also a lot of difficult math — mostly linear algebra — that I’m going to try to avoid as much as possible. If you can dig into the math, it is worth your time to do so. However, just like you can design a resonant circuit without solving differential equations about inductors, I think you can do QC without some of the bigger math by just using results. We’ll see how well that holds up in practice.
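To give a taste of the linear algebra lurking underneath, here’s a minimal sketch of what a simulator like Quirk does internally: a qubit is just a two-element complex vector, a gate is a matrix, and measurement probabilities come from squaring the amplitudes. This is my own illustration, not code from Quirk itself:

```python
import numpy as np

# A qubit's state is a 2-element complex vector; |0> is [1, 0].
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate puts a definite state into an equal superposition.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0             # (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2   # Born rule: probability of measuring 0 or 1

print(probs)                 # ~[0.5, 0.5] — a fair coin flip
```

Dragging a Hadamard gate onto a wire in Quirk does exactly this matrix multiplication behind the scenes, just for many more qubits at once.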

Continue reading “Quantum Weirdness In Your Browser”

Quantum Computing Hardware Teardown

Although quantum computing is still in its infancy, enough progress is being made for it to look a little more promising than other “revolutionary” technologies, like fusion power or flying cars. IBM, Intel, and Google all either operate or are producing double-digit qubit computers right now, and there are plans for even larger quantum computers in the future. With this much momentum, our quantum computing revolution seems almost certain.

There’s still a lot of work to be done, though, before all of our encryption is rendered moot by these new devices. Since nothing is easy (or intuitive) at the quantum level, progress has been considerably slower than it was during the transistor revolution of the previous century. These computers work because of two phenomena: superposition and entanglement. A quantum bit, or qubit, works because unlike a transistor it can exist in multiple states at once, rather than just “zero” or “one”. These states are difficult to determine because in general a qubit is built using a single atom. Adding to the complexity, quantum computers must also exploit quantum entanglement, whereby a pair of particles is linked so that measuring one tells you something about the other. This is the only way for the hardware to “observe” the state of the computer without disturbing the qubits themselves — and even then, those observations are not yet especially accurate.
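Entanglement is easier to see in the math than in prose. The sketch below (my own illustration, assuming basic familiarity with state vectors) builds the canonical Bell state: after a Hadamard and a CNOT, the two qubits can only ever be measured as 00 or 11 — their outcomes are perfectly correlated, which is what makes entangled ancilla qubits useful for reading out a computation:

```python
import numpy as np

# Two-qubit states live in a 4-d space with basis |00>, |01>, |10>, |11>.
ket00 = np.zeros(4, dtype=complex)
ket00[0] = 1

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)

# CNOT flips the second qubit when the first qubit is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Hadamard on the first qubit, then CNOT: the Bell state (|00> + |11>)/sqrt(2).
bell = CNOT @ np.kron(H, I) @ ket00
probs = np.abs(bell) ** 2    # only |00> and |11> ever show up
```

Measuring the first qubit instantly tells you what the second will read, even though neither had a definite value beforehand.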

There are some other challenges with the hardware as well. All quantum computers that exist today must be cooled to a temperature very close to absolute zero in order to take advantage of superconductivity. Whether that cooling serves to reduce thermal noise, as in universal quantum computers based on ion traps and similar technologies, or to enable other interesting characteristics of superconductivity, as in the D-Wave computers, all of them must be held below a critical temperature. A further challenge is that even at these low temperatures, the qubits still interact with each other and their read/write devices in unpredictable ways that get more unpredictable as the number of qubits scales up.

So, once the physics and the refrigeration are sorted out, let’s take a look at how a few of the quantum computing technologies actually manipulate these quantum curiosities to come up with working, programmable computers. Continue reading “Quantum Computing Hardware Teardown”

Space Escape: Flying A Chair To Lunar Orbit

In the coming decades, mankind will walk on the moon once again. Right now, plans are being formulated for space stations orbiting around Lagrange points, surveys of lava tubes are being conducted, and slowly but surely plans are being formed to build the hardware that will become a small scientific outpost on our closest celestial neighbor.

This has all happened before, of course. In the early days of the Apollo program, there were plans to launch two Saturn V rockets for every moon landing, one topped with a command module and three astronauts, the other containing an unmanned ‘LM Truck’. This second vehicle would land on the moon with all the supplies and shelter for a 14-day mission. There would be a pressurized lunar rover weighing thousands of pounds. This wouldn’t exactly be a Lunar colony; instead, it would be more like a small cabin in the Arctic used as a scientific outpost. Astronauts and scientists would land, spend two weeks researching and exploring, and return to Earth with hundreds of pounds of samples.

With this, as with all Apollo landings, came a risk. What would happen if the ascent engine didn’t light? Apart from a beautiful speech written by William Safire, there was nothing concrete for astronauts consigned to the deepest of the deep. Later in the Apollo program, there was a plan for real hardware to bring stranded astronauts home. This was the Lunar Escape System (LESS), basically two chairs mounted to a rocket engine.

While the LESS was never built, several studies were completed in late 1970 by North American Rockwell detailing the hardware that would return two astronauts from the surface of the moon. It involved siphoning fuel from a stricken Lunar Module, flying to orbit with no computer or really any instrumentation at all, and performing a rendezvous with an orbiting Command Module in less than one Lunar orbit.

Continue reading “Space Escape: Flying A Chair To Lunar Orbit”

Know Your Video Waveform

When you acquired your first oscilloscope, what were the first waveforms you had a look at with it? The calibration output, and maybe your signal generator. Then if you are like me, you probably went hunting round your bench to find a more interesting waveform or two. In my case that led me to a TV tuner and IF strip, and my first glimpse of a video signal.

An analogue video signal may be something that is a little less ubiquitous in these days of LCD screens and HDMI connectors, but it remains a fascinating subject and one whose intricacies are still worthwhile knowing. Perhaps your desktop computer no longer drives a composite monitor, but a video signal is still a handy way to add a display to many low-powered microcontroller boards. When you see Arduinos and ESP8266s producing colour composite video on hardware never intended for the purpose you may begin to understand why an in-depth knowledge of a video waveform can be useful to have.

The purpose of a video signal is to convey both the picture information in the form of luminance and chrominance (light & dark, and colour) and all the information required to keep the display in complete synchronisation with the source. It must do this with accurate and consistent timing, and because it is a technology with roots in the early 20th century, all the information it contains must be retrievable with the consumer electronic components of that time.

We’ll now take a look at the waveform and in particular its timing in detail, and try to convey some of its ways. You will be aware that there are different TV systems such as PAL and NTSC which each have their own tightly-defined timings, however for most of this article we will be treating all systems as more-or-less identical because they work in a sufficiently similar manner.
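As a concrete anchor before diving into the waveform itself, the fundamental line timings of the two big systems fall straight out of their frame rates and line counts. This little sketch (my own arithmetic, using the nominal figures; the standards documents pin the exact values down) shows why a scan line lasts roughly 64 µs in both systems:

```python
# Nominal line-timing arithmetic for the two main analogue TV systems.
# Real standards specify these values exactly; this is the back-of-envelope view.

def line_timing(lines_per_frame, frames_per_second):
    """Return (line rate in Hz, line period in microseconds)."""
    line_rate = lines_per_frame * frames_per_second
    return line_rate, 1e6 / line_rate

pal_rate, pal_period = line_timing(625, 25)            # 15625 Hz, 64.00 us
ntsc_rate, ntsc_period = line_timing(525, 30 / 1.001)  # ~15734 Hz, ~63.56 us

print(f"PAL : {pal_rate:.0f} Hz, {pal_period:.2f} us per line")
print(f"NTSC: {ntsc_rate:.0f} Hz, {ntsc_period:.2f} us per line")
```

Despite PAL and NTSC differing in line count and frame rate, the per-line budget a display has to work with is nearly identical — which is part of why we can treat the systems as more-or-less the same for this discussion.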

Continue reading “Know Your Video Waveform”

Neural Networking: Robots Learning From Video

Humans are very good at watching others and imitating what they do. Show someone a video of flipping a switch to turn on a CNC machine and after a single viewing they’ll be able to do it themselves. But can a robot do the same?

Bear in mind that we want the demonstration video to be of a human arm and hand flipping the switch. When the robot does it, the camera that is its eye will be seeing its robot arm and gripper. So somehow it’ll have to know that its robot parts are equivalent to the human parts in the demonstration video. Oh, and the switch in the demonstration video may be a different model and make, and the CNC machine may be a different one, though we’ll at least put the robot within reach of its switch.

Sound difficult?

Researchers from Google Brain and the University of Southern California have done it. In their paper describing how, they talk about a few different experiments but we’ll focus on just one, getting a robot to imitate pouring a liquid from a container into a cup.

Continue reading “Neural Networking: Robots Learning From Video”