How Duck Tape Became Famous

If you hack things in the real world, you probably have one or more rolls of duck tape. Setting aside the cute brand name, many people think duck tape is a malapropism, but in truth it refers to the type of cloth traditionally used in our favorite tape: cotton duck. However, as we’ll see, it’s not entirely wrong to call it duct tape either. Whatever you call it, it’s a cloth material with an adhesive backing, coated with something like polyethylene.

Actually, the original duck tape wasn’t adhesive at all. It was simply strips of cotton duck used for several purposes, including making shoes and wrapping steel cables like the ones placed on the Manhattan Bridge in 1902. By 1910, tape made with adhesive on one side and soaked in rubber found use in hospitals for binding wounds. In May 1930, Popular Mechanics advised melting rubber from an old tire and adding rosin to create a compound for coating cotton tape, among other things.

Continue reading “How Duck Tape Became Famous”

a modern car dipped into a chemical bath for electrodeposition adding a phosphate layer

Watching Paint Dry For Over 100 Years

A Model T Ford customer could famously get their car “in any color he wants, so long as it’s black.” Thus begins [edconway]’s recounting of the incremental improvements in car paint and its surprising role in mass production, marketing, and longevity of automobiles.

In it, we learn that the aforementioned black paint from Ford had so much asphalt in it that black was the only color that would work. Not to go down a This Is Spinal Tap rabbit hole, but there were several kinds of black on those Model Ts; over 30 of them were used for various purposes, and the paints also dried in different ways. While assembly took only 12 hours, the paint took days, even weeks, to dry, backing up production and begging for innovation. [edconway] then fast-forwards to an era of “conspicuous consumption and ‘planned obsolescence’” with DuPont’s invention of Duco, which brought color to the world of automobiles.

edconway graph of paint drying time by year

See the article for the real story of advances in paint technology and drying time. Paint application technology has also steadily improved over the years, so we recommend diving in to get the century-long story.

Oscillon by Ben F. Laposky

Early Computer Art From The 1950s And 1960s

Modern-day computer artist [Amy Goodchild] surveys the history of early computer art from the 1950s and 1960s. With so much attention presently focused on AI-generated artwork, we should remember that computers have been used to create art for many decades.

Our story begins in 1950, when Ben Laposky started using long-exposure photography of cathode ray oscilloscopes to record moving signals generated by electronic circuits. In 1953, Gordon Pask developed the electromechanical MusiColor system, which empowered musicians to control visual elements, including lights, patterns, and motorized color wheels, using sound from their instruments. The musicians could interact with the system in real-time audio-visual jam sessions.

In the early 1960s, BEFLIX (derived from Bell Flix) was developed by Ken Knowlton at Bell Labs as a programming language for generating video animations. The Graphic 1 computer, featuring a light pen input device, was also developed at Bell Labs. Around the same time, IBM introduced novel visualization technology in the IBM 2250 graphics display for its System/360 computer. The 1967 IBM promotional film Frontiers in Computer Graphics demonstrates the capabilities of the system.

Continue reading “Early Computer Art From The 1950s And 1960s”

Scorched Moon: Secret Project A119

In today’s world, it is hard to realize how frightened Americans were at the news of Sputnik orbiting the Earth. Part of it was a fear of what a rival nation could do if they could fly over your country with impunity. Part of it was simply fear generated by propaganda. While America won the race to the moon, that wasn’t clear in the 1950s. The Soviet Union was ahead in the ability to deliver bombs using planes and missiles. They launched Sputnik on a modified ICBM, while American attempts to do the same failed spectacularly. The Air Force wanted ideas about how to respond to Sputnik, and one of the most disturbing ones was project A119, a project we were reminded of recently by a BBC post.

In all fairness, the Soviets had an almost identical plan, code-named E4. Fortunately, both sides eventually realized these plans weren’t a good idea. Oh, did we forget to mention that A119 and E4 were plans to detonate a nuclear device on the moon?

Continue reading “Scorched Moon: Secret Project A119”

A History Of NASA Supercomputers, Among Others

The History Guy on YouTube has posted an interesting video on the history of the supercomputer, with a specific focus on their use by NASA for the implementation of computational fluid dynamics (CFD) models of aeronautical assemblies.

The aero designers of the day were quickly finding out the limitations of the wind tunnel testing approach, especially for so-called transonic flow conditions. These occur when an object moving through a fluid (such as air, which can be modeled as one) produces regions of supersonic flow mixed in with subsonic flow, creating additional drag that severely impacts aircraft performance. Not accounting for these effects is not an option, hence the great industry interest in CFD modeling. But the governing equations (usually based around the Navier-Stokes system) are non-linear and extremely computationally intensive to solve.
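To get a feel for why these equations eat compute for breakfast, here is a minimal toy sketch (not anything like NASA's actual codes) of the 1-D viscous Burgers' equation, a heavily simplified relative of Navier-Stokes that keeps the troublesome nonlinear advection term. The grid size, time step, and viscosity are arbitrary illustrative values:

```python
import numpy as np

# Toy 1-D viscous Burgers' equation: u_t + u*u_x = nu*u_xx
# The u*u_x term is the nonlinearity; real CFD solves coupled 3-D
# versions of this on millions of cells, hence the supercomputers.
nx, nt = 200, 500            # grid points, time steps (toy values)
dx, dt, nu = 1.0 / nx, 1e-4, 0.01

x = np.linspace(0.0, 1.0, nx)
u = np.sin(2 * np.pi * x)    # initial velocity profile

for _ in range(nt):
    # central differences with periodic boundaries via np.roll
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (-u * u_x + nu * u_xx)

print(u.max())
```

Even this trivial case costs grid-points times time-steps worth of arithmetic per term; scale to three dimensions and realistic resolutions, and the work explodes by many orders of magnitude.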

Obviously, a certain Mr. Cray is a prominent player in this story, who, as the story goes, exhausted the financial tolerance of his employer, CDC, and subsequently formed Cray Research Inc, and the rest is (an interesting) history. Many Cray machines were instrumental in the development of the space program, and now adorn computing museums the world over. You simply haven’t lived until you’ve sipped your weak lemon drink whilst sitting on the ‘bench’ around an early Cray machine.

You see, supercomputers are a different beast from those machines mere mortals have access to, or at least the earlier ones were. The focus is on pure performance, ideally for floating-point computation, with cost far less of a concern than getting to the next computational milestone. The Cray-1, for example, is a 64-bit machine capable of 80 MIPS scalar performance (whilst eating over 100 kW of juice), with some very limited parallel processing ability.

While this was immensely faster than anything else available at the time, the modern approach to supercomputing is less about fancy processor design and more about massive parallelism across existing chips, with lots of local fast storage mixed in. Every hacker out there should experience these old machines if they can, because the tricks they used, and the lengths the designers went to in order to squeeze out every ounce of processing grunt, can be a real eye-opener.

Want to see what happens when you really push the boat out and use the whole wafer for parallel computation? Check out Cerebras. If your needs are somewhat less, but dabbling in parallel computing gets you all pumped, you could build a small array out of Pine64s. Finally, the story wouldn’t be complete without talking about the life and sad early demise of Seymour Cray.
Continue reading “A History Of NASA Supercomputers, Among Others”

Riding The Rails By Ebike

As most developed countries around the world continue to modernize their transportation infrastructure with passenger rail, countries in North America have been abandoning railroads for over a century now, assuming that just one more lane will finally solve their traffic problems. Essentially the only upside to the abandonment of railroads has been that it’s possible to build some unique vehicles to explore these tracks and the beautiful yet desolate areas they reach, and [Cam Engineering] is using an ebike to do that along the coast of central California.

Continue reading “Riding The Rails By Ebike”

Vintage Tektronix Virtual Graticule

Oscilloscopes are great for measuring the time and voltage information of a signal. Some old scopes don’t have much in the way of markings on the CRT, although eventually, we started seeing scales that allowed you to count squares easily. Early scopes had marks on the glass or plastic over the CRT, but as [Vintage TEK Museum] points out, this meant that for best accuracy, you had to look directly at the CRT. If you were at an angle horizontally or vertically, the position of the trace would appear to shift relative to the lines on the screen, a parallax error. You can see the effect in the video below.

The simple solution was to mark the graticule directly into the phosphor, which minimized the effect. Before that was possible, [Bob Anderson] invented a clever alternative, the virtual oscilloscope graticule, although Tektronix never produced any scopes using it.

The idea was to put the graticule on a semi-reflective mirror. Looking through the assembly, you would actually see the trace and the reflection of the graticule in the mirror. If the assembly is constructed properly, the reflected graticule appears in the same optical plane as the trace, so the two stay aligned no matter the viewing angle. At some angles, you can see both the front and reflected graticules.

According to the video, management was not impressed because someone other than [Anderson] showed a poor-quality prototype to them. By 1962, the graticule in the phosphor took over, and there was no need for [Anderson’s] clever invention.

These days, a graticule is just bits on the screen. Even if you roll your own.

Continue reading “Vintage Tektronix Virtual Graticule”