If the United States has a national architectural form, it is the skyscraper. The notion of building a tower to the heavens is as old as Genesis, but it took some brash 19th-century Americans to develop that fanciful idea into tangible, profitable buildings. Although we dressed up our early skyscrapers in Old World styles (the Met Life Tower as an Italian campanile, the Woolworth Building as a French Gothic cathedral), most foreigners agreed that the skyscraper suited only our misfit nation. For decades, Americans were alone in building them. Even those European modernists who dreamed of gleaming towers along Friedrichstraße and Boulevard de Sébastopol had to cross the Atlantic for a chance to act on their ambitions. By the start of World War II, 147 of the 150 tallest habitable buildings on the planet were located in the United States.
No building style better represented America’s industriousness, monomaniacal greed, disregard of tradition, and eagerness to attempt feats that more established cultures considered obscene. And while those indelicate traits prompted Americans to develop the skyscraper, it was our openness and multiculturalism that brought us our greatest skyscraper builder: a Bangladeshi Muslim immigrant named Fazlur Rahman Khan.
Khan was born on April 3rd, 1929 in Dhaka, Bangladesh (Dacca, British India at the time). His father, a mathematics instructor, cultivated young Fazlur’s interest in technical subjects and encouraged him to pursue a degree at Calcutta’s Bengal Engineering College. He excelled in his studies there and, after graduating, won a Fulbright Scholarship that brought him to the University of Illinois. In the United States, Khan studied structural engineering and engineering mechanics, earning two master’s degrees and a PhD in just three years. After a detour in Pakistan, Khan returned to the United States and was hired as an engineer in the Chicago office of Skidmore, Owings & Merrill (SOM), one of the most prominent architecture and engineering firms in the world.
Though he was born in a nation with no history of high-rise construction, Dr. Fazlur Rahman Khan had worked his way to a position from which he would revolutionize the field of structural engineering and build America’s proudest landmarks.
The pedagogical model of the integrated circuit goes something like this: take a silicon wafer, etch out a few wells, dope some of the silicon with phosphorus, mask part of the chip off, dope some more silicon with boron, and lay down some metal in between everything. That’s an extraordinarily basic model of how a modern semiconductor plant works, but it’s not terribly inaccurate. The natural conclusion after learning this is that chips are inherently three-dimensional devices. But the layers are exceedingly small, and the overall thickness of the active layers of a chip is thinner than a human hair. A bit of study and thought and you’ll realize the structure of an integrated circuit really isn’t three-dimensional.
Recently, rumors and educated guesses from silicon insiders have pointed toward truly three-dimensional chips as the future of the industry. These chips won’t be just a few layers thick like the example above: 100 or more layers of transistors will be crammed into a single piece of silicon. The reasons for this transition include shortening the distance signals must travel, reducing resistance (and therefore heat), and optimizing performance and power in a single design.
The ideas that are influencing the current generation of three-dimensional chips aren’t new; these concepts have been around since the beginnings of the semiconductor industry. What is new is how these devices will eventually make it to market, the challenges currently being faced at Intel and other semiconductor companies, and what it will mean for a generation of chips several years down the road.
I keep up with the trends in 3D printing reasonably well. The other day my friend mentioned that filament thickness sensing had been added to the latest version of the Marlin firmware. I had no idea what it was, but it certainly sounded cool. I had to find out more.
In industrial settings, filament is made by extruding molten plastic through a nozzle and pulling it at a controlled speed into a cooling bath. The nozzle for 2.85 mm filament and 1.75 mm filament is actually the same size; the filament is simply stretched more or less as it leaves the nozzle. By balancing the extrusion rate, the pulling speed, and the cooling, the machine can produce any size filament desired. Like any mechanical system, it needs constant adjustment to maintain that balance. This is usually done by measuring the filament with a laser after it has cooled, then feeding that information back into the system. The better filament manufacturers have multiple lasers and very fast feedback loops. Some of the best hold thickness to within ±0.04 mm between any two points on the filament; some of the worst have larger errors, on the order of ±0.10 mm. Because the plastic is fed into the extruder at a fixed linear speed, any variation in diameter becomes a variation in the volume of plastic coming out of the nozzle per second. With the best filament we see a 4.41% variation in the volume of plastic extruded; with the worst we start to see 10.51% or more.
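The key relationship behind those percentages is that the volume delivered scales with the square of the filament diameter, so a small diameter error roughly doubles as a volume error. A back-of-the-envelope sketch (the 1.75 mm nominal is the common standard; exactly how a ± tolerance maps to a volume percentage depends on how you count it, so treat these outputs as illustrative rather than a reproduction of the figures above):

```python
import math

def volume_per_mm(diameter_mm):
    """Volume (mm^3) of a 1 mm length of filament of the given diameter."""
    return math.pi * (diameter_mm / 2) ** 2

nominal = 1.75  # mm, the common filament standard

# Volume goes as diameter squared, so the percentage error in volume
# is roughly twice the percentage error in diameter.
for tolerance in (0.04, 0.10):  # illustrative diameter errors, in mm
    oversize = volume_per_mm(nominal + tolerance)
    error_pct = (oversize / volume_per_mm(nominal) - 1) * 100
    print(f"+{tolerance:.2f} mm -> {error_pct:.2f}% extra plastic per mm of filament")
```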
A printer is dumb. It works under the assumption that it is getting absolutely perfect filament, so when it gets 10.51% more plastic, it simply pushes it through and continues with its life. If the filament is off by enough, though, this can show up as a visible defect on the print, or in worse cases cause the print to fail through over- or under-extrusion.
So, what does a filament thickness sensor do to correct this issue? To start to understand, we need to look at how the software deals with the filament. When the slicer is compiling the G-code for a 3D print, it calculates the volume of plastic needed to deposit a bead of a certain width and height per mm of movement. That was a mouthful. For example, when a printer printing 0.2 mm layers moves 1 mm, it wants to put down a volume that’s 1.0 mm long × 0.4 mm wide × 0.2 mm high. The filament being pushed into the nozzle has a volume per mm determined by the diameter of the filament.
The volume out per mm of filament in is set by the filament’s cross-section: V = π(d/2)², where d is the filament diameter.

The equation we are trying to balance: E × π(d/2)² = length × width × height of the deposited bead, where E is the length of filament extruded.
Our goal is to integrate the thickness sensor into these functions and see what it is doing. This is a linear equation, so there’s nothing fancy here. The layer height and width are determined by settings, and the length of the move by the model geometry. These are fixed numbers, so we don’t care about them. That leaves the diameter of the filament and the length of filament extruded. As mentioned before, the filament is typically assumed to have a fixed diameter, so all the software has to calculate is the length of filament to extrude per mm of combined movement in X and Y so that our volumes match.
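That calculation is small enough to write out. A minimal sketch under the usual fixed-diameter assumption (the function name is mine; the numbers match the 0.2 mm layer, 0.4 mm bead, 1.75 mm filament example above):

```python
import math

def filament_length_per_mm(layer_height, line_width, filament_diameter):
    """Length of filament (mm) to extrude per mm of XY movement so that
    the volume fed in equals the volume of the deposited bead."""
    volume_needed = line_width * layer_height * 1.0       # mm^3 per mm of travel
    filament_area = math.pi * (filament_diameter / 2) ** 2  # cross-section, mm^2
    return volume_needed / filament_area

# 0.2 mm layers, 0.4 mm wide bead, 1.75 mm filament:
# only ~0.033 mm of filament per mm of nozzle travel
print(filament_length_per_mm(0.2, 0.4, 1.75))
```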
But we know that one of these variables is actually changing per millimeter as well: the filament diameter! So now we have a problem. If the filament diameter is changing all the time, our equation will never balance. To fix this we can add a multiplier to our equation. Since we have no control over the diameter of the filament, we can’t modify that value. However, if we know the measured diameter, and we know the value it’s supposed to be, we can change the length of filament extruded, because unlike the filament, we do have control over the stepper motor that drives the extruder. This value is called the extrusion multiplier, and determining it is what the thickness sensor is all about.
So all the filament sensor does is measure the filament’s current diameter. The firmware takes the expected diameter and divides it by the value just measured to get a simple ratio. It feeds that number back into the system as the extrusion multiplier and slows or speeds up the stepper motor as needed. Pretty simple.
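That feedback step is simple enough to sketch in a couple of lines (a toy illustration of the idea, not Marlin’s actual code; the function name is mine):

```python
def extrusion_multiplier(expected_diameter, measured_diameter):
    """Ratio fed back to the firmware: >1 speeds the extruder up for thin
    filament, <1 slows it down for thick filament.

    This is the simple diameter ratio described above; a correction that
    is exact in volume terms would square it, since volume goes as
    diameter squared."""
    return expected_diameter / measured_diameter

# A thin spot measured at 1.70 mm on nominally 1.75 mm filament:
# the multiplier comes out a bit over 1, so feed slightly faster.
print(extrusion_multiplier(1.75, 1.70))
```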
The ideal filament the printer thinks it is seeing.
The printer is unable to compensate for the variations.
By adjusting with the extrusion multiplier the printer is able to approximate perfect filament.
There are a few thickness sensors being toyed with right now. The first, as far as I can tell (let me know in the comments if I’m wrong), was by [flipper] on Thingiverse. He is on his third version now. The sensor works by casting a shadow of the filament onto an optical sensor as it passes by. The firmware then counts the pixels and works backwards to get the diameter. This value is sent to the Marlin firmware on the printer, which does the rest. As is usual and wonderful in the open source community, it wasn’t long before others started working on the problem too. [inoranate] improved on the idea by casting more shadows on the sensor. The technique is still brand new, but it will be interesting to see what benefits it reaps.
Now comes the next question: “Is it worth upgrading my printer with a thickness sensor?” If you typically run poor filament, or if you extrude your own, yes. The current sensors can only measure to ±0.02 mm, so with the best filament you won’t really see a difference, but with worse stuff you might. The latest firmware for the Lyman filament extruder, for making your own filament, also supports these sensors, letting you feed measurements back into your production system just like the industrial machines. All in all, a very interesting development in the world of 3D printers.
My DSL line downloads at 6 megabits per second; I just ran the test. This is over a pair of copper twisted wires, the same Plain Old Telephone Service (POTS) twisted pair that connected your grandmother’s phone to the rest of the world. In fact, if you still had that phone, you could connect it and use it today.
I can remember the old 110 bps acoustic coupler modems; maybe some of you can too. Do you remember upgrading to 300 bps? Wow! Triple the speed. Gradually the speed increased through 1200 to 2400, and then finally 56k. All over the same pair of wires. Now we feel shortchanged if we’re not getting multiple megabits from DSL over that same POTS line. How can we get such speeds over a system that still allows your grandmother’s phone to be connected and dialed? How did the engineers know these increased speeds were possible?
The answer lies back in 1948 with Dr. Claude Shannon, who wrote the seminal paper “A Mathematical Theory of Communication”. In that paper he laid the groundwork for information theory. Shannon is also recognized for applying Boolean algebra, developed by George Boole, to electrical circuits: he recognized that the switching circuits of his day, like today’s logic circuits, followed the rules of Boolean algebra. That was his master’s thesis, written in 1937.
Shannon’s Theory of Communications explains how much information you can send through a communications channel at a specified error rate. In summary, the theory says:
There is a maximum channel capacity, C;
If the rate of transmission, R, is less than C, information can be transferred with an arbitrarily small error probability using smart coding techniques;
Those coding techniques require intelligent encoding over longer blocks of signal data.
What the theory doesn’t provide is information on the smart coding techniques. The theory says you can do it, but not how.
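The capacity limit itself is concrete, though: for a channel with bandwidth B and signal-to-noise ratio S/N, the Shannon–Hartley theorem gives C = B·log₂(1 + S/N). A quick calculation for an idealized voice channel (the 3 kHz bandwidth and 30 dB SNR are typical textbook figures, not measurements of any particular line):

```python
import math

def channel_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley channel capacity in bits per second."""
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Idealized POTS voice channel: ~3 kHz bandwidth, ~30 dB SNR
print(f"{channel_capacity(3000, 30):.0f} bps")
```

The result lands in the neighborhood of 30 kbps, which is why dial-up modems topped out where they did; DSL reaches megabits over the same pair by using the much wider spectrum above the voice band.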
In this article I’m going to describe this work without getting into the mathematics of the derivations. In a later article I’ll discuss some of the smart coding techniques used to approach channel capacity. If you can handle the mathematics, here is the first part of the paper as published in the Bell System Technical Journal in July 1948, with the remainder published later that year. To walk through the system used to fit so much information on a twisted copper pair, keep reading.
If you read my first post about a simple CPLD do-it-yourself project, you may remember that I seriously whiffed when I made the footprint 1” wide, which was a bit too wide for common solderless breadboards. Since then I started over, fixed the width problem, and ended up with a module that looks decidedly… cuter.
To back up a little bit, a Complex Programmable Logic Device (CPLD) is a cool piece of hardware to have in your repertoire. It can be used to learn logic design or a high-level design language, or to replace obsolete functions or chips. But a CPLD needs a little bit of support infrastructure to become usable, and that’s what I’ll be walking you through here. So if you’re interested in learning about CPLDs, or just in designing boards for them, read on!
We live in a world transformed by our ability to manipulate the nuclei of atoms. Nuclear power plants provide abundant energy without polluting the air; on the other hand, thousands of nuclear warheads sit in multiple countries, ready to annihilate everything, even if not on purpose. There are countless other ways that humanity’s dive into nuclear chemistry has impacted the lives of people across the world, from medical imaging equipment to smoke detectors and even, surprisingly, to some of the food that we eat.
After World War 2, there was a push to find peaceful uses for atomic energy. After all, dropping two nuclear weapons on a civilian population isn’t great PR and there’s still a debate on whether or not their use was justified. Either way, however, the search was on to find other uses for atomic energy besides bombs. While most scientists turned their attention to creating a viable nuclear power station (the first of which would only come online in 1954, almost ten years after the end of World War 2), a few scientists turned their attention to something much less obvious: plants.
Legend has it that Henry Ford would send engineers out to junkyards all over the US looking for Fords. They were supposed to study each one they found and make note of any parts that had not failed. But it wasn’t so that he could start making all of those parts stronger. Instead, Ford allegedly used this data to determine where he could cut corners in future production runs so as not to waste money by making any part last longer than any other part.
Most things tend to break down rather than give out completely. Usually it’s only one or two components that stop working, while the rest is still serviceable. And this is a good thing: it’s what lets us repair PCBs or scavenge parts off them, drive our cars longer, and even save each other’s lives through organ donor programs. Can you imagine how different life would be if every part of everything failed at the same time?