Doesn’t the Z-axis on 3D printers seem a little – underused? I mean, all it does is creep up a fraction of a millimeter as the printer works through each slice. It would be nice if it could work with the other two axes and actually do something interesting. Which is exactly what’s happening in the nonplanar 3D-printing methods being explored at the University of Hamburg. Printing proceeds normally up until the end, when some modifications to Slic3r allow smooth toolpaths to fill in the stairsteps and produce a smooth(er) finish. It obviously won’t work for all prints or printers, but it’s nice to see the Z-axis finally pulling its weight.
If you want to know how something breaks, best to talk to someone who looks inside broken stuff for a living. [Roger Cicala] from LensRentals.com spends a lot of time doing just that, and he has come to some interesting conclusions about how electronics gear breaks. For his money, the prime culprit in camera and lens breakdowns is side-mounted buttons and jacks. The reason why is obvious once you think about it: the force needed to operate a side-mounted component acts parallel to the board, so it applies a torque to the component’s solder joints. That’s a problem when the only thing holding the component to the board is a few SMD solder pads. He covers some other interesting failure modes, too, and the whole article is worth a read to learn how not to design a robust product.
In the seemingly neverending quest to build the world’s worst Bitcoin mining rig, behold the 8BitCoin. It uses the 6502 processor in an Apple ][ to perform the necessary hashes, and it took a bit of doing to port the 32-bit SHA-256 routines to an 8-bit platform. But therein lies the hack. As for performance? Something something heat death of the universe…
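For a sense of what that 6502 is grinding away at: Bitcoin mining boils down to double SHA-256 hashing a block header over and over while incrementing a nonce until the result falls below a target. Here is a toy Python version of that inner loop (with a vastly relaxed difficulty and a made-up header, nothing like the real 80-byte header format):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes everything twice through SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def toy_mine(header: bytes, difficulty_bits: int) -> int:
    """Scan nonces until the double hash has at least difficulty_bits leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        h = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

# 16 bits of difficulty: a few tens of thousands of hashes, trivial for a PC
nonce = toy_mine(b"hello, blockchain", 16)
```

Real difficulty demands quintillions of hashes per second network-wide, which is where the heat-death joke comes from: at a few hashes per second, the Apple ][ will not be finding a block.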
Contributing Editor [Tom Nardi] dropped a tip about a new online magazine for people like us. Dubbed Paged Out!, the online quarterly ‘zine is a collection of contributed stories from hackers, programmers, retrocomputing buffs, and pretty much anyone with something to say. Each article is one page and is formatted however the author wants to, which leads to some interesting layouts. You can check out the current issue here; they’re still looking for a bunch of articles for the next issue, so maybe consider writing up something for them – after you put it on Hackaday.io, of course.
Tipline stalwart [Qes] let us know about an interesting development in semiconductor manufacturing. Rather than concentrating on making transistors smaller, a team at Tufts University is making transistors from threads. Not threads of silicon, or quantum threads, or threads as a metaphor for something small and high-tech. Actual threads, like for sewing. Of course, there’s plenty more involved, like carbon nanotubes — hey, it was either that or graphene, right? — gold wires, and something called an ionogel that holds the whole thing together in a blob of electrolyte. The idea is to remove all rigid components and make truly flexible circuits. The possibilities for wearable sensors could be endless.
And finally, here’s a neat design for an ergonomic utility knife. It’s from our friend [Eric Strebel], an industrial designer who has been teaching us all a lot about his field through his YouTube channel. This knife is a minimalist affair, designed for those times when you need more than an X-Acto but a full utility knife is prohibitively bulky. [Eric’s] design is a simple 3D-printed clamshell that holds a standard utility knife blade firmly while providing good grip thanks to thoughtfully positioned finger depressions. We always get a kick out of watching [Eric] design little widgets like these; there’s a lot to learn from watching his design process.
A factory is a machine. It takes a fixed set of inputs – circuit boards, plastic enclosures, optimism – and produces a fixed set of outputs in the form of assembled products. Sometimes it is composed of real machines (see any recent video of a Tesla assembly line) but more often it’s a mixture of mechanical machines and meaty humans working together. Regardless of the exact balance, the factory machine is conceived by a production engineer and goes through the same design, iteration, polish cycle that the rest of the product does (in this sense, product development is somewhat fractal). Last year [Michael Ossmann] had a surprise production problem which is both a chilling tale of a nasty hardware bug and a great reminder of how fragile manufacturing can be. It’s a natural fit for this year’s theme of going to production.
The saga begins with [Michael] receiving an urgent message from the factory that an existing product which had been in production for years was failing at such a high rate that they had stopped the production line. There are few worse notes to get from a factory! The issue was apparently “failure to program,” and Great Scott Gadgets immediately requested samples from their manufacturer to debug. What follows is a carefully described and very educational debug session from hell, involving reverse engineering ROMs, probing errant voltage rails, and large sample sizes. [Michael] doesn’t give us a sense of how long it took to isolate, but given how minute the root cause was, we’d bet it was a long, long time.
The post stands alone as an exemplar for debugging nasty hardware glitches, but we’d like to call attention to the second root cause buried near the end of the post. What stopped the manufacturer wasn’t the hardware problem so much as a process issue which had been exposed. It turned out the bug had always been reproducible in about 3% of units, but the factory had never mentioned it. Why? We’d suspect that [Michael]’s guess is correct. The operators who happened to perform the failing step had discovered a workaround years ago and transparently smoothed the failure over. Then there was a staff change, and the new operator started flagging the failure instead of fixing it. Arguably this is what should have been happening the entire time, but in this one tiny corner of the factory, the documented process had quietly been deviated from. For a little more color, check out episode #440.2 of the Amp Hour to hear [Chris Gammell] talk about it with [Michael]. It’s a good reminder that a product is only as reliable as the process that builds it, and that process isn’t always as reliable as it seems.
Here’s a fun exercise: take a list of the 20th century’s inventions and innovations in electronics, communications, and computing. Make sure you include everything, especially the stuff we take for granted. Now, cross off everything that can’t trace its roots back to the AT&T Corporation’s research arm, the Bell Laboratories. We’d wager heavily that the list would still contain almost everything that built the electronics age: microwave communications, data networks, cellular telephone, solar cells, Unix, and, of course, the transistor.
But is that last one really true? We all know the story of Bardeen, Brattain, and Shockley, the brilliant team laboring through a blizzard in 1947 to breathe life into a scrap of germanium and wires, finally unleashing the transistor upon the world for Christmas, a gift to usher us into the age of solid state electronics. It’s not so simple, though. The quest for a replacement for the vacuum tube for switching and amplification goes back to the lab of Julius Lilienfeld, the man who conceived the first field-effect transistor in the mid-1920s.
The first thing I ever built without a kit was a 5 V regulated power supply using the old LM309K. That’s a classic linear regulator like a 7805. While linear regulators are simple, they waste a lot of energy as heat, and the higher the input voltage, the more they waste. While there are still applications where they make sense, linear regulators are increasingly being replaced by switching power supplies that are much more efficient. How do switchers work? Well, you buy a switching power supply IC, add an inductor, and you are done. Class dismissed. Oh wait… while that might be the best way to do it from a cost perspective, you don’t really learn a lot that way.
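To put numbers on that waste, here’s a quick back-of-the-envelope sketch in Python. The 12 V input and 1 A load are just illustrative values, and it ignores the regulator’s quiescent current:

```python
def linear_reg_power(v_in: float, v_out: float, i_load: float):
    """A linear regulator drops (Vin - Vout) across its pass element,
    burning the difference off as heat."""
    p_dissipated = (v_in - v_out) * i_load
    efficiency = v_out / v_in  # quiescent current ignored
    return p_dissipated, efficiency

# A 7805-style regulator: 12 V in, 5 V out, 1 A load
p, eff = linear_reg_power(12.0, 5.0, 1.0)
# 7 W of heat for 5 W delivered -- efficiency around 42%
```

That 7 W is why the LM309K came in a big metal TO-3 can; a decent switcher doing the same job can exceed 90% efficiency.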
In this installment of Circuit VR, we’ll look at a simple buck converter — that is a switching regulator that takes a higher voltage and produces a lower voltage. The first one won’t actually regulate, mind you, but we’ll add that in a future installment. As usual for Circuit VR, we’ll be simulating the designs using LT Spice.
Interestingly, LT Spice is made to design power supplies so it has a lot of Linear Technology parts in its library just for that purpose. However, we aren’t going to use anything more sophisticated than an op amp. For the first pass, we won’t even be using those.
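Before firing up the simulator, the ideal buck relationships are worth sketching. This Python snippet uses the standard continuous-conduction-mode formulas; the 12 V-to-5 V, 100 kHz, 100 µH numbers are assumptions for illustration, not values from any particular schematic:

```python
def buck_design(v_in: float, v_out: float, f_sw: float, l: float):
    """Ideal buck converter in continuous conduction: Vout = D * Vin,
    where D is the fraction of each cycle the switch is on."""
    d = v_out / v_in                      # duty cycle
    t_on = d / f_sw                       # switch on-time per cycle
    delta_i = (v_in - v_out) * t_on / l   # peak-to-peak inductor ripple current
    return d, delta_i

# 12 V to 5 V at 100 kHz with a 100 uH inductor
d, ripple = buck_design(12.0, 5.0, 100e3, 100e-6)
# d is about 0.42; ripple is about 0.29 A peak-to-peak
```

The same arithmetic is what the simulation will confirm: the output voltage follows the duty cycle, and the inductor value sets how much ripple current you tolerate.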
Over the recent weeks here at Hackaday, we’ve been taking a look at the humble transistor. In a series whose impetus came from a friend musing upon his students arriving with highly developed knowledge of microcontrollers but little of basic electronic circuitry, we’ve examined the bipolar transistor in all its configurations. It would however be improper to round off the series without also admitting that bipolar transistors are only part of the story. There is another family of transistors which have analogous circuit configurations to their bipolar cousins but work in a completely different way: the Field Effect Transistors, or FETs.
In a way it’s less pertinent to look at FETs in the way we did bipolar transistors, because while they are very interesting devices that power much of what you will do with electronics, you will encounter them as discrete components surprisingly rarely. Every CMOS device you deal with relies on FETs for its operation and every high-quality op-amp you throw a signal at will do so through a FET input, but these FETs are buried inside the chip and you’d be hard-pressed to know they were there if we hadn’t told you. You’d use a FET if you needed a high-impedance audio preamp or a low-noise RF amplifier, and FETs are a good choice for high-current switching applications, but sadly you will probably never have a pile of general-purpose FETs in the way you will their bipolar equivalents.
That said, the FET is a fascinating device. Join us as we take an in-depth look at their operation, and how and where you might use one.
A basic FET has three terminals, a source (the source of electrons), a gate (the control terminal), and a drain (where electrons leave the device). These are analogous to the terminals on a bipolar transistor, in that the source fulfills a similar role to the emitter, the gate to the base, and the drain to the collector. Thus the three basic bipolar transistor circuit configurations have equivalents with a FET; common-emitter becomes common-source, common-base becomes common-gate, and an emitter follower becomes a source follower. It is dangerous to stretch the analogy between bipolar transistors and FETs too far, though, because of their different mode of operation. A closer similarity exists between a FET and a triode tube, if that helps.
The simplest FET for demonstration purposes has a piece of N-type semiconductor with source and drain connections at opposite ends, and a zone of P-type semiconductor deposited in its middle. This is referred to as an N-channel junction FET or JFET, because the channel through which current flows is N-type semiconductor, and because a diode junction exists between gate and channel. There are equivalent P-channel devices, just as there are PNP and NPN bipolar transistors.
Were you to bias an n-channel JFET as you would a bipolar transistor with a positive bias on its gate, the diode between gate and source would conduct, and the transistor would remain a diode with two cathode terminals. If however you give the gate a negative bias compared to the source, the diode becomes reverse-biased, and no current to speak of flows in the gate.
A characteristic of a reverse-biased diode is that it has a depletion zone between anode and cathode, an area swept clear of mobile charge carriers. This is what causes the diode to no longer conduct, and the size of the depletion zone depends upon the size of the electric field that exists across it. If you’ve ever used a varicap diode, the capacitance between the two sides of this variable-width zone is the property you are exploiting.
In a FET, the depletion zone stretches from the gate region into the channel, and since its size can be adjusted by the gate voltage it can be used to “pinch” the remaining conductive region within the channel. Thus the area through which electrons can flow is set by the gate voltage, and the current that flows between drain and source is controlled by it. We have an amplifier.
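The textbook version of this pinch-off behavior is Shockley’s square-law model: drain current falls off as the square of how close the gate voltage gets to pinch-off, rather than linearly. A quick sketch in Python, using 2N3819-ish datasheet typicals as assumed values:

```python
def jfet_drain_current(v_gs: float, i_dss: float, v_p: float) -> float:
    """Shockley's square-law model for an n-channel JFET in saturation:
    Id = Idss * (1 - Vgs/Vp)^2, valid for Vp <= Vgs <= 0."""
    if v_gs <= v_p:
        return 0.0  # channel fully pinched off
    return i_dss * (1.0 - v_gs / v_p) ** 2

# Assumed typicals: Idss = 10 mA, pinch-off voltage Vp = -3 V
i_d = jfet_drain_current(-1.0, 10e-3, -3.0)
# roughly 4.4 mA at Vgs = -1 V
```

At Vgs = 0 the full Idss flows; at Vgs = Vp the channel is closed entirely. The square law is also why a JFET makes a handy voltage-controlled resistor at low drain voltages.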
In the JFET diagram above, the negative gate bias is represented by a battery. Tube enthusiasts may have encountered equipment that derives negative grid bias from a power supply, and you will find tube power units that include a -150 V rail for this purpose. In general though this is inconvenient in a FET circuit even though the voltage is lower, because of the extra cost of a negative regulator. Instead the gate is held at a lower potential than the source by careful selection of a source resistor such that the current flowing through it brings the source up above ground, and a gate bias circuit that holds the gate close to ground. The base resistor chain from the bipolar circuit is for this reason often replaced with either a single resistor to ground, or a gate circuit with a very low DC resistance to ground such as an inductor.
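Picking that source resistor is simple arithmetic once you know the operating point you want: with the gate held at 0 V, the source must sit at -Vgs above ground. A sketch, with the target operating point assumed for illustration:

```python
def source_bias_resistor(v_gs: float, i_d: float) -> float:
    """With the gate tied to ground through a high-value resistor,
    the source resistor sets the bias: Vgs = -Id * Rs, so Rs = |Vgs| / Id."""
    return abs(v_gs) / i_d

# Target an assumed operating point of Vgs = -1 V at Id = 4.4 mA
r_s = source_bias_resistor(-1.0, 4.4e-3)
# about 227 ohms; the nearest standard value of 220 ohms would do
```

In practice you’d also bypass the source resistor with a capacitor if you don’t want it eating your AC gain, just as with an emitter resistor in the bipolar equivalent.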
MOSFETs, where the FET becomes more useful
The JFET we have described is the simplest of field-effect devices, but it is not the one you will encounter most frequently. MOSFETs, short for Metal Oxide Semiconductor FETs, have a similar source, gate, and drain, but instead of relying on a depletion zone in a reverse-biased diode, they have a thin layer of insulation. The electric field from the gate acts across this insulation and pinches the conductive region in the channel through repulsion of electrons, with the same effect as it has in the JFET. It is beyond the scope of this piece to go into their mechanisms, but you will encounter two types of MOSFET: depletion mode devices that require the same negative bias as the JFET, and enhancement mode MOSFETs that require a positive bias.
Why would you use a FET?
So we’ve described the FET, and noted that while its mode of operation is different to that of a bipolar transistor it does a substantially similar job. Why would we use a FET then, what advantages does it offer us? The answer comes from the gate being insulated, either by a depletion region in a JFET or by an insulating layer in a MOSFET. A FET is a voltage amplifier rather than a current amplifier, its input impedance is many orders of magnitude higher than that of a bipolar transistor, and thus you will find FETs used in many applications that require a high impedance small-signal amplifier. The input of a high-performance op-amp will almost certainly be a FET, for example.
The high input impedance has another effect less coupled to small signal work. Where a bipolar transistor requires significant base current to turn itself on, the corresponding FET requires almost none. Thus almost all complex integrated circuit logic devices are FET-based rather than bipolar because of the huge power saving that can be made by not needing to supply the base current demands of many thousands of bipolar transistors.
The same effect influences the choice of FETs for power switching. While a bipolar transistor’s base current is proportional to its collector current, and thus it needs a significant driver, a power MOSFET requires virtually no standing gate current after an initial surge to charge the gate capacitance. A MOSFET power switch can thus be built requiring much less in the way of drive electronics and much more efficiently than a corresponding bipolar switch, which makes possible some of the tiny driver boards you might be used to for driving motors in your 3D printer, or your multirotor.
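A rough comparison of the drive power makes the point. The beta, gate charge, and switching frequency below are order-of-magnitude assumptions, not values from any particular datasheet:

```python
def bjt_drive_power(i_c: float, beta: float, v_be: float = 0.7) -> float:
    """A BJT needs continuous base current of Ic/beta to stay saturated."""
    return (i_c / beta) * v_be

def mosfet_drive_power(q_g: float, v_gate: float, f_sw: float) -> float:
    """A MOSFET's gate only needs its charge Qg moved once per switching
    cycle, so average drive power is Qg * Vgate * fsw."""
    return q_g * v_gate * f_sw

# Switching 10 A: a BJT with beta = 50 vs a MOSFET with 50 nC gate charge
p_bjt = bjt_drive_power(10.0, 50.0)              # 0.14 W, all the time
p_fet = mosfet_drive_power(50e-9, 12.0, 100e3)   # 0.06 W, only while switching
```

Note the asymmetry: the BJT burns its drive power even when sitting fully on, while the MOSFET’s drive power scales with switching frequency and drops to essentially nothing for a static switch.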
Through the course of this series you should have acquired a solid grounding in basic bipolar transistor principles, and now you should be able to add FETs to that knowledge base. In one of the previous articles we suggested you buy a bag of 2N3904s to experiment with; can we now suggest you do the same with a bag of 2N3819s?
[Keystone Science] recently posted a video about building a theremin — you know, the instrument that makes those strange whistles when you move your hands around it. The circuit is pretty simple (and borrowed) but we liked the way the video explains the theory and even dives into some of the math behind resonant frequencies.
The circuit uses two FETs for the oscillators. An LM386 amplifier (a Hackaday favorite) drives a speaker so you can use the instrument without external equipment. The initial build is on a breadboard, but the final build is on a PCB and has a case.
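The math in question is mostly the LC tank resonance formula: the theremin mixes a fixed oscillator against one whose tank capacitance is detuned by your hand, and the audible difference frequency is what you hear. A quick sketch with assumed component values:

```python
import math

def lc_resonant_freq(l: float, c: float) -> float:
    """Resonant frequency of an LC tank: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l * c))

# Assumed values: 100 uH / 100 pF tanks, hand capacitance adds ~2 pF
f_fixed = lc_resonant_freq(100e-6, 100e-12)           # about 1.59 MHz
f_varied = lc_resonant_freq(100e-6, 100e-12 + 2e-12)  # detuned by the hand
beat = f_fixed - f_varied                             # audible difference tone
```

A couple of picofarads of hand capacitance shifts the detuned oscillator enough to sweep the beat note across much of the audio range, which is why the instrument is so touchy to play.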
The Seadoo GTI Sea Scooter is a simple conveyance, consisting of a DC motor and a big prop in a waterproof casing. By grabbing on and firing the motor, it can be used to propel oneself underwater. However, [ReSearchITEng] had problems with their unit, and did what hackers do best – cracked it open to solve the problem.
Investigation seemed to suggest there were issues with the logic of the motor controller. The original circuit had a single FET, potentially controlled through PWM. The user interfaced with the controller through a reed switch, which operates magnetically. Using reed switches is very common in these applications as it is a cheap, effective way to make a waterproof switch.
It was decided to simplify things – the original FET was replaced with a higher-rated replacement, and it was switched hard on and off directly by the original reed switch. The logic circuitry was bypassed by cutting traces on the original board. [ReSearchITEng] also goes to the trouble of highlighting potential pitfalls of the repair – if the proper care isn’t taken during the reassembly, the water seals may leak and damage the electronics inside.
Overall it’s a solid repair that could be tackled by any experienced wielder of a soldering iron, and it keeps good hardware out of the landfill. For another take on a modified DC motor controller, check out the scooter project of yours truly.