Identifying Fake Small-Signal Transistors

It’s rather amazing how many of the electronic components you can buy right now are not quite the genuine parts they are sold as. Outside of dedicated distributors like Mouser, Digikey, and LCSC you pretty much enter a Wild West of unverifiable claims and questionable authenticity. When it comes to sites like eBay and AliExpress, [hjf] goes so far as to state that the power transistors for sale on these sites are 100% fake. But even small-signal transistors are subject to faking, as demonstrated in a comparison.

The comparison features a Mouser-sourced BC546C alongside a BC547C, 2N3904, and PN2222A, the latter three all sourced from ‘auction sites’. As a baseline test, all transistors go into a generic component tester, which correctly identifies all of them as NPN transistors, but flags the ‘BC547C’ and ‘PN2222A’ for a much too low hFE. That is only according to the generic tester, but it is one red flag, along with the pinout of the ‘BC547C’ showing up as inverted relative to the genuine part.

Next up is a pass through the HP 4145B curve tracer, which confirms the findings for the fake BC547C, including its abysmal hFE. For the PN2222A the hFE is within spec according to the curve tracer, contradicting the component tester’s failing grade.
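
For context, hFE is simply the DC current gain: the ratio of collector current to base current at a given operating point. Below is a minimal sketch of the check both instruments are effectively performing; the measured currents and the datasheet limits are made-up illustrative values, not numbers from the comparison.

```python
# Hypothetical hFE sanity check: hFE is the DC current gain Ic / Ib.
# The measured currents and datasheet limits below are illustrative only.
i_b = 10e-6          # measured base current: 10 uA
i_c = 1.2e-3         # measured collector current: 1.2 mA
h_fe = i_c / i_b     # DC current gain

h_fe_min, h_fe_max = 420, 800   # approximate BC547C ("C" gain group) limits

print(f"hFE = {h_fe:.0f}")
if not (h_fe_min <= h_fe <= h_fe_max):
    print("Outside the expected gain group: a red flag for a fake or remarked part")
```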

What these results make clear is that these cheap component testers are no reliable detector of fakes. They also show that some of the counterfeit transistors you find on $auction_site are obviously fake, while others are much harder to pin down. The PN2222A and 2N3904 used here almost pass the sniff test, yet still have that distinct off-genuine feeling, while the fake BC547C didn’t even bother to get its pinout right.

As always, caveat emptor. These cheapo transistors can be a nice source for some tinkering, but be aware that you may waste hours debugging an issue caused by an off-nominal parameter in a fake part.

The Great ADS1115 Pricing And Sourcing Mystery

The AdaFruit ADS1115 board hooked up for testing. (Credit: James Bowman)

Following up on the recent test of a set of purported ADS1115 ADCs sourced from Amazon, [James Bowman] didn’t just test a genuine TI part, but also dug into some of the questions that came up after the first article. As expected, the AdaFruit board featuring a presumably genuine TI ADS1115 performed very well, doing significantly better on the tested parameters than the datasheet guarantees.
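
For anyone wanting to do a rough version of this check at home, here is a minimal sketch of gathering noise statistics from an ADS1115 board. It assumes a Raspberry Pi or similar with I2C wired up and Adafruit’s CircuitPython ADS1x15 library installed; the gain, channel, and sample count are arbitrary choices for illustration, not [James Bowman]’s test setup.

```python
# Rough noise check on an ADS1115: sample a fixed input many times and look
# at the spread. Assumes the Adafruit CircuitPython ADS1x15 library and a
# board with I2C pins (e.g. a Raspberry Pi); values here are illustrative.
import statistics

import board
import busio
import adafruit_ads1x15.ads1115 as ADS
from adafruit_ads1x15.analog_in import AnalogIn

i2c = busio.I2C(board.SCL, board.SDA)
adc = ADS.ADS1115(i2c)
adc.gain = 1                      # +/-4.096 V full-scale range
channel = AnalogIn(adc, ADS.P0)   # single-ended reading on input A0

samples = [channel.voltage for _ in range(1000)]
mean = statistics.mean(samples)
noise = statistics.stdev(samples)

print(f"mean: {mean * 1000:.3f} mV, RMS noise: {noise * 1e6:.1f} uV")
```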

Thus we can confirm that when you get the genuine TI part, you can expect very good and reliable performance from your ADC. That still leaves the open questions of what these cheapo Amazon-sourced ADS1115 ICs actually are, and how LCSC can sell what should be the same part for so much less than US distributors.

As far as the LCSC pricing is concerned, these are likely to be genuine parts that are simply subject to what is known as price discrimination: pricing the same product differently depending on the targeted market segment, with e.g. Digikey customers assumed to be fine with paying more for the brand-name assurance and other perceived perks. Continue reading “The Great ADS1115 Pricing And Sourcing Mystery”

Making The Smallest And Dumbest LLM With Extreme Quantization

Turns out that training on Twitch quotes doesn’t make an LLM a math genius. (Credit: Codeically, YouTube)

The reason why large language models are called ‘large’ is not how smart they are, but their sheer size in bytes. With billions of parameters at four bytes each, they pose a serious challenge not just in terms of size on disk, but also in RAM, specifically the RAM of your video card (VRAM). Reducing this immense size, as is done routinely for the smaller pretrained models one can download for local use, involves quantization. This process is explained and demonstrated by [Codeically], who takes it to its logical extreme: shrinking what could have been a GB-sized model down to a mere 63 MB by cutting the number of bits per parameter.

While you can offload a model, i.e. keep only part of it in VRAM and the rest in system RAM, this massively impacts performance. An alternative is to use fewer bits per weight in the model, a form of compression that typically means going from 32-bit floating point down to 8-bit integers, cutting memory usage by about 75%. Going much lower than this is generally considered unadvisable.
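
As a rough illustration of what that bit-reduction looks like, here is a minimal NumPy sketch of symmetric per-tensor quantization. The tensor shape, scale handling, and bit widths are illustrative choices, not [Codeically]’s actual pipeline.

```python
# Toy symmetric quantization of a float32 weight tensor, showing how the
# per-parameter bit count (and hence model size) drops. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=(1024, 1024)).astype(np.float32)

def quantize(w, bits):
    """Map floats onto signed integers of the given width, plus a scale factor."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for int8, 7 for int4
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

for bits in (8, 4):
    q, scale = quantize(weights, bits)
    err = np.abs(dequantize(q, scale) - weights).mean()
    ratio = 32 / bits          # theoretical shrink vs. float32 storage
    # (4-bit values would still need packing two-per-byte to realize this)
    print(f"{bits}-bit: {ratio:.0f}x smaller, mean abs error {err:.6f}")
```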

Using GPT-2 as the base, [Codeically] trained the model on a pile of internet quotes, storing the parameters as very anemic 4-bit integers. An initial attempt that manually zeroed weights made the output too garbled, while a second attempt without the zeroing produced somewhat usable output before flying off the rails. Yet it did this with a 63 MB model at 78 tokens per second on just the CPU, demonstrating that you can create a pocket-sized chatbot to spout nonsense even without splurging on expensive hardware.

Continue reading “Making The Smallest And Dumbest LLM With Extreme Quantization”

Built-In Batteries: A Daft Idea With An Uncertain Future

Having a battery nestled snugly within the bowels of a device has certain advantages. It finally solves the ‘no batteries included’ problem, and there is no more juggling of AA or AAA cells, nor of their respective chargers. Instead, each device is paired to that one battery, happily charged over a standardized USB connector, and suddenly all is well in the world.

Everything, that is, except for devices that cannot be used while charging, wireless devices that suddenly drag along a wire while charging (possibly from a charging port in an irrational location), and devices that would work just fine if it weren’t for that snugly embedded battery that’s now dead, dying, or on fire.

Marrying a device to its battery in this manner effectively means tallying up all the disadvantages of the battery chemistry and its charger, adding them to the device’s feature list, and limiting the device’s effective lifespan in the process. It also rules out quickly swapping in fresh batteries, which is why everyone now lugs around chunky powerbanks instead of spare cells and hogs outlets with USB chargers. And finding a replacement for a non-standardized pouch cell can prove hard or impossible.

Looking at the ‘convenience’ argument this way makes one wonder whether it is all just marketing that we’re being sold, especially in light of the looming 2027 EU regulation on internal batteries, which may well wipe out built-in batteries with an orbital legal strike. Are we about to say ‘good riddance’ to a terrible idea?

Continue reading “Built-In Batteries: A Daft Idea With An Uncertain Future”

UK’s MAST Upgrade Tokamak Stabilizes Plasma With Edge Magnetic Fields

Although nuclear fusion is exceedingly easy to achieve, as evidenced by desktop fusors, the real challenges pop up when you try to sustain a plasma for extended periods of time, never mind generating net energy output. Plasma instability was the reason the UK saw its nuclear fusion hopes dashed in the 1950s, when Z-pinch fusion reactors failed to create a stable plasma. Now it seems another UK fusion reactor is one step closer to addressing plasma instability, with the MAST Upgrade tokamak demonstrating the suppression of ELMs.

ELMs, or edge localized modes, are a type of magnetohydrodynamic instability that occurs at the edge of the plasma. They were first encountered after the switch to high-confinement mode (H-mode), which was itself adopted to address instability issues in the L-mode operating regime of earlier tokamaks. ELMs damage the inside of the reactor vessel, with the disturbances ablating the plasma-facing material.

One of the solutions proposed for ELMs is resonant magnetic perturbations (RMPs) using externally applied magnetic fields, with the South Korean KSTAR tokamak already suppressing Type I ELMs with this method in 2011. Where KSTAR and MAST Upgrade differ is that the latter is a spherical tokamak rather than the more conventional design. As the name suggests, a spherical tokamak creates a sphere-like plasma rather than a doughnut-shaped one, with potential efficiency improvements.

All of this means that the MAST Upgrade tokamak can continue its testing campaign, as tokamaks around the globe keep pushing against obstacles like the Greenwald density limit and the other hurdles that stand in the way of sustained net energy production. Meanwhile, stellarators seem to be passing one milestone after another, with the German Wendelstein 7-X being the current flagship project.

Top image: Inside MAST Upgrade, showing the magnetic field coils used to control ELMs. Credit: United Kingdom Atomic Energy Authority

The Lambda Papers: When LISP Got Turned Into A Microprocessor

The physical layout of the SCHEME-78 LISP-based microprocessor by Steele and Sussman. (Source: ACM, Vol 23, Issue 11, 1980)

During the AI research boom of the 1970s, the LISP language – from LISt Processor – saw a major surge in use and development, with many new dialects appearing. One of these dialects was Scheme, developed by [Guy L. Steele] and [Gerald Jay Sussman], who wrote a number of articles published by the Massachusetts Institute of Technology (MIT) AI Lab as part of its AI Memos. This subset, known as the Lambda Papers, covers both men’s ideas on lambda calculus and its application to LISP, culminating in the 1980 paper on the design of a LISP-based microprocessor.

Scheme is notable here because it influenced the development of what would be standardized in 1994 as Common Lisp, which can fairly be called ‘modern Lisp’. The idea of creating dedicated LISP machines was not a new one, driven as it was by the processing requirements of AI systems. The mismatch between LISP’s S-expressions and the way the CPUs of the era were programmed in assembly led to the development of CPUs with dedicated hardware support for LISP.
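
To see where that mismatch comes from, consider what evaluating even a trivial S-expression involves on a conventional machine: recursive traversal of nested lists and dispatch on the operator, all done in software. The toy evaluator below is a hypothetical sketch of that idea in Python, not the SCHEME-78 instruction set.

```python
# Minimal toy evaluator for S-expressions written as nested Python lists.
# Illustrates the pointer-chasing and dispatch a plain CPU does in software.
def evaluate(expr):
    if isinstance(expr, (int, float)):       # atom: self-evaluating
        return expr
    op, *args = expr                         # list: (operator arg1 arg2 ...)
    values = [evaluate(a) for a in args]     # recurse into sub-expressions
    if op == '+':
        return sum(values)
    if op == '*':
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator {op!r}")

# (+ 1 (* 2 3)) written as nested Python lists:
print(evaluate(['+', 1, ['*', 2, 3]]))       # -> 7
```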

The design described by [Steele] and [Sussman] in their 1980 paper in the Communications of the ACM features an instruction set architecture (ISA) that matches the LISP language much more closely. It is effectively a hardware-based LISP interpreter, implemented in a VLSI chip called the SCHEME-78. By moving as much of the interpretation as possible into hardware, performance improves considerably, somewhat like how today’s AI boom is built around dedicated vector processors that excel at inference where generic CPUs struggle.

During the 1980s, LISP machines began to integrate more and more hardware features, with the Symbolics and LMI systems featuring heavily. Later, these systems were also marketed for non-AI uses like 3D modelling and computer graphics. However, as funding for AI research dried up and commodity hardware began to outpace the specialized processors, these systems vanished.

Top image: Symbolics 3620 and LMI Lambda Lisp machines (Credit: Jason Riedy)

High Performance Motor Control With FOC From The Ground Up

Testing the FOC-based motor controller. (Credit: Excessive Overkill, YouTube)

Vector Control, also known as Field Oriented Control (FOC), is an AC motor control scheme that enables fine-grained control over a connected motor through precise control of its phases. In a recent video, [Excessive Overkill] goes through the basics and then the finer details of how FOC works, as well as how to implement it. These controllers generally use a proportional-integral (PI) loop, capable of measuring and integrating the position of the connected motor, allowing for precise adjustment of the applied vector.
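
At its core, the math boils down to the Clarke and Park transforms plus a pair of PI regulators acting on the resulting d/q currents. The sketch below is a minimal, single-iteration illustration in Python; the gains, limits, and measurements are made-up placeholder values, not [Excessive Overkill]’s controller.

```python
# Minimal single-step FOC illustration: Clarke/Park transforms plus two PI
# regulators on the d/q currents. All numeric values are placeholders.
import math

def clarke(i_a, i_b, i_c):
    """Three phase currents -> stationary alpha/beta frame."""
    i_alpha = (2.0 * i_a - i_b - i_c) / 3.0
    i_beta = (i_b - i_c) / math.sqrt(3.0)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """Stationary alpha/beta -> rotor-aligned d/q frame (theta = rotor angle)."""
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q

def inverse_park(v_d, v_q, theta):
    """Rotor-aligned d/q voltages back to alpha/beta for the PWM stage."""
    v_alpha = v_d * math.cos(theta) - v_q * math.sin(theta)
    v_beta = v_d * math.sin(theta) + v_q * math.cos(theta)
    return v_alpha, v_beta

class PI:
    """Proportional-integral regulator with simple output clamping."""
    def __init__(self, kp, ki, limit):
        self.kp, self.ki, self.limit = kp, ki, limit
        self.integral = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        out = self.kp * error + self.ki * self.integral
        return max(-self.limit, min(self.limit, out))

pi_d = PI(kp=0.5, ki=200.0, limit=12.0)   # placeholder gains, 12 V bus
pi_q = PI(kp=0.5, ki=200.0, limit=12.0)

# One control-loop iteration with made-up measurements:
theta = 1.2                               # rotor electrical angle [rad]
i_a, i_b, i_c = 0.8, -0.3, -0.5           # phase currents [A]
dt = 1.0 / 20000.0                        # 20 kHz control loop

i_alpha, i_beta = clarke(i_a, i_b, i_c)
i_d, i_q = park(i_alpha, i_beta, theta)

v_d = pi_d.update(0.0 - i_d, dt)          # drive flux-producing current to zero
v_q = pi_q.update(1.0 - i_q, dt)          # track a 1 A torque current command
v_alpha, v_beta = inverse_park(v_d, v_q, theta)
print(f"v_alpha={v_alpha:.3f} V, v_beta={v_beta:.3f} V")
```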

If this controller looks familiar, it is because we featured it previously in the context of reviving old industrial robotic arms. Whether you are driving the big motors on an industrial robot or a much smaller permanent magnet AC (PMAC) motor, FOC is very likely the control mechanism you want for the best results. Of note is that most BLDC motors are effectively PMAC motors as well, with an ESC providing the DC interface.

Continue reading “High Performance Motor Control With FOC From The Ground Up”