The LED is one of those fundamental building block components in electronics, something that’s been in the parts bin for decades. But while a simple LED costs pennies, that WS2812 or other fancy device is a bit expensive because internally it’s a hybrid of a silicon controller chip and several LEDs made from other semiconductor elements. Incorporating an LED on the same chip as its controller has remained something of a Holy Grail, and now an MIT team appear to have cracked it by demonstrating a CMOS device that integrates a practical silicon LED. It may not yet be ready for market but it already displays some interesting properties such as a very fast switching speed. Perhaps more importantly, further integration of what have traditionally been discrete components would have a huge impact on reducing manufacturing costs.
Anyone who has read up on the early history of LEDs will know that the path from the early-20th-century discoveries of semiconductor luminescence through the early commercial devices of the 1960s and up to the bright multi-hued devices of today has been a long one, with many stages of the technology reaching the market along the way. Thus these early experimental silicon LEDs produce light in the infrared, a part of the spectrum that is often useful for sensors. Whether we’ll see an all-silicon Neopixel any time soon remains to be seen, but we can imagine that some sensors using LEDs could be incorporated on the same die as a microcontroller. It seems there’s plenty of potential for this invention.
This research was presented earlier this month at the IEDM Conference in a talk entitled Low Voltage, High Brightness CMOS LEDs. We were not able to find a published paper; we’d love to dig deeper, so let us know in the comments below if you have any info on when it will become available. In the meantime, anyone with any interest in LED technology should read about Oleg Losev, the inventor of the first practical LEDs.
From the linked scitechdaily article: “Smartphones, for example, can use an LED proximity sensor to determine if you’re holding the phone next to your face … The LED sends a pulse of light toward your face, and a timer in the phone measures how long it takes that light to reflect back to the phone”
Is that true? I always assumed it looked for the brightness of the reflection (i.e., present or not present), not the time of flight?
I used to have an HP48 calculator with a hackish bit of software that would use the IR serial port hardware to emit a pulse of IR light and indicate if the reflection was detected. It was an effective short range proximity sensor, and a fun gimmick as well because IIRC it clicked like a geiger counter.
It depends. Newer high-end phones do indeed use a time-of-flight sensor with a VCSEL (Apple, Samsung). Most other phones use the traditional approach, because it is cheaper.
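To put rough numbers on the time-of-flight idea, here is a minimal Python sketch (not any phone’s actual firmware; the 5 cm distance is just an illustrative value). Light covers about 30 cm per nanosecond, so the round trip at phone-to-face distances is well under a nanosecond, which is why these sensors need very fine timing or a lot of averaging.

```python
# Rough time-of-flight arithmetic: a minimal sketch, not any phone's actual firmware.
C = 299_792_458  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to the target, given the measured round-trip time of the light pulse."""
    return C * round_trip_s / 2  # halved because the pulse travels out and back

# A face 5 cm away gives a round trip of only ~0.33 ns, which is why ToF parts
# need picosecond-class timing (or heavy averaging) to be useful.
round_trip = 2 * 0.05 / C
print(f"round trip: {round_trip * 1e9:.3f} ns")
print(f"recovered distance: {tof_distance_m(round_trip) * 100:.1f} cm")
```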
news . mit . edu / 2020 / led-computer-chips-1214 is the official MIT press release, and the linked article is basically just a copy
www . eenewseurope . com / news / first-cmos-led-chip-interconnect has a bit more interesting detail from the authors
thx and btw: if you add a few lines of text to your links hackaday won’t delete your post. no need to make links unclickable by adding spaces :)
So:
https://news.mit.edu/2020/led-computer-chips-1214
and:
https://www.eenewseurope.com/news/first-cmos-led-chip-interconnect
done
That’s rather hit and miss. Sometimes they do, sometimes they don’t.
Here is where I draw the line.
People keep telling us how much cheaper electronics are and how much cheaper the processes have become… but the price of finished products keeps going up.
It seems like consumer electronics (like phones) usually get much better performance over time at a slightly increased price, so performance:price ratio keeps going up. But in some cases it is more dramatic: Have you looked at what you can get in a Raspberry Pi? Or the price of a budget television? I’m guessing their specs and price represent a 10X cost reduction from about 15 years ago.
Indeed; 15 years ago, I’d have to use something like a 4U rackmount and 8U worth of storage appliances in order to replicate my low-rent Plex rig of a Pi4 and a 5 TB laptop USB drive, which is under 1% of what the old rig would have cost, and probably .1% of the power usage…
How often do you need to replace a TV?
How often a GPU or CPU?
More often than my phone. I tend to make upgrades to my PC every two years or so and gift the older parts to friends who have wanted to build PCs but haven’t been able to. I don’t care for the increasing cost of smartphones, which is part of why I have used the same one for four years now. I don’t think they would offer a dramatically improved user experience anyways.
There are a lot of things that make the top phones more expensive. Some that come to mind:
-cameras got better for the same price, but we put more cameras in the phones
-the cost per transistor has not gone down for a few years now, so memory &amp; processing power, which use more and more transistors, are getting more expensive
-screens got cheaper per area, but were made bigger, with bent corners, camera holes and fingerprint sensors which add cost faster.
As someone who uses a flagship Note phone because of the pen function and also phones that cost a lot less I can say that the realistic use differences between a 200 and a 1000 eur/usd phone are minimal. The vast majority of functions are the same, the cheaper one just has lower performance, which in most cases does not matter.
What electronics product has become more expensive in real purchase value? Pretty much nothing I can think of. Of course due to inflation the numbers will change from year to year.
The Hadron Accelerator thingy?
B^)
Maybe it’s that PC parts are returning to professional/enthusiast pricing from the mass-market prices they dropped to in the late 90s through early noughties desktop computing boom. Though we forget that in 1984 you could probably get an ex-demo 1984 Yugo for the same price as IBM’s new Professional Graphics Adapter card. Adjusted to today, that’s about 10-12 grand, which would get you a similar deal: a stripper-model ex-demo Mitsubishi Mirage or Chevy Spark or something.
I see it the other way round: the price of electronics continually dropping is what messes up inflation calculations for stuff that’s important, like food and housing. I don’t know if a lot of people care that they can get the features of a 5-year-old $1000 iPhone in the low-end $150 base model when, over those 5 years, their grocery bill and housing costs have gone up 50% but, because “inflation is at all-time lows,” their earnings have only gone up 10%.
Looking at how the CPI is calculated, electronics is a tiny portion of it, and they do upgrade e.g. television sizes to keep up with the progress. I doubt the price development of electronics affects the inflation values much at all.
Right, but if they keep substituting foodstuffs out of it, like “not steak now because it’s too expensive, just regular ground meat,” we’ll end up at “a case of Mr Noodles is just as affordable as 5 roast dinners for the family were twenty years ago.”
Another step closer to processors running at optical speeds? I.e. fiber-optic data/address buses, optical ALUs.
A fully silicon Neopixel can’t be a thing, I’m pretty sure, and the linked article does not talk about that. I’ll try to explain what I know of the reason why.
The color of light coming out of an LED depends on the bandgap, which is defined by the material of the crystal (silicon, germanium, … ).
In case someone doesn’t know, a semiconductor’s electrons need some extra energy to start conducting (unlike a metal, whose outer electrons are always in the “conduction band”), such as a voltage applied to a diode the right way around. A photon is emitted when an electron goes back to a lower energy state (for all materials), meaning that an electron falling from the conduction band to the lower band (the valence band) will release a photon of exactly the energy that separates the conduction band from the first lower one (except for indirect bandgap semiconductors, where it is lower but still constant, more on that later).
This energy difference is called the bandgap, is measured in electron-volts (eV), and you probably already know it because it’s very closely related to the forward voltage drop of a diode ( https://en.wikipedia.org/wiki/Band_gap#List_of_band_gaps ).
That means silicon can only emit in the IR (except maybe laser diodes, I’m not sure), because its bandgap makes for photons in the infrared.
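For the curious, here is a quick back-of-the-envelope conversion as a minimal Python sketch (the bandgap values are approximate room-temperature figures): wavelength in nm is roughly 1240 divided by the energy in eV, which puts silicon’s emission around 1100 nm, well into the infrared.

```python
# Back-of-the-envelope: emission wavelength from bandgap, lambda(nm) ~ 1240 / Eg(eV).
# Bandgap values are approximate room-temperature figures.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

bandgaps_ev = {
    "Si (indirect)": 1.12,  # ~1100 nm, infrared
    "GaAs": 1.42,           # ~870 nm, near-infrared
    "GaP": 2.26,            # ~550 nm, green-ish (early visible LEDs)
    "GaN": 3.4,             # ~365 nm, the blue/near-UV end
}

for material, eg in bandgaps_ev.items():
    print(f"{material:14s} Eg = {eg:4.2f} eV -> ~{HC_EV_NM / eg:4.0f} nm")
```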
Visible LEDs are normally made out of gallium-based semiconductors (the “III-V semiconductors” the publication talks about). These are not pure elements but mixes of gallium and something else (which is different from doping: you have a single uniform crystal of a mixture of materials), such as gallium–arsenic, indium–gallium–nitride and such.
The nice thing about those is that you can vary the bandgap by varying the ratio of each material, that’s why we have a bunch of LED colors.
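As a rough illustration of that tuning, here is a hedged Python sketch, not a device design: the alloy bandgap is often estimated as a linear mix of the two endpoint bandgaps minus a “bowing” correction, and the endpoint and bowing values below are approximate literature numbers for InGaN.

```python
# Rough sketch of bandgap tuning in In(x)Ga(1-x)N: a linear mix of the endpoint
# bandgaps minus a "bowing" correction. All values are approximate literature
# numbers, and real devices (quantum wells, strain) behave differently.
EG_GAN = 3.4    # eV, approx. bandgap of GaN
EG_INN = 0.7    # eV, approx. bandgap of InN
BOWING = 1.4    # eV, approx. bowing parameter for the InGaN alloy

def ingan_bandgap_ev(x: float) -> float:
    """Approximate bandgap for indium fraction x (0 = pure GaN, 1 = pure InN)."""
    return x * EG_INN + (1 - x) * EG_GAN - BOWING * x * (1 - x)

for x in (0.0, 0.15, 0.30):
    eg = ingan_bandgap_ev(x)
    print(f"x = {x:.2f}: Eg ~ {eg:.2f} eV -> ~{1239.84 / eg:.0f} nm")
```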
Also, silicon is an indirect bandgap material, meaning that once an electron jumps into the conduction band, it has to emit a “phonon” (roughly, a small packet of heat / lattice vibration) in addition to the photon to get back to the lower band, contrary to the gallium-based semiconductors used for LEDs, which have a direct bandgap.
I don’t know exactly how much it hurts light production efficiency, but it does, and the publication talks about it.
So (single color) addressable LEDs with a single substrate (which is not what the linked article talks about) sounds cool, but Si will still be IR, InGaN blue (or thereabout), …
Growing gallium-based semiconductors onto silicon (or germanium) is possible (that’s how the fancy 30%-efficient multi-layer solar cells you sometimes find on satellites are made), so creating a “single chip Neopixel” might be possible this way, but not with pure Si (and I don’t expect it to be cheaper than a bit of wire bonding any time soon).
On the other hand, what the article talks about, which is being able to create a LED with a process that can also create transistors, is pretty cool.
LEDs are not made the same way as regular diodes because you want to increase the number of electrons dropping back to the valence band (thus generating photons; that’s called “recombination”) instead of avoiding it. You therefore use PIN diodes instead of PN diodes: there’s an undoped bit in the middle of the junction. The need for this undoped bit means you can’t pre-dope the whole substrate (add boron to the molten silicon) like you do for CMOS chips. Even mixing CMOS and junction technologies is weird and rare; this is a step further.
A good take.
I would add one bit of trivia which I think might make it possible even so, with just silicon. I stumbled on this phrase, “quantum dot.” What I don’t understand is obviously much bigger than what I do but I have the idea that the effective color of a quantum dot depends on its mechanical size, not just on the elements that compose it. So I don’t know how it would work but I think people are having astonishing successes at using geometry instead of composition to change the emitted frequency.
10 years? 50 years? *shrug*
Quantum dots, like the regular phosphor used in white LEDs (which are blue LEDs underneath), can only increase wavelength (decrease photon energy).
Basically they absorb a photon and re-emit one at lower energy. The (only?) interest of quantum dots is that you don’t have to re-invent a new phosphor for each color (and I believe they’re fairly efficient), because you “just” have to change their size. Otherwise they work like a phosphor.
You can do upconversion, though. There are so-called nonlinear crystals that can combine two photons into a single photon carrying the sum of their energies. It’s used in green lasers: they are usually IR lasers with a nonlinear crystal on top combining two IR photons into one green photon. It’s not horribly efficient, I believe.
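The arithmetic behind that green-laser example, as a tiny Python sketch (assuming the common 1064 nm Nd:YAG pump line): photon energies add, so combining two identical IR photons halves the wavelength.

```python
# The arithmetic behind the green-laser example: photon energies add, so two
# identical IR photons combined in the crystal come out as one at half the wavelength.
HC_EV_NM = 1239.84  # eV*nm

pump_nm = 1064.0              # the Nd:YAG infrared line typically used in cheap green lasers
pump_ev = HC_EV_NM / pump_nm  # energy of one IR photon
out_ev = 2 * pump_ev          # two photons' worth of energy in the output photon
print(f"one {pump_nm:.0f} nm photon: {pump_ev:.2f} eV")
print(f"two combined: {out_ev:.2f} eV -> {HC_EV_NM / out_ev:.0f} nm (green)")
```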
The thing is, in any case, the LED produces photons at the energy of its bandgap. You can do stuff to the light afterward, but that’s the same for any kind of light source, and it generally decreases efficiency.
Though another thing to remember is that mere area of purified silicon wafer is pretty darn expensive. So you’re not gonna have a one inch diagonal LED display on silicon coming out at price parity with other 1″ display technologies… it’s probably $100 worth of substrate.
It’s not really the wafers that are expensive (you can buy blank ones on eBay for $40 for a pack of 25 if you want); it’s that you pay per processed wafer, and the bigger your chip, the fewer chips you can fit on each one, so the price of your chip basically scales with its surface area (if you ignore the very big upfront cost of creating the photolithography masks).
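A toy illustration of that scaling (all numbers are made-up placeholders, not real foundry pricing), using the usual rough dies-per-wafer estimate: a fixed-price wafer yields fewer big dies, so per-chip cost rises roughly with area, and the edge-loss term makes very large dies even worse than the simple ratio suggests.

```python
# Why chip cost tracks die area: a fixed-price wafer yields fewer big dies.
# All numbers are made-up placeholders, not real foundry pricing.
import math

WAFER_DIAMETER_MM = 300.0
WAFER_COST = 5000.0  # hypothetical price for a fully processed wafer

def dies_per_wafer(die_area_mm2: float) -> int:
    """Common rough estimate: gross wafer area over die area, minus an edge-loss term."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

for area in (10, 50, 200):  # candidate die sizes in mm^2
    n = dies_per_wafer(area)
    print(f"{area:4d} mm^2 die: ~{n:5d} per wafer -> ~{WAFER_COST / n:6.2f} each (before yield loss)")
```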
Also, displays based on silicon (with the same area of silicon as the area of the display) exist, but the light has to come from some other source (such as 3 LEDs). They are called LCoS displays and have been used by Fatshark in their FPV goggles for a while, and by some small video projectors. They are not that expensive.
What would make the display very expensive would be trying to lay indium-based semiconductors on top of the silicon substrate to generate each color directly (especially because you’d need a different semiconductor for each color, as said above, meaning 4 layers of semiconductors). It’s a thing, but it’s very niche and “state of the art” (yields are probably very bad); that is what makes the multi-layer solar cells so expensive ($2,000 for less than 10×10 cm of cell: https://www.endurosat.com/cubesat-store/cubesat-solar-panels/1u-solar-panel-x-y/#modifications ).
Stacking dies and soldering or wire bonding them is already a (commercial and not too expensive) thing too, it’s growing nice uniform crystals on top of each other that would be expensive.
That is also why noncrystalline solar panels are not that expensive, it’s just a big diode, so no mask required and the silicon wafer itself is fairly cheap.
STOP calling them “neopixels”. “Neopixel” is a brand name of a company that price-gouges. If you buy their “neopixels” you are paying up to 5x what it costs for the same exact thing from many other suppliers.