Taking Pictures With A DRAM Chip

[Image: picture taken with a DRAM chip used as an image sensor]

This picture was taken by using a DRAM chip as an image sensor (translated). A decapped 64k DRAM chip was combined with optics that could focus an image onto the die. By reading data out of the DRAM, the image could be reconstructed.

DRAM is the type of RAM you find on the RAM cards inserted into your motherboard. It consists of a massive array of capacitors and transistors. Each bit requires one transistor and one capacitor, which is quite efficient. The downside is that the memory needs to be refreshed periodically to prevent the capacitors from discharging.

Exposing the capacitor to light causes it to discharge faster. Once it has discharged past a certain threshold, the bit will flip from one to zero. To take a picture, ones are written to every bit in the DRAM array. By timing how long it takes a bit to flip from one to zero, the amount of light exposure can be determined. Since the DRAM is laid out in an array, each bit can be treated as a pixel to reconstruct the image.
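
As a rough illustration, here is a minimal Python simulation of that write-ones-and-time-the-flips loop. Everything in it (the decay rates, the threshold, the scene) is invented for illustration; the real project reads out an actual chip.

    # Toy simulation of the DRAM-camera capture loop: write 1s everywhere,
    # then time how long each bit takes to flip. The decay model is invented.
    WIDTH, HEIGHT, STEPS = 8, 4, 100
    THRESHOLD = 0.5

    # Fake scene: brightness ramps from dark (left) to bright (right).
    scene = [[x / (WIDTH - 1) for x in range(WIDTH)] for _ in range(HEIGHT)]

    charge = [[1.0] * WIDTH for _ in range(HEIGHT)]       # every bit written to 1
    flip_time = [[STEPS] * WIDTH for _ in range(HEIGHT)]  # default: never flipped

    for t in range(1, STEPS + 1):
        for y in range(HEIGHT):
            for x in range(WIDTH):
                # Light speeds up the discharge; 0.005 stands in for dark leakage.
                charge[y][x] -= 0.005 + 0.05 * scene[y][x]
                if flip_time[y][x] == STEPS and charge[y][x] < THRESHOLD:
                    flip_time[y][x] = t

    # An early flip means a bright pixel, so invert the flip time for brightness.
    for row in flip_time:
        print(" ".join(f"{STEPS - t:3d}" for t in row))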

Sure, modern CCDs are better, cheaper, and faster, but this hack is a neat way to totally re-purpose a chip. There’s even Turbo Pascal source if you’d like to recreate the project.

Thanks to [svofski] for the tip.

65 thoughts on “Taking Pictures With A DRAM Chip”

  1. This camera is based on the Byte magazine article from 1983 which demonstrated the use of DRAMs as image sensors. Very clever use of technology and good to see someone trying this old technique again.

          1. I used to work for the company that made the camera they used (called the Beastie). It was based on the IS32 from Micron Technology – basically a decapped 64-kbit DRAM. Quite cool at the time, but it had two disadvantages: there was a strip with no pixels down the middle of the sensor, between the two RAM arrays, and we had to mask off the drive and refresh logic parts of the chip, otherwise incident light screwed with the timing.

        1. My faulty memory was jogged by a more recent comment: it was Ciarcia, not Wozniak.
          Both a couple of large bearded hardware/software hackers, how could I possibly have gotten them confused? B^)

      1. Getting a picture out of the device means reading all the bits at successively longer intervals and calculating a weighted sum out of them.

        Suppose you want to record an 8-bit pixel. You write a 1 to the cell, wait for 1 interval, and record the (inverse of the) memory cell value in the most significant bit of your pixel; then do the same for 2, 4, 8, 16, 32, 64, and 128 intervals and fill in the rest of the bits in order. The less light the cell is getting, the longer it takes the bit to flip, and you can record arbitrarily long and arbitrarily many intervals. It’s effectively the same as snapping eight 1-bit photos and stacking them in software (a code sketch follows at the end of this comment).

        When you add more intervals and longer exposure times, you can capture very bright objects and very dark objects in the same image without overexposing the picture, as would happen on a typical CMOS sensor that collects the photocurrent over the entire exposure time.

        You’re only limited by the fact that the DRAM cell leaks on its own, so at some point the internal leakage current becomes much stronger than the photocurrent and your image is drowned out in random noise. You can’t reliably take exposures stretching out to seconds or minutes. Fortunately the DRAM cell is also extremely sensitive because it carries such a small charge, so you don’t have to.

        A dedicated version of the DRAM camera would in many senses make a much better camera sensor than CCD or CMOS, but the inherent problem is the extremely high data rates you need, because you have to read and write something like 48 million cells 8 times in 16 milliseconds to capture a regular 12 Mpx photo. That’s a total IO rate of 46 Gbps for 1/60s exposure and it’s only getting worse from there.

        The CMOS/CCD, by contrast, can be read out much more slowly.
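
        A minimal Python sketch of that doubling-interval readout, plus a check of the quoted IO rate. The cell model (linear decay, a 0.5 read threshold) and the rate constants are invented for illustration; real code would be cycling the chip itself.

            # Successive-interval readout: rewrite a 1, wait 2^n units, read back,
            # and put the inverted bit into the pixel MSB-first.
            def read_cell_after(light, elapsed_units, dark_leak=0.002):
                """A 1 is written at t=0; return what the cell reads back later."""
                charge = 1.0 - elapsed_units * (dark_leak + 0.25 * light)
                return 1 if charge >= 0.5 else 0

            def expose_pixel(light, bits=8):
                pixel = 0
                for n in range(bits):                     # reads after 1, 2, ... 128 units
                    bit = read_cell_after(light, 2 ** n)
                    pixel |= (1 - bit) << (bits - 1 - n)  # flipped cell -> bit set
                return pixel

            for light in (0.0, 0.05, 0.2, 1.0):
                print(f"light={light:.2f} -> {expose_pixel(light):08b}")

            # Average IO rate for the figures above: 48e6 cells, each read and
            # rewritten 8 times over a 1/60 s exposure.
            print(f"{48e6 * 8 * 2 / (1 / 60) / 1e9:.0f} Gbps")   # ~46 Gbps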

        1. Although of course you could do the same thing as the CMOS sensor and only process a single line at a time, so the image is scanned in line by line. That causes the rolling-shutter wobble you see in CMOS video, since it takes such a long time to scan the entire sensor.

          CCD is better in this respect because it can read the entire sensor at one go very rapidly. The CCD works like a bucket chain where you input a clock signal to a line of pixels and the data comes pouring out from the side of the chip because it was originally designed to directly output analog television signals. Reading the entire image at once would require many parallel extremely fast ADCs, so you dump the signal onto an identical CCD chip that is not exposed to light, and then read it however fast or slow you want.
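
          The bucket-chain behaviour is easy to picture with a toy model (purely illustrative; a real CCD shifts analog charge packets, not Python lists):

              # Toy bucket-brigade readout: on each clock, every charge moves one
              # cell toward the output edge, and the edge charge leaves the chip.
              def clock_once(row):
                  out = row[0]                  # the charge at the edge exits
                  return row[1:] + [0], out     # the rest shift one cell over

              row = [3, 1, 4, 1, 5]             # charge packets in one line of pixels
              samples = []
              for _ in range(len(row)):
                  row, out = clock_once(row)
                  samples.append(out)
              print(samples)                    # the line arrives in pixel order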

          1. This is really cool. In principle, it should also be possible to take continuous video with a per-pixel exposure time that depends on the amount of light hitting each pixel. As soon as a bit got flipped, it would reset and record another single-pixel “frame”. This would result in a video with a high frame rate in the bright areas and a slower frame rate in darker regions.
            In practice, this would be quite difficult to achieve. As Dax points out, “reading the entire image at once would require many parallel extremely fast ADCs”. Even so, I wonder if there are niche applications where the benefits of a huge dynamic range (i.e., a high saturation/overexposure point plus a low noise floor) would justify the effort. Perhaps military, satellite imaging, or astronomy? Maybe there would be benefits to being able to capture a star and one of its orbiting bodies in one image? I don’t know if the dynamic range is quite THAT wide, but who knows what a custom setup could do.
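
            A toy Python sketch of that per-pixel “reset on flip” readout (an event-stream idea; the decay model and rates are invented):

                # Each flip is one "event": the pixel is rewritten to 1 and starts
                # decaying again, so bright pixels emit events at a higher rate.
                def pixel_events(light, total_time=1000, leak=0.001):
                    t, charge = 0, 1.0
                    while t < total_time:
                        charge -= leak + 0.02 * light
                        t += 1
                        if charge < 0.5:       # bit flipped: emit an event, reset
                            yield t
                            charge = 1.0

                print(f"bright pixel: {len(list(pixel_events(1.0))):3d} events")
                print(f"dark pixel:   {len(list(pixel_events(0.05))):3d} events")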

          2. You’re confusing the DRAM camera with the CCD camera. The DRAM camera does not use or need ADCs because it is sampling the image digitally to start with, whereas the CCD/CMOS cameras need sophisticated and expensive hardware to sample the analog signal into digital data.

            That was the original reason why they started developing the DRAM camera. The camera sensor itself is the ADC that samples the incoming light by the successive approximation method. It’s very simple to make and operate, and it’s extremely fast if you don’t require high dynamic resolution. Imagine for example a camera on a pick & place machine that detects the orientation of the component on the picker by its shadow in the fraction of a second before it places it down on the circuit board.

            But yes. This technique allows for variable exposure time for individual pixels within a single frame, which is a technique that mimics how actual biological vision works.

          1. With a question mark at the end, it means “Please could you summarise that more succinctly?”.

            In this case, no. That’s as simple as it gets. And if it’s too complicated I don’t think HaD is a site you’re going to enjoy.

        2. I also made a slight error. If the total exposure time is 16 ms and you have 8 intervals of 2^n in length, the first interval is only 125 microseconds long. Since there are 48 million subpixels in a Bayer pattern for a 12 Mpx image, the peak throughput at the first interval would be 368.6 Gbps.

          There is no possible way to read the entire frame fast enough.

          1. Modern video cards have a memory bus bandwidth of around 800 Gbps, but the problem is that you’re spending a hundred watts of power to process all the data at that speed.

            Wouldn’t make for a very portable camera.

      1. You can, but you need multiple exposures which is slow, and the subject might move while you’re snapping the frames.

        The cycle time of an individual DRAM cell is easily just 5 nanoseconds, so for 8 bits of dynamic resolution you can snap a picture in about 1.3 microseconds, or nearly a million frames a second. Each bit of precision added doubles the exposure time, but even with 20 bits of dynamic resolution you can still shoot a frame with an exposure time of about 1/200 s.

        And newer DRAM chips are pushing the cycle time below 1 ns.
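
        The arithmetic checks out; a quick Python verification of those exposure times, assuming doubling intervals and one 5 ns cell cycle per interval unit:

            # Exposure time for b bits = (1 + 2 + ... + 2^(b-1)) interval units,
            # with one interval unit taken as a single 5 ns cell cycle.
            CYCLE_S = 5e-9

            for bits in (8, 20):
                units = 2 ** bits - 1
                exposure = units * CYCLE_S
                print(f"{bits:2d} bits: {exposure * 1e6:9.2f} us  (~1/{1 / exposure:.0f} s)")

            # 8 bits -> ~1.3 us, close to a million frames per second;
            # 20 bits -> ~5.2 ms, i.e. roughly the 1/200 s quoted above.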

      1. US patent 4,983,843.
        As long as the material in the DRAM construction shifts electron orbits – de Broglie’s dual particle-wave facet – then why not.
        The patent relates to radon detection, with a 1988 priority date.

      1. Or you could use a pinhole made in lead/bismuth/something else really dense, covered with something opaque to visible light but transparent to X-rays, like some thick black construction paper.

        You could also use that set-up for x-ray/gamma-ray photography using black-and-white photographic film, if you set it up right.

    1. Not really – nice “idea” – radiation is detected via the Compton effect: the radiation puts an electron into a different orbit or, as required, knocks it out of orbit, and to detect that charge, electrons flowing through replace the displaced one – so it gives a pulse.

      Capped or not, radiation passes through – thus to “detect” neutrons you use a carbon-based material: paraffin wax, polyethylene.

  2. Pretty cool. But keep in mind, the not-all-that-modern CCD came out of work on bubble memory. The big analog “bucket brigade” devices didn’t prove all that useful as memory, but they’ve had a pretty interesting life as imagers, even though they can’t really compete with today’s low-noise CMOS sensors.

    1. Bubble memory is a magnetic technology that doesn’t really have anything to do with CCDs or BBDs (which are not the same thing). CCDs were sold as digital memories for a short time.

    1. The same can be said of your link since [Blah] posted it an hour earlier. I will never understand why people expect their comments to be read after skipping everyone else’s.

      1. Well, in this case the comment was composed almost as soon as the hack was posted, but only hit the comments section more than an hour later because, at the time, I was out in the sticks, sans interweb (yes, there are still such places on this planet), so by the time I was back in range, [Blah] had beaten me to it.

        However, I do have a slightly different reason for being interested in this particular hack, namely that a bunch of us at the local computer club actually tried it back in the day. As I recall it kinda worked, but because the DRAM chip we used (a 4116, if my equally ancient memory serves me well) had the RAM on the die in two distinct areas, there was a gap down the middle of the image.

        The RAM chip in question was in a ceramic DIP housing with a metal lid which was easily removed with a scalpel. Optics came from a darkroom enlarger lens (anybody old enough to remember them?). The code to read the chip used a mix of BASIC and 6502 assembler. Results were somewhat hit and miss, but it did kinda work... not well enough for us to set up in competition to Kodak unfortunately, or I’d be sitting on millions. Fun though.

  3. That https://www.cs.uaf.edu/2007/fall/cs441/support/dram_sensor_1984_whitehead.pdf article says the charge loss is due not only to thermal effects but also to the photoelectric effect. Which makes me wonder if it’s not just the capacitor involved, but also light hitting the transistor and causing it to conduct. I made use of that effect to make a solar panel of 2N3055 transistors with their cases opened, and then used that to power a small calculator: http://hackaday.com/2012/04/13/using-diodes-and-transistors-as-solar-cells/ I’ve thought about making a _very_ lo-res camera using opened transistors, but the DRAM chip beats that just a little :).

  4. Does anyone know the spectral sensitivity of this chip? The article notes that the chip seems to be more sensitive in the red region. The CCC article noted that it does have some infrared sensitivity.

  5. I wonder if this could be done with modern DRAMs?

    Having an 8 Gigabit chip would lead to a crazy high resolution (we’re talking about 8 Gigapixels!), which could be good for lots of uses.

    The “binary-ness” of it isn’t a problem either because, assuming the image isn’t changing, you can keep taking exposures of different lengths to get almost arbitrarily precise brightness values (at least 20 bits of precision, I would assume).

    The tricky bit will be driving it (modern DRAM interfaces will need an FPGA). You might be able to put one of these DRAM chips in a computer and use software to do all your analysis, but you’ll need some way to ensure your OS doesn’t try to use any of the decapped RAM, and you’ll need to find a way to disable memory refresh for it, which could be hard.
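
    For the keeping-the-OS-away part, one possible approach on Linux (a sketch only: the address and size are placeholders, it needs root and a kernel that still allows /dev/mem access, and it does nothing about disabling refresh) is to reserve a physical range at boot with a parameter like memmap=64M$0x100000000 and then map that range from userspace:

        # Map a physical address range previously reserved from the kernel at
        # boot (hypothetical address/size; use whatever range was reserved).
        import mmap
        import os

        PHYS_ADDR = 0x1_0000_0000      # placeholder reserved physical address
        SIZE = 64 * 1024 * 1024

        fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
        mem = mmap.mmap(fd, SIZE, offset=PHYS_ADDR)
        mem[0] = 0xFF                  # write a byte of 1s into the range
        print(hex(mem[0]))
        mem.close()
        os.close(fd)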

      1. Looking in detail at the silicon of a 1-megabit DRAM, there are a lot of disjointed areas. It is like 2 banks of 16 blocks connected in the middle, with those 32 Kibit squares arranged in 8 columns by 4 rows.
        http://upload.wikimedia.org/wikipedia/commons/9/9b/MT4C1024-HD.jpg (25MiB image file)

        I would picture an 8 Gibit DRAM being just as disjointed, possibly using the same square 32 Kibit building block, just a lot more of them (and probably a good few extra blocks with fuses to increase production yield – or maybe manufacture everything as ECC memory and sell the failures as non-ECC memory after fusing). So any image produced would also have this disjointed structure – maybe it is 8x4 squares of 256 Mibit, or 8x4x64x64 blocks of 64 Kibit – but either way there will be rows and columns of missing pixels, and maybe even whole blocks missing as well.

        Normal image sensors have post-processing algorithms that compensate for failed pixels to increase production yields; maybe this or something similar could be used to interpolate the missing image data.
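
        Even something as simple as averaging across the dead strip would be a start; a toy sketch (the dead-column positions are hypothetical and would come from the actual die layout):

            # Fill known dead columns with the average of their horizontal
            # neighbours (assumes dead columns are isolated, not adjacent).
            def fill_dead_columns(image, dead_cols):
                for row in image:
                    for x in dead_cols:
                        left = row[x - 1] if x > 0 else row[x + 1]
                        right = row[x + 1] if x + 1 < len(row) else row[x - 1]
                        row[x] = (left + right) // 2
                return image

            img = [[10, 20, 0, 40, 50],
                   [11, 21, 0, 41, 51]]
            print(fill_dead_columns(img, dead_cols=[2]))   # dead column -> 30, 31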

  6. Wow – this brings back memories. I first read about this technique in Martin Bradley Weinstein’s 1978 book “Android Design”. Cool early book on robotics. IIRC, he mentions his work was based on earlier research, but I don’t have a copy of the book handy to check.

  7. I did this in the early ’70s with a dynamic shift register chip (which was what we used for memory before DRAMs). This automatically gave a serial video output stream – the only complication being that alternate rows were backwards! Clocking out the serial data automatically refilled the pixels with a fresh lot of 1s, ready to decay when the clock was paused for the “exposure”.
    Infra-red sensitivity was astounding (with a naked chip), and the “exposure” time could be stretched as far as several minutes.
    I did not do much testing with DRAMs, as all the packages I opened had a brown epoxy glob over the silicon, “to absorb low-energy alpha radiation”. Makes you wonder if they would image alpha particles without this coating.
    For extreme UV imaging, try an old EPROM with a clear lid. You simply do a partial burn of the bits (just enough to change their state) and let the UV change them back.
    An EPROM with a frosted lid can be made into a useful analog UV-integrating light meter, using a threshold burn and driving an analog meter through 8 binary-weighted resistors from the output.

  8. I have some 4096-bit Mostek DRAMs – not that many – and also some 65k DRAMs, all in CERDIP packaging. I never got around to building the imager. Maybe I should put them on eBay?

  9. This is a great article! We really want to replicate the project and make improvements based on it. Some questions from reading the assembly code (INITRAM.ASM and LESERAM.ASM) provided in the Pascal .zip:

    1. Which CPU was used to interface the NEC 4164 RAM?

    2. What was the printer port / parallel interface used in the diagram with the pin numbers labeled? I am confused about the functions of those pins, since no components were specified except the NEC 4164 RAM.

    Does anyone know the details of the components used? I would really appreciate it. Thank you!

  10. Hi,

    ** The Mostek 4096 x 1, like the 64k DRAM, has data available on the internet; some have the glass face – so, not trimmed, they look a little “rough”.

    There are also some plastic-packaged 64k DRAMs.

    ** It depends what you want to do with it – I was going to tie it to either an Acorn Electron or a Commodore PET, as that makes the languages and interfacing less of a problem.

    ** With laptops and desktop machines it’s all there, but it is more than difficult to do anything other than what they designate. Those machines “played” happily for many hours.

    ** The chips went for a couple of (US) dollar bills – some basically for the price of postage.

    Bye for now – D
