Self-Powered Camera Powers Itself

A self-powered camera, showing output video

Cameras sense light to create images, and solar cells turn light into energy. Why not mash the two together and create a self-powered camera?

The Computer Vision Laboratory at Columbia built this unique camera, which harvests power from its photodiode sensors. These photodiodes also act as an array of pixels that can recover an image. The result is a black and white video camera that needs no external power supply.

The energy harvester circuit charges up a supercap that provides power to the system. The frame rate of the camera is limited by the energy that can be harvested: higher frame rates require more juice. For this reason, the team developed an algorithm that varies the frame rate based on available energy.
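To make the idea concrete, here is a toy sketch of an energy-adaptive frame-rate policy: pick the highest frame rate whose per-frame energy cost the currently harvested power can sustain. The constants and the simple proportional rule are illustrative assumptions, not the algorithm from the paper.

```python
# Toy energy-adaptive frame-rate policy (illustrative values only).

ENERGY_PER_FRAME_J = 1.1e-3   # assumed energy to capture + read one frame
MAX_FPS = 15.0                # assumed sensor readout ceiling

def choose_frame_rate(harvested_power_w: float) -> float:
    """Frames per second the current harvest can sustain indefinitely."""
    sustainable_fps = harvested_power_w / ENERGY_PER_FRAME_J
    return max(0.0, min(sustainable_fps, MAX_FPS))
```

At 1.1 mW of harvested power this yields 1 fps; brighter scenes raise the rate until the assumed readout ceiling is hit, and darkness stops capture entirely.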

The MC13226V microcontroller used for this build features an internal 2.4 GHz radio. The group mentions wireless functionality as a possible future feature, which would make for a completely untethered, battery-free camera.

58 thoughts on “Self-Powered Camera Powers Itself”

  1. So, the uses are spy cameras hidden in foreign government buildings, and maybe along inaccessible roads (and mosques?) in certain unstable countries in the Middle East. And maybe some space probes that need to conserve any energy they can.

    But the power per lux vs resolution makes me wonder if it’ll ever be high res and still be able to power itself.

    1. They do not say they have enough energy for transmitting the picture; that takes a lot more than is required, even at a low data rate. Apart from running the MCU, there is no real energy cost to collecting the values.
      But for transmitting it, you also need to produce some kind of “light”. And the whole process can’t be more than 100% efficient. So the picture won’t go much further than it would have at the original visible light frequency.
      Energy conservation at work, no matter the coding process.

      1. You wouldn’t want to transmit continuously. You would transmit an image in a burst, at a rate that is appropriate for the application, and at a rate that is allowed by the stored energy in the device.

        1. True, the trick is to under-sample in time and/or space. But from what I know from using BTLE or Zigbee with small PV cells of this size, do not expect more than a pixel every second.

          1. You can’t under-sample in this application because you need all the pixels to capture energy to power the circuitry.
            You could have your micro-controller do some sort of truncation or alternate row / column transmission of the data, but this thing is already pretty low resolution. As a proof of concept it is pretty neat, but it is huge (210x280mm sensitive area), has less than 50% fill factor, and is not very practical. A similar sized photovoltaic could produce much more energy and could be coupled to a small CMOS color camera with power to spare for video transmission.

          2. So Dave, any thoughts on why this story is picked up so enthusiastically by the various media, and what the purpose is supposed to be that makes people so excited?
            Because I still don’t quite get it.

          3. I don’t get it either. I think “self powered” and “camera” in the same sentence makes people think of hidden spy cams and big brother watching them. But this thing is bigger than a toaster and difficult to hide.

  2. Definitely a good board to use a pick-and-place machine to populate.

    I would be more interested in trying something like this using thermistors in the array (forget the self powered part) to create a crude thermal camera.

        1. You can focus light and prevent light from other angles and points from ending up on the sensor, but sound defies such things.

          There are many projects where people did the reverse and used a light sensor or IR sensor as a microphone though, and that’s interesting enough.

      1. You can do this – for example: https://www.youtube.com/watch?v=SiAXX2FsaxQ
        One problem that I see is focusing the sound wave; however, it is not a big issue, since you can take advantage of the phase information in the sound wave. You have to sample all the mics coherently, do some Fourier magic, and voilà, you have an acoustic camera.
        Replace the mics with antennas… and you’ll have an antenna with an electronically steerable beam :)

          1. I’m just tinkering along… but yes, it actually sounds like fun. I’ve never gotten around to understanding the FFT (well, I know what it does, but couldn’t hack some code myself) – so why not give it a try. But first – buy stuff, do a PCB design and a 3D case and throw it together. That’s fun, too. See what went wrong and warn others about it. I just like to work on stuff that sounds unusual and don’t look so much to the left and right.

          2. I was thinking in the past about building an acoustic camera, so let me share my thoughts here.
            The key to success is to know the phases of the arriving signal at different points in space. They contain all the information necessary to figure out from which direction the wave came. This is not an extremely complex problem, but not trivial either. Knowledge of math at the level of Calculus 1 or 2, and the basics of DSP, should be sufficient. To know the phase of the signal, it is necessary to sample all microphones at precise intervals, with precisely known delays between the mics. If the delays are non-zero, they will create artificial phase shifts, which will have to be taken into consideration during processing. The simplest case is when the shift is 0, which means that signals from all mics have to be sampled at the same time, which means that you need either an ADC or a sample & hold circuit per microphone, all synchronized. As an alternative you can use miniature microphones with digital outputs, but then you’ll need a CPLD or FPGA to aggregate the individual streams. Of course the ADCs in the microphones also need to be synchronized.

            I strongly advise not to start the project by pulling out the credit card and buying components. Install Octave, simulate the microphone array and signal source(s) numerically, and develop the processing algorithm based on this. Start with one signal source; once you know how to tackle it, add more. This approach will help you understand the problem, and save you many costly disappointments.
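The simulate-first approach suggested here can be sketched in a few lines: model a linear microphone array and one far-field tone, then scan candidate angles with delay-and-sum beamforming. All parameters (spacing, tone frequency, sample rate) are made-up values for illustration, not from the comment.

```python
import numpy as np

C = 343.0        # speed of sound, m/s
FS = 48_000      # sample rate, Hz
N_MICS = 8
SPACING = 0.04   # 4 cm between adjacent mics (< half a wavelength at 1 kHz)
FREQ = 1_000.0   # source tone, Hz

def mic_delay(m: int, angle_deg: float) -> float:
    """Plane-wave arrival delay at mic m relative to mic 0."""
    return m * SPACING * np.sin(np.radians(angle_deg)) / C

def simulate(angle_deg: float, n: int = 2048) -> np.ndarray:
    """Signals received by each mic for a plane wave from angle_deg."""
    t = np.arange(n) / FS
    return np.array([np.sin(2 * np.pi * FREQ * (t - mic_delay(m, angle_deg)))
                     for m in range(N_MICS)])

def estimate_doa(sigs: np.ndarray) -> float:
    """Scan angles; the candidate whose delays re-align the mics wins."""
    t = np.arange(sigs.shape[1]) / FS
    best_angle, best_power = 0.0, -1.0
    for cand in np.linspace(-90.0, 90.0, 181):
        # Undo each mic's hypothetical delay, then sum coherently.
        total = sum(np.interp(t, t - mic_delay(m, cand), sigs[m])
                    for m in range(N_MICS))
        power = np.mean(total ** 2)
        if power > best_power:
            best_angle, best_power = cand, power
    return best_angle
```

Running `estimate_doa(simulate(30.0))` should land near 30°; replacing the synthetic sine with recorded samples turns the same scan into one row of an “acoustic image”.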

          3. I’m so far thinking of scanning a microphone matrix and displaying the gain values in a matrix of the same size. There are some thermal modules that also scan in “only” a 4×4 matrix, and this was shown here as well. I don’t want to analyze anything yet, and I don’t want to track or spot the loudest something. My plan is to scan an array of microphones (6×5) as fast as possible and show the values in grayscale. From there I will try to interpolate the data, try to overlay it, and see what’s happening. So far I’ve spent $7.50 on 30 microphones :)

          4. I’m thinking now the reflected acoustic spectra may be useful for obtaining information from the target. For example, an object that is black under visible light could be made of fabric, plastic, or metal. But those materials can have quite different acoustic properties. So if you were to use a white noise source, and that sound reflects off your target, the reflected frequency spectrum may have features that allow identification (or at least estimation) of the material / structure / composition that reflected it.

      1. I had that same problem. I designed a couple of different thermistor array boards, but hesitated to order them due to my fear of tremendous pixel cross-talk.

        Adding thermal reliefs between each pixel adds to the physical size of the array, and this requires a larger lens to obtain the same image. Large lenses (2 inch or larger) start to get expensive for thermal wavelengths. Although a Fresnel lens (like they use on PIR detectors) may work.

        1. What about a Fresnel reflector placed behind the thermistors? Even very thin layers of polished aluminum are crazy efficient at reflecting IR (see: space blankets). You could 3D print and investment cast them out of beverage cans, then polish them until shiny.

    1. The thermistors would either have to be extremely small or the camera would have a lot of thermal mass (slow response)…
      Also, thermistors give out a VERY low voltage, very difficult to convert into something useful for semiconductors.

      If I remember correctly, amorphous silicon exhibits the pyroelectric effect, that would maybe be the way to go…

      1. “Also, thermistors give out a VERY low voltage, very difficult to convert into something useful for semiconductors.”

        I believe that’s what amplifiers are for. Some charlieplexing and power amps with rapid scanning could allow for a chip to do the job.

      2. And thermistors don’t produce a voltage. They change resistance with temperature. You can put them into a resistive divider circuit to produce a voltage proportional to temperature though.
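For concreteness, here is a sketch of the divider described in this comment, using the common Beta model for an NTC thermistor. The 10 kΩ / 3950 K values are typical assumptions for a hobby part, not from any particular datasheet.

```python
import math

# NTC thermistor in series with a fixed resistor: temperature-dependent
# resistance becomes a measurable voltage. Assumed, typical constants.
VCC = 3.3                   # supply voltage, V
R_FIXED = 10_000.0          # fixed divider resistor, ohms
R0, T0 = 10_000.0, 298.15   # 10 kOhm at 25 degC
BETA = 3950.0               # assumed Beta constant, K

def ntc_resistance(temp_c: float) -> float:
    """Beta-model NTC resistance at temp_c."""
    t = temp_c + 273.15
    return R0 * math.exp(BETA * (1.0 / t - 1.0 / T0))

def divider_voltage(temp_c: float) -> float:
    """Voltage across the fixed resistor; rises as the thermistor warms."""
    return VCC * R_FIXED / (R_FIXED + ntc_resistance(temp_c))
```

At 25 °C the divider sits at VCC/2 (1.65 V here); a warmer pixel pulls the output higher, a cooler one pulls it lower, which is the signal an ADC scan of the array would read out.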

  3. The sensor area is huge on this camera (208 x 280mm), so on that scale it is pretty easy to gather enough energy to power the circuitry (even if it’s only a 50% fill factor). If you tried to do this on a die, the area would be far too small to work.

      1. So they need to find a way to use an alternative, not-so-expensive substrate instead of the expensive pure silicon.
        And if you do large pixels, perhaps that’ll work, since you don’t need the superfine material.

          1. That’s what continues to baffle me: why is this being researched? What is the big expectation here? That’s why I initially thought spy cameras of some sort, since there you might not be able to install a solar panel. But even then it makes no sense – you can easily hide a solar panel if you take a reduction in efficiency for granted, or use EM fields to get power, or simply tie it into an existing electrical system, or power it by directed wireless energy. So I’m a bit confused about the purpose of it all.

            But maybe the whole idea should be seen in reverse: what this is is not a camera that gathers energy but a solar cell that can also see (as others here also mentioned). Although the problem then is focusing an image, there might be tricks for that, like tiny lenses on each cell or small cluster and stuff like that. But I have a feeling there is stuff they are not telling us about their plan here.. some nastiness.

  4. Huh, I don’t see any mention of the spectrum of the scene lighting. I’d guess they are using fluorescent studio lights? In that case, they’ll get a lot more energy if they use natural or halogen lighting.

  5. I realize they wouldn’t be wired correctly for this function, but if they were, I wonder if you could get any interesting “images” from a solar array/large solar farm, using (rewired) solar cells as “pixels”.

    Might be neat at sunset (with lots of FFT magic), but it would probably be a pretty featureless sky.

  6. OK – I understand that it is somewhat new that they actually built it (in the paper they cite similar pixel designs that existed previously). For the purpose of getting accepted to CS conferences this is a win! On top of that, it is certainly a significant construction effort to make it work!

    On the other hand – I think the system is far from ideal:
    – Why use only the direct frontal light for energy generation? A flat solar cell would capture light from more directions and potentially deliver more energy.
    – For energy generation, a segmented array is not ideal – it would be better to use a full sheet (of a solar cell) – currently the fill factor is only 16%.
    – The efficiency of the new photodiodes is probably below that of available photovoltaic cells. I expect an improvement even with an array of small chunks of solar cells.
    – The imaging capabilities are OK for the available power. Let’s compare this to the Omnivision camera cube (similar quality). At QVGA it needs 120mW @ 60fps, which gives around 2mW for 1fps.
    The paper suspiciously avoids actual power measurements during imaging – but estimates 1.1mW @ 300lux – so the overall system is reasonably well constructed and low power!

    In comparison – think of a solar cell with a miniature camera next to it:
    – The current array is 210 x 280mm – a solar cell would use the same area, while the camera itself could be much smaller.
    Solar cells deliver 0.5uW / mm² at 333lux (http://electronics.stackexchange.com/questions/146340/what-is-the-theoretical-minimum-size-of-solar-cells-on-wrist-watches), giving 29mW for a sheet of the array’s size – 26x the power available to the camera.
    – It would still be more efficient, even with every technological optimization of the approach above.
    – It would be cheaper to build.

    But certainly, these discussions are omitted in the paper …
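The power comparison in this comment is easy to check numerically; the 0.5 µW/mm² figure and the 1.1 mW estimate are taken from the comment itself, not independently verified.

```python
# Sanity-checking the solar-cell-vs-camera power comparison above.
AREA_MM2 = 210 * 280      # sensor footprint, mm^2
SOLAR_UW_PER_MM2 = 0.5    # at ~333 lux, per the linked StackExchange answer
CAMERA_MW = 1.1           # paper's estimated draw at 300 lux

solar_mw = AREA_MM2 * SOLAR_UW_PER_MM2 / 1000.0  # ~29.4 mW from a full sheet
ratio = solar_mw / CAMERA_MW                     # ~26x the camera's budget
```

So a plain photovoltaic sheet of the same footprint would indeed have roughly 26 times the power budget the self-powered sensor gets by, which is the core of the commenter's argument.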

    1. Here’s a thought: what if you connected every solar cell you can get at on the planet and created one giant camera? Then use gravitational lensing in the distant universe as a distant lens :)
      It might not be possible at all, but it would make a storyline for some sci-fi movie/show.
      Pity ST:TNG is long in the past; they might have made an episode from the concept.

  7. Is it me, or is everyone simply missing the point here? THIS IS A SELF-POWERED DEVICE. If something is capable of powering itself with no other energy source, then with further development it could be used to generate energy by itself for use elsewhere… i.e. as a power plant, a self-powered power plant. Be it a camera or not, this tech could be leveraged to do so much more!!
