Cameras sense light to create images, and solar cells turn light into energy. Why not mash the two together and create a self-powered camera?
The Computer Vision Laboratory at Columbia built this unique camera, which harvests power from its photodiode sensors. These photodiodes also act as an array of pixels that can recover an image. The result is a black and white video camera that needs no external power supply.
The energy harvester circuit charges up a supercap that provides power to the system. The frame rate of the camera is limited by the energy that can be harvested: higher frame rates require more juice. For this reason, the team developed an algorithm that varies the frame rate based on available energy.
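The paper's actual rate-control algorithm isn't spelled out here, but the idea can be sketched roughly like this (all thresholds and rates below are made-up illustrations, not the team's numbers):

```python
# Hypothetical sketch of an energy-adaptive frame-rate policy (NOT the
# team's actual algorithm): pick the fastest frame rate whose energy
# cost fits within the charge currently stored in the supercap.

def pick_frame_rate(supercap_voltage, v_min=2.0, v_max=5.0,
                    rates_fps=(1, 2, 5, 10, 15)):
    """Map stored energy (via supercap voltage) to a frame rate.

    Energy in a capacitor scales with V^2, so the squared voltage
    headroom above the cutoff serves as a simple energy proxy.
    """
    if supercap_voltage <= v_min:
        return 0  # not enough energy to capture a frame
    # Fraction of usable stored energy, 0..1
    frac = (supercap_voltage**2 - v_min**2) / (v_max**2 - v_min**2)
    # Scale the available rates by the energy fraction
    idx = min(int(frac * len(rates_fps)), len(rates_fps) - 1)
    return rates_fps[idx]

print(pick_frame_rate(2.0))  # → 0 (below cutoff, skip the frame)
print(pick_frame_rate(5.0))  # → 15 (full supercap, fastest rate)
```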
The MC13226V microcontroller used for this build features an internal 2.4 GHz radio. The group mentions wireless functionality as a possible future feature, which would make for a completely untethered, battery-free camera.
So, the uses are spy cameras hidden in foreign government buildings, and maybe along inaccessible roads (and mosques?) in certain unstable countries in the Middle East. And maybe some space probes that need to conserve any energy they can.
But the power per lux vs resolution makes me wonder if it’ll ever be high res and still be able to power itself.
They don’t say they have enough energy for transmitting the picture – that takes a lot more energy, even at a low data rate. Apart from running the MCU, there is no real energy cost in collecting the values.
But for transmitting it, you also need to produce some kind of “light”. And the whole process can’t be more than 100% efficient. So the picture won’t go much further than it would at the original visible light frequency.
Energy conservation at work, no matter the coding process.
You wouldn’t want to transmit continuously. You would transmit an image in a burst, at a rate that is appropriate for the application, and at a rate that is allowed by the stored energy in the device.
True, the trick is to under-sample, in time and/or space. But from what I know from using BTLE or Zigbee with small PV cells of this size, do not expect more than a pixel every second.
You can’t under-sample in this application because you need all the pixels to capture energy to power the circuitry.
You could have your micro-controller do some sort of truncation or alternate row / column transmission of the data, but this thing is already pretty low resolution. As a proof of concept it is pretty neat, but it is huge (210x280mm sensitive area), has less than 50% fill factor, and not very practical. A similar sized photovoltaic could produce much more energy and could be coupled to a small CMOS color camera with power to spare for video transmission.
So Dave, any thoughts on why this story is picked up so enthusiastically by the various media, and what the purpose is supposed to be that makes people so excited?
Because I still don’t quite get it.
I don’t get it either. I think “self powered” and “camera” in the same sentence makes people think of hidden spy cams and big brother watching them. But this thing is bigger than a toaster and difficult to hide.
The math is easy: E = h·f for each photon.
Take f for green light and an efficiency of at most 10%.
From the lux and the surface area, you get the number of photons/sec.
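Putting numbers on that back-of-envelope (assuming 555 nm green light, the 1 lux = 1/683 W/m² photopic conversion, the 10% efficiency suggested above, and an illustrative 300 lux indoor scene):

```python
# Worked version of the back-of-envelope above: photon energy E = h*f
# for green light, then photons/sec from illuminance and surface area.
# Assumes monochromatic 555 nm light, where 1 lux = 1/683 W/m².

h = 6.626e-34                 # Planck constant, J*s
c = 3.0e8                     # speed of light, m/s
wavelength = 555e-9           # green light, m
f = c / wavelength            # frequency, Hz
E_photon = h * f              # energy per photon, J

lux = 300                     # illustrative indoor lighting level
area = 0.21 * 0.28            # sensor area, m^2 (210 x 280 mm)
power_in = (lux / 683.0) * area   # incident optical power, W

photons_per_sec = power_in / E_photon
harvested = 0.10 * power_in       # assume 10% efficiency, per above

print(f"{photons_per_sec:.2e} photons/s, ~{harvested*1e3:.1f} mW harvested")
```

Interestingly, ~2.6 mW of harvestable power before fill-factor losses is in the same ballpark as the paper’s 1.1 mW estimate mentioned further down.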
Definitely a good board to use a pick-and-place machine to populate.
I would be more interested in trying something like this using thermistors in the array (forget the self powered part) to create a crude thermal camera.
or microphones – would be an interesting experience!
Yes! An acoustic camera! Maybe ultrasonic sensors?
As a 3D camera? Would be nice to pair all this with a camera and do an overlay.
That would be sweet.
I’m tempted to buy some cheap ebay microphones to start with. Is 6×5 a good matrix?
I think that’s a good place to start. I would try to keep them spaced out pretty far to start with (maybe 6 inches between each one).
Ordered :) Gonna start a project on hackaday.io in a minute.
Awesome. I will check that out.
https://hackaday.io/project/5326-microphone-camera
You can focus light and block light from other angles and points from reaching the sensor, but sound defies such tricks.
There are many projects where people did the reverse and used a light sensor or IR sensor as a microphone, though, and that’s interesting enough.
You would need a way to make the microphones more directional. Something like a “shotgun” mike.
Or you use beam forming to make the array directional…
http://www.mathworks.com/help/phased/examples/acoustic-beamforming-using-a-microphone-array.html
You will be able to do this, for example: https://www.youtube.com/watch?v=SiAXX2FsaxQ
One problem that I see is focusing of the sound wave, however it is not a big issue, since you can take advantage of phase information from the sound wave. You have to sample all mics coherently, do some Fourier magic, and voila, you have acoustic camera.
Replace mics with antennas… and you’ll have an antenna with electronically steerable beam :)
wuaahhh, don’t take the fun out of it :) but looks cool. Thanks!
No, that’s just the beginning of the _real_ fun :)
I’m just tinkering along… but yes, it actually sounds like fun. I’ve never gotten around to understanding the FFT (well, I know what it does, but couldn’t hack some code myself) – so why not give it a try. But first – buy stuff, do a PCB design and a 3D case, and throw it together. That’s fun, too. See what went wrong and warn others about it. I just like to work on stuff that sounds unusual, and I don’t look so much to the left and right.
I was thinking in the past about building acoustic camera, so let me share my thoughts here.
The key to success is to know the phases of the arriving signal at different points in space. They contain all the information necessary to figure out which direction the wave came from. This is not an extremely complex problem, but not trivial either. Math at the level of Calculus 1 or 2, plus the basics of DSP, should be sufficient. To know the phase of the signal, it is necessary to sample all microphones at precise intervals, with precisely known delays between the mics. If the delays are non-zero, they create artificial phase shifts, which have to be taken into account during processing. The simplest case is a delay of 0, which means that signals from all mics have to be sampled at the same time, which in turn means you need either an ADC or a sample&hold circuit per microphone, all synchronized. As an alternative you can use miniature microphones with digital outputs, but then you’ll need a CPLD or FPGA to aggregate the individual streams. Of course the ADCs in the microphones also need to be synchronized.
I strongly advise not to start the project by pulling out the credit card and buying components. Install Octave, simulate the microphone array and signal source(s) numerically, and develop the processing algorithm from that. Start with one signal source; once you know how to tackle it, add more. This approach will help you understand the problem, and save you many very costly disappointments.
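Before simulating the whole array, it’s worth quantifying the artificial phase shift from sequential sampling. A tiny numeric check (the muxed-ADC scan rate below is an illustrative assumption):

```python
# Sampling mics sequentially with one muxed ADC introduces an
# artificial phase shift of 2*pi*f*dt per channel step, which must
# be compensated before any direction finding.
import math

f = 2000.0          # signal frequency of interest, Hz
scan_rate = 100e3   # muxed ADC throughput, samples/s (illustrative)
dt = 1.0 / scan_rate              # delay between adjacent channels
phase_err = 2 * math.pi * f * dt  # radians per channel step

print(f"artificial shift: {math.degrees(phase_err):.1f} deg per channel")
# → artificial shift: 7.2 deg per channel
```

7° per channel across an 8-mic row is far from negligible, which is why the simultaneous-sampling (or compensated) approach above matters.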
I’m so far thinking of scanning a microphone matrix and displaying the gain values in a matrix of the same size. There are some thermal modules that also scan in “only” a 4×4 matrix, and those were shown here as well. I don’t want to analyze anything yet, and I don’t want to track or spot the loudest anything. My plan is to scan an array of microphones (6×5) as fast as possible and show the values in grayscale. From there I will try to interpolate the data, overlay it, and see what happens. I’ve spent $7.50 on 30 microphones so far :)
I’m thinking now the reflected acoustic spectra may be useful for obtaining information from the target. For example, an object that is black under visible light could be made of fabric, plastic, or metal. But those materials can have quite different acoustic properties. So if you were to use a white noise source, and that sound reflects off your target, the reflected frequency spectrum may have features that allow identification (or at least estimation) of the material / structure / composition that reflected it.
I’ve definitely considered arraying a bunch of thermistors into a thermal-camera. Haven’t figured out a good way to thermally isolate adjacent thermistors that’s compatible with a pick-n-place.
I had that same problem. I designed a couple of different thermistor array boards, but hesitated to order them due to my fear of tremendous pixel cross-talk.
Adding thermal reliefs between each pixel adds to the physical size of the array, and this requires a larger lens to obtain the same image. Large lenses (2 inch or larger) start to get expensive for thermal wavelengths. Although a Fresnel lens (like they use on PIR detectors) may work.
What about a Fresnel reflector placed behind the thermistors? Even very thin layers of polished aluminum are crazy efficient at reflecting IR (see: space blankets). You could 3D print and investment cast them out of beverage cans, then polish them until shiny.
The thermistors would either have to be extremely small or the camera would have a lot of thermal mass (slow response)…
Also, thermistors give out a VERY low voltage, very difficult to convert into something useful for semiconductors.
If I remember correctly, amorphous silicon exhibits the pyroelectric effect, that would maybe be the way to go…
“Also, thermistors give out a VERY low voltage, very difficult to convert into something useful for semiconductors.”
I believe that’s what amplifiers are for. Some charlieplexing and power amps with rapid scanning could allow for a chip to do the job.
And thermistors don’t produce a voltage. They change resistance with temperature. You can put them into a resistive divider circuit to produce a voltage proportional to temperature though.
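A minimal sketch of that divider, assuming an illustrative 10k NTC modeled with the common beta approximation (part values made up):

```python
# A 10k NTC thermistor (beta model) on top of a fixed 10k resistor to
# ground, producing a temperature-dependent voltage an ADC can read.
import math

def ntc_resistance(temp_c, r25=10e3, beta=3950.0):
    """NTC resistance via the beta approximation (25 C reference)."""
    t_k = temp_c + 273.15
    return r25 * math.exp(beta * (1.0/t_k - 1.0/298.15))

def divider_voltage(temp_c, vcc=3.3, r_fixed=10e3):
    """Voltage across the fixed resistor in a Vcc–NTC–Rfixed–GND divider."""
    r_ntc = ntc_resistance(temp_c)
    return vcc * r_fixed / (r_ntc + r_fixed)

print(f"{divider_voltage(25.0):.2f} V at 25 C")  # → 1.65 V (midpoint)
print(f"{divider_voltage(50.0):.2f} V at 50 C")  # warmer -> higher voltage
```

With matched resistances the output sits at Vcc/2 at 25 °C and rises as the NTC warms, so small temperature differences between pixels map directly to voltage differences.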
The sensor area is huge on this camera (208 x 280mm), so on that scale it is pretty easy to gather enough energy to power the circuitry (even if it’s only a 50% fill factor). If you tried to do this on a die, the area would be far too small to work.
Unless you used a whole wafer.
So they need to find a way to use an alternative not-so-expensive substrate instead of the expensive pure silicon.
And if you do large pixels perhaps that’ll work since you don’t need the superfine material.
Or just use a separate solar panel?
Yep. A cheap CMOS color camera and solar panel.
That’s what continues to baffle me: why is this being researched? What is the big expectation here? That’s why I initially thought of spy cameras of some sort, since there you might not be able to install a solar panel. But even then it makes no sense: you can easily hide a solar panel if you accept a reduction in efficiency, or use EM fields to get power, or simply tie it into an existing electrical system, or power it by directed wireless energy. So I’m a bit confused about the purpose of it all.
But maybe the whole idea should be seen in reverse: what this is is not a camera that gathers energy but a solar cell that can also see (as others here mentioned). Although the problem then is focusing an image, there might be tricks for that, like tiny lenses on each cell or small cluster and such. But I have a feeling there is stuff they are not telling us about their plan here… some nastiness.
That totally wouldn’t cost more than several full frame DSLRs :P
Why isn’t this used in smart phone oled displays? aren’t they capable? I always wondered about that.
OLED digiback?
http://en.wikipedia.org/wiki/Digital_camera_back like this?
Huh, don’t see any mention of the spectrum of the scene lighting. I’d guess they are using fluorescent studio lights? In that case, they’ll get a lot more energy if they use natural or halogen lighting.
Redudant title is redundant
Stupid comments are stupid.
I realize they wouldn’t be wired correctly for this function, but if they were, I wonder if you could get any interesting “images” from a solar array/large solar farm, using (rewired) solar cells as “pixels”.
Might be neat at sunset (with lots of FFT magic), but probably be a pretty featureless sky.
interesting…
But they are wired correctly. The photo-diodes used in this camera are not biased, and they are using them in photo-voltaic mode. So, I think your idea would work.
The example “video” looks like something out of Peter Gabriel’s “Sledgehammer” video ;)
Reminds me more of the first camera ever experiments, real classic stuff.
Ok – I understand that this is somewhat new in that they actually built it (in the paper they cite similar pixel designs that existed previously). For the purpose of getting accepted to CS conferences this is a win! On top of that, it is certainly a significant construction effort to make it work!
On the other hand – I think the system is far from ideal:
– Why use only the direct frontal light for energy generation? A flat solar cell would capture light from more directions and potentially deliver more energy
– For energy generation, a segmented array is not ideal – it would be better to use full sheet (of a solar cell) – currently the fill factor is only 16%
– the efficiency of the new photodiodes is probably below that of available photovoltaic cells. I would expect an improvement even with an array of small chunks of solar cells.
– the imaging capabilities are ok for the available power. Let’s compare this to the OmniVision camera cube (similar quality). At QVGA it needs 120mW @ 60fps, which gives around 2mW for 1fps.
The paper suspiciously avoids actual power measurements during imaging – but it estimates 1.1mW @ 300lux. Thus, the overall system is reasonably well constructed and low power!
In comparison – think of a solar cell with a miniature camera next to it:
– The current array is 210 x 280mm – with a solar cell of this size powering a separate camera, the camera itself could be much smaller.
Solar cells deliver 0.5uW / mm² at 333lux (http://electronics.stackexchange.com/questions/146340/what-is-the-theoretical-minimum-size-of-solar-cells-on-wrist-watches), giving 29mW for a sheet of the array size, 26x the power from the camera
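The arithmetic checks out:

```python
# Checking the comparison above: 0.5 uW/mm^2 at ~333 lux over the
# 210 x 280 mm array area, versus the paper's ~1.1 mW estimate.
area_mm2 = 210 * 280                  # 58,800 mm^2
solar_mw = 0.5e-3 * area_mm2          # 0.5 uW/mm^2 -> mW for the sheet
camera_mw = 1.1                       # paper's estimate at ~300 lux
print(f"solar sheet: {solar_mw:.1f} mW, ~{solar_mw/camera_mw:.1f}x "
      f"the camera's harvest")
```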
– would still be more efficient, even with all technological optimization of the above technology
– cheaper to build
But certainly, these discussions are omitted in the paper …
Here’s a thought, what if you connected every solar cell you can get at on the planet and create one giant camera? Then use gravitational lensing in the distant universe as a distant lens :)
Might not be possible at all but it would make a storyline for some scifi movie/show.
Pity STNG is long in the past, they might have made an episode from the concept.
I thought they did that with the Vulcans secretly spying on the Andorians in ST:ENT or am I wrong?
Can’t say I know, but it would not surprise me.
Is it me or is everyone simply missing the point here? THIS IS A SELF-POWERED DEVICE. If something is capable of powering itself with no other energy source, then with further development it could be used to generate energy by itself for use elsewhere… i.e. as a power plant, a self-powered power plant. Be it a camera or not, this tech could be leveraged to do so much more!!
It is not self powered. It is solar powered, just like a solar cell. The interesting factor is that they are using the same part to do two things at once.