[Jordan] managed to cobble together his own version of a low resolution digital camera using just a few components. The resulting image is low resolution and greyscale only, but it’s pretty impressive what can be done with some basic hardware.
The heart of the camera is the image sensor. Most consumer digital cameras have tons of tiny receptors all jammed into the sensor. This allows for a higher resolution image, capturing more detail in a smaller space. Unfortunately this also usually means a higher price tag. [Jordan’s] sensor contains just a single pixel. The sensor is really just an infrared photodiode inside of a tube. The diode is connected to an analog input pin on an Arduino. The sensor can be pointed at an object, and the Arduino can sense the brightness of that one point.
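Reading that one pixel is about as simple as Arduino code gets: point the tube, read the analog pin, and scale the result down to a grey value. A minimal sketch of the idea might look like this (the pin number and scaling here are our assumptions, not lifted from [Jordan’s] code):

```cpp
// Read one "pixel" from a photodiode on an analog input.
// Pin choice and scaling are assumptions for illustration.
const int SENSOR_PIN = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(SENSOR_PIN);        // 0-1023 on a stock Arduino
  byte grey = map(raw, 0, 1023, 0, 255);   // scale to one 8-bit grey value
  Serial.println(grey);
  delay(100);
}
```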
In order to compile an actual image, [Jordan] needs to obtain readings from multiple points. Most cameras do this using their large array of pixels. Since [Jordan’s] camera only has a single pixel, he has to move it around and take each reading one at a time. To accomplish this, the Arduino is hooked up to two servo motors. These allow the sensor to be aimed both horizontally and vertically. The Arduino slowly scans the sensor in a grid, taking readings along the way. A Processing application then takes each reading and compiles the final image.
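The scan itself boils down to two nested loops that sweep the servos and log one reading per position. Something along these lines captures the spirit of it (the pins, grid size, and servo angles are placeholders rather than values from the actual project):

```cpp
// Sweep a pan/tilt servo pair over a grid and print one photodiode
// reading per position; a PC-side program can then assemble the grid
// into an image. All pins, angles, and sizes are illustrative guesses.
#include <Servo.h>

Servo panServo, tiltServo;
const int SENSOR_PIN = A0;
const int GRID_SIZE  = 64;                     // 64 x 64 "pixels"

void setup() {
  Serial.begin(115200);
  panServo.attach(9);
  tiltServo.attach(10);
}

void loop() {
  for (int y = 0; y < GRID_SIZE; y++) {
    tiltServo.write(map(y, 0, GRID_SIZE - 1, 60, 120));
    for (int x = 0; x < GRID_SIZE; x++) {
      panServo.write(map(x, 0, GRID_SIZE - 1, 60, 120));
      delay(30);                               // let the servos settle
      Serial.println(analogRead(SENSOR_PIN));  // one reading per grid point
    }
  }
  while (true);                                // one frame, then stop
}
```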
Since this camera compiles an image so slowly, it sometimes has a problem with varying brightness. [Jordan] noticed this issue when clouds would pass over while he was taking an image. To fix this problem, he added an ambient light sensor. The Arduino can detect the amount of overall ambient light and then adjust each reading to compensate. He says it’s not perfect but the results are still an improvement. Maybe next time he can try it in color.
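The write-up doesn’t spell out the exact correction, but the general idea is straightforward: sample a second, fixed photodiode for the ambient level and scale each pixel reading by how much the ambient light has drifted since the scan started. A rough sketch, purely as an illustration (pins and the exact correction are assumptions, not [Jordan’s] published method):

```cpp
// Scale each pixel reading by the ratio of the ambient level at the
// start of the scan to the current ambient level.
const int PIXEL_PIN   = A0;
const int AMBIENT_PIN = A1;
int referenceAmbient;                          // ambient level at scan start

void setup() {
  Serial.begin(115200);
  referenceAmbient = analogRead(AMBIENT_PIN);
}

int readCompensatedPixel() {
  int pixel   = analogRead(PIXEL_PIN);
  int ambient = analogRead(AMBIENT_PIN);
  if (ambient == 0) ambient = 1;               // avoid divide-by-zero
  long corrected = (long)pixel * referenceAmbient / ambient;
  return constrain(corrected, 0, 1023);
}

void loop() {
  Serial.println(readCompensatedPixel());
  delay(100);
}
```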
I once tried this using an ultrasonic sensor, but the cone was too big, so I couldn’t use it to perceive depth, which was what I was after…
Same here! It wasn’t that great of a success… I want to see if I can find an optical way to sense distance and use that instead… (because lasers are cool)
You could try the “dambusters*” method using different colored lasers that converge to a point. You can then find distance by imaging the lateral spot separation. You need two colors (or pulsing them at different times if you’re using a single color) so that you can tell whether the surface is in front of or behind the convergence point: if the spot order is reversed, the surface is behind it. The lateral distance between the spots is proportional to how far the surface is from the convergence point, which gives you the distance to the camera (rough geometry sketched below the footnote).
____
*Refers to WWII “dambusters” project that required a precise bomb-drop height over water – converging lights shined downward from the aircraft, and when the projected spots met they were at optimal height. Obviously if the beams reversed here, you’d be underwater.
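For the maths: if the two emitters sit a baseline b apart and are aimed to cross on-axis at distance D, similar triangles give a spot separation of s = b·|D − d|/D on a surface at distance d, so the range falls straight out of the measured separation. A small sketch, with all of the geometry assumed rather than taken from any real build:

```cpp
// Converging-beam ranging: two emitters a baseline b apart, aimed to
// cross on-axis at distance D. The spot separation on the target is
// s = b * |D - d| / D, so d follows from the measured s. "swapped" is
// true when the colour order is reversed, i.e. the target lies beyond
// the convergence point. Illustrative only.
float rangeFromSpots(float D, float b, float s, bool swapped) {
  float offset = D * s / b;                    // |D - d|
  return swapped ? D + offset : D - offset;
}
```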
Or get an infrared proximity sensor made by Sharp. They send out an IR beam at an angle, and a position-sensitive detector measures where the reflected spot lands, which gives the distance by triangulation.
That reminds me of the method cameras use to focus, which has a similar goal of course: determining the depth of a target.
https://en.wikipedia.org/wiki/Autofocus
I’ve had my eye on these “LIDAR Lite” sensors: https://www.sparkfun.com/products/13167
“Maybe next time he can try it in color.”
Why not a scanning hyperspectral camera? That’s the one thing RGB chips won’t give you.
So you could see the chemical elements in the photo, one pixel at a time. Interesting idea.
A hyper-spectral camera won’t give you that information. You’re thinking of a spectrophotometer, but that is an interesting idea also. An imaging spectrometer or spectrophotometer would be pretty cool.
That’s really cool, especially when you see the video of it in action. Unfortunately though, as with a lot of Arduino projects, very little attention is given to actually writing proper code. It is essentially a code dump, without a single line of comments. C’mon, why? Additionally, there is only one function and, again, it has no comments or documentation stating what it expects and what it should return.
Furthermore, I can only assume that function implements some algorithm specific to the project, but again, there’s not a clue, because the variable naming is also terrible.
There is also a lot of repeated code that would be much better to have in a function. I could go on but you get the idea.
I agree, the code could be a lot better. However, in situations like this, unrolling the small loops and removing function calls from the line-scanning code can speed the scanning up. What I am curious about is whether this setup has advantages, like being able to scan at very low light levels, or being more sensitive to infrared, as it might not have lenses that filter out infrared.
I think that modern compilers will automatically apply loop unrolling and function inlining if you compile for speed (-O3).
The bottleneck is how fast you can position the sensor, which easily takes multiple orders of magnitude longer than any overhead from properly structured code. Using comments and sensible variable names would not change the compiled code at all.
“Build a system that even a fool can use, and only a fool will want to use.” -George Bernard Shaw
Arduino is a framework for people writing quick & dirty code, so you should expect dirty code.
Even though I mostly use PICs, I find that Arduino has its place. I like PICs (and use them a lot at home) as they have a very wide diversity (from little 6 pin microcontrollers up to 100+ pin 32 bit processors) and are much cheaper than Arduinos. But at work, where I just want to put together a setup that works with a little coding, Arduino is the platform of choice. I could use PICs but then the majority of others working on the system would not know how to edit the code. Simply put, the performance requirements are not enough to warrant using higher end platforms and in small volumes, time savings trumps material cost.
Thank you for your comments
Currently the project is just a test done for fun. And indeed, the code is not very clean or well documented.
That is the strength of the Arduino: we code and we test. It is often not clean, but we know whether it’s working or not.
But I am still fully testing and modifying the code; I promise I will clean it up.
Simply add another servo with a color wheel in front. Really easy to make it color.
This would also take much more time to do the full scan.
I will perhaps make a color version, but I will probably use 3 tubes and photodiodes with color filters so I can capture the 3 RGB values simultaneously.
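Something as simple as three analog reads per grid position would do it, for example (the pin choices here are just placeholders):

```cpp
// Read red, green, and blue photodiodes at the same servo position so
// one pass over the grid captures all three channels. Pin choices are
// placeholders for illustration.
const int RED_PIN   = A0;
const int GREEN_PIN = A1;
const int BLUE_PIN  = A2;

void setup() {
  Serial.begin(115200);
}

void loop() {
  byte r = map(analogRead(RED_PIN),   0, 1023, 0, 255);
  byte g = map(analogRead(GREEN_PIN), 0, 1023, 0, 255);
  byte b = map(analogRead(BLUE_PIN),  0, 1023, 0, 255);
  Serial.print(r); Serial.print(" ");
  Serial.print(g); Serial.print(" ");
  Serial.println(b);
  delay(100);
}
```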
Have you looked at the Intersil ISL29125?
After the HaD article last week on the Russian guy’s pinhole camera I ordered one; at AUS$10.25 it looked like a good solution.
https://www.sparkfun.com/products/12829
The other one to look at is the TSL2561, which does IR and visible light:
https://www.sparkfun.com/products/12055
If you want to see the world the way bees or other insects do, try the ML8511:
https://www.sparkfun.com/products/12705
The ISL29125 and TSL2561 are both I2C devices; the ML8511 has a simple analog output like the photodiode you are using, so you should be able to swap it in for your current sensor.
I will do my image saving in PGM format for greyscale and PPM for colour; being text-based ASCII formats, an Arduino can write them pretty easily (see the sketch below).
http://en.wikipedia.org/wiki/Netpbm_format
You already have the necessary info in your sketch.
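The whole header is only three lines, so something like this would already produce a valid ASCII PGM straight over serial (dimensions and pin are placeholders, and the servo positioning is omitted; for colour PPM the magic number becomes “P3” and you write three values per pixel):

```cpp
// Emit a valid ASCII PGM ("P2") image over serial: header first, then
// one grey value per pixel. Dimensions and pin are placeholders.
const int WIDTH      = 64;
const int HEIGHT     = 64;
const int SENSOR_PIN = A0;

void setup() {
  Serial.begin(115200);
  Serial.println("P2");                        // ASCII greyscale magic number
  Serial.print(WIDTH); Serial.print(" "); Serial.println(HEIGHT);
  Serial.println(255);                         // maximum grey value
}

void loop() {
  for (long i = 0; i < (long)WIDTH * HEIGHT; i++) {
    Serial.println(map(analogRead(SENSOR_PIN), 0, 1023, 0, 255));
  }
  while (true);                                // one image, then stop
}
```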
I’ve been messing around all week designing a cartesian platform to put in a box and it’s kinda, sorta coming together, but as I already have a DFRobot pan/tilt kit I’m going to try your way!
https://www.youtube.com/watch?v=4gjMWF7XFtw
I must have missed something. Yea, cool single pixel camera. But dude… That view?! That view deserves a real camera.
The same field with a “real” camera:
Oops! The picture: http://chynehome.com/web/wp-content/uploads/2015/01/IMG_2984.jpg
Looks like the inherent fuzziness of the sample area (it’s an area, not a point) could be used to determine an approximate kernel size for a sharpening filter to run after sampling.
This would approximate the point sampling and give a sharper image.
Try an unsharp filter with various kernel sizes.
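For reference, the classic unsharp mask is just sharpened = original + amount × (original − blurred), which would run on the PC side after the scan. A rough C++ sketch with a fixed 3×3 box blur standing in for a tunable kernel (nothing here is taken from the project’s code):

```cpp
// Unsharp mask over a greyscale buffer: blur, take the difference from
// the original, and add a scaled amount of it back. A 3x3 box blur is
// used here; the kernel size would be tuned to the sensor's spot size.
#include <algorithm>
#include <cstdio>
#include <vector>

std::vector<unsigned char> unsharp(const std::vector<unsigned char>& img,
                                   int w, int h, float amount) {
  std::vector<unsigned char> out(img.size());
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      int sum = 0, count = 0;                  // 3x3 box blur, clamped at edges
      for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
          int nx = std::clamp(x + dx, 0, w - 1);
          int ny = std::clamp(y + dy, 0, h - 1);
          sum += img[ny * w + nx];
          count++;
        }
      }
      float blurred   = float(sum) / count;
      float sharpened = img[y * w + x] + amount * (img[y * w + x] - blurred);
      out[y * w + x]  = (unsigned char)std::clamp(sharpened, 0.0f, 255.0f);
    }
  }
  return out;
}

int main() {
  // Tiny smoke test: a 4x4 ramp sharpened with amount 1.0
  std::vector<unsigned char> img = {  0, 32,  64,  96,  32,  64,  96, 128,
                                     64, 96, 128, 160,  96, 128, 160, 192 };
  for (unsigned char v : unsharp(img, 4, 4, 1.0f)) std::printf("%4d", v);
  std::printf("\n");
  return 0;
}
```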
There’s another version of the single pixel camera, where you project the scenery onto a random pattern with a lens and measure how much light is reflected off of it. When the picture features line up with the pattern features, they reflect more light and vice versa, so you’re effectively measuring how similar the view is to the pattern.
The pattern is recorded, and the amount of light reflected works as a summing weight for the pattern so that when you change the pattern many times, and sum up many such patterns, the image starts to pop out from the noise.
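In code, that reconstruction really is just a weighted sum of the recorded patterns. Here’s a toy, fully synthetic C++ demo of the idea (the “scene”, the masks, and all the numbers are made up for illustration; this isn’t anyone’s published code):

```cpp
// Ghost-imaging style reconstruction: simulate a single-pixel reading
// as the overlap between a random binary mask and a fake scene, then
// sum the masks weighted by their (mean-removed) readings. Correlated
// structure accumulates while uncorrelated noise averages out.
#include <cstdio>
#include <cstdlib>
#include <vector>

int main() {
  const int N = 8 * 8;                                 // 8x8 toy image
  std::vector<float> scene(N, 0.0f);
  for (int i = 24; i < 40; i++) scene[i] = 1.0f;       // bright band in the middle

  std::vector<float> image(N, 0.0f);
  const int patterns = 20000;
  float meanMeasurement = 0.0f;

  for (int p = 0; p < patterns; p++) {
    std::vector<float> mask(N);                        // random binary mask
    for (int i = 0; i < N; i++) mask[i] = (std::rand() & 1) ? 1.0f : 0.0f;

    float m = 0.0f;                                    // simulated single-pixel reading
    for (int i = 0; i < N; i++) m += mask[i] * scene[i];

    meanMeasurement += (m - meanMeasurement) / (p + 1);  // running mean removes DC

    for (int i = 0; i < N; i++) image[i] += (m - meanMeasurement) * mask[i];
  }

  // Print the reconstruction row by row; the bright band should stand out.
  for (int y = 0; y < 8; y++) {
    for (int x = 0; x < 8; x++) std::printf("%8.1f ", image[y * 8 + x]);
    std::printf("\n");
  }
  return 0;
}
```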
As for moving the sensor – a better system would be to use a pair of discs which form a scanning aperture, so you’re essentially building a pinhole camera with a moving pinhole. It’s mechanically simpler and can be made to operate so fast you could even shoot video with it – a mechanical television.
Mind elaborating a little more? Maybe a keyword I can use to do a search on this or a similar technique?
For the moving disk setup, the single disk version is called a Nipkow disk. A good search term for the random pattern thing seems to be “single pixel compressed sensing”.
Would your disc system not require a lens to get the light to the sensor though?
And then you get lens issues.
No. Pinhole camera. Look up the Nipkow disk.
Cool, sounds like it would mate nicely with the SSTV radio-image standard.