A Single Pixel Digital Camera With Arduino


[Jordan] managed to cobble together his own digital camera using just a few components. The images it generates are low resolution and greyscale only, but it’s pretty impressive what can be done with such basic hardware.

The heart of the camera is the image sensor. Most consumer digital cameras have tons of tiny receptors all jammed into the sensor. This allows for a higher resolution image, capturing more detail in a smaller space. Unfortunately it also usually means a higher price tag. [Jordan’s] sensor has just a single pixel: an infrared photodiode inside of a tube. The diode is connected to an analog input pin on an Arduino. The sensor can be pointed at an object, and the Arduino can sense the brightness of that one point.

In order to compile an actual image, [Jordan] needs to obtain readings of multiple points. Most cameras do this using the large array of pixels. Since [Jordan’s] camera only has a single pixel, he has to move it around and take each reading one at a time. To accomplish this, the Arduino is hooked up to two servo motors. This allows the sensor to be aimed horizontally and vertically. The Arduino slowly scans the sensor in a grid, taking readings along the way. A Processing application then takes each reading and compiles the final image.
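The scan loop described above is easy to picture in code. Below is a minimal Python sketch of the idea (not [Jordan]'s actual sketch, which runs on the Arduino itself): `read_brightness()` stands in for moving the two servos and sampling the photodiode via `analogRead`, and the grid bounds and step size are made-up values.

```python
# Sketch of the raster-scan idea: step two servo angles through a grid,
# sample brightness at each position, and assemble a 2D image.
# read_brightness() is a stand-in for aiming the servos and reading
# the photodiode; here it returns a synthetic scene (bright centre spot).

def read_brightness(pan, tilt):
    return max(0, 255 - 4 * (abs(pan - 45) + abs(tilt - 45)))

def scan(pan_range, tilt_range, step):
    image = []
    for tilt in range(tilt_range[0], tilt_range[1], step):
        row = []
        for pan in range(pan_range[0], pan_range[1], step):
            # On the real hardware you would move the servos here and
            # wait for them to settle before taking the reading.
            row.append(read_brightness(pan, tilt))
        image.append(row)
    return image

img = scan((0, 90), (0, 90), 10)  # a 9x9 grid of samples
```

On the real rig the settle delay after each servo move dominates the scan time, which is why images take minutes to build up.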

Since this camera compiles an image so slowly, it sometimes has a problem with varying brightness. [Jordan] noticed this issue when clouds would pass over while he was taking an image. To fix this problem, he added an ambient light sensor. The Arduino can detect the amount of overall ambient light and then adjust each reading to compensate. He says it’s not perfect but the results are still an improvement. Maybe next time he can try it in color.
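One plausible way to do that compensation (the write-up doesn't show [Jordan]'s exact math, so this is an assumption) is to scale each sample by the ratio of a reference ambient level to the ambient level measured at the moment of sampling:

```python
def compensate(reading, ambient, reference_ambient):
    # Scale a sample as if it had been taken under the reference light
    # level. Guard against divide-by-zero if the ambient sensor reads dark.
    if ambient <= 0:
        return reading
    return reading * reference_ambient / ambient

# A cloud halves the ambient light mid-scan; compensation scales the
# dimmed sample back up.
print(compensate(100, 50, 100))  # -> 200.0
```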

29 thoughts on “A Single Pixel Digital Camera With Arduino”

      1. You could try the “dambusters*” method using different colored lasers that converge to a point. You can then find distance by measuring the lateral spot separation. You need two colors (or pulses at different times if you’re using a single color) so that you can tell whether the surface is in front of or behind the convergence point: if the order of the spots is reversed, the surface is behind it. The lateral distance between the points is proportional to the distance from the convergence point.
        *Refers to WWII “dambusters” project that required a precise bomb-drop height over water – converging lights shined downward from the aircraft, and when the projected spots met they were at optimal height. Obviously if the beams reversed here, you’d be underwater.
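The geometry in the comment above works out to simple similar triangles. A Python sketch with made-up baseline and convergence values: two beams mounted a `baseline` apart converge at `converge_dist`, so the spot separation shrinks linearly to zero at that distance and grows again (with the spot order swapped, here a negative sign) beyond it.

```python
def spot_separation(d, baseline, converge_dist):
    # Signed separation of the two laser spots at distance d.
    # Positive: surface in front of the convergence point.
    # Negative: spots have swapped order, surface is behind it.
    return baseline * (1 - d / converge_dist)

def distance_from_separation(s, baseline, converge_dist):
    # Invert the relation to recover the distance from a measured
    # (signed) spot separation.
    return converge_dist * (1 - s / baseline)

# Beams 10 cm apart converging at 3 m: a wall at 2 m shows spots
# about 3.3 cm apart, in the original order.
s = spot_separation(2.0, 0.1, 3.0)
```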

      1. A hyper-spectral camera won’t give you that information. You’re thinking of a spectrophotometer, but that is an interesting idea also. An imaging spectrometer or spectrophotometer would be pretty cool.

  1. That’s really cool, especially when you see the video of it in action. Unfortunately though, as with a lot of Arduino projects, very little attention was given to actually writing proper code. It is essentially a code dump: not a single line of comments. C’mon, why? Additionally there is only one function which, again, has no comments or documentation stating what it expects and what it returns.

    Furthermore, I can only assume the function implements some algorithm specific to the project, but again, there’s not a clue, because the variable naming is also terrible.

    There is also a lot of repeated code that would be much better to have in a function. I could go on but you get the idea.

    1. I agree, the code could be a lot better. However, in situations like this, unrolling the small loops and removing function calls from the line-scanning code can speed up scanning. What I am curious about is whether this setup has advantages, like being able to scan at very low light levels, or being more sensitive to infrared, as it might not have lenses that filter infrared out.

    2. The bottleneck is how fast you can position the sensor, which is easily multiple orders of magnitude slower than any overhead from properly structured code. Using comments and sensible variable names would not change the compiled code at all.

      “Build a system that even a fool can use, and only a fool will want to use.” -George Bernard Shaw
      Arduino is a framework for people writing quick & dirty code, so you should expect dirty code.

      1. Even though I mostly use PICs, I find that Arduino has its place. I like PICs (and use them a lot at home) as they have a very wide diversity (from little 6 pin microcontrollers up to 100+ pin 32 bit processors) and are much cheaper than Arduinos. But at work, where I just want to put together a setup that works with a little coding, Arduino is the platform of choice. I could use PICs but then the majority of others working on the system would not know how to edit the code. Simply put, the performance requirements are not enough to warrant using higher end platforms and in small volumes, time savings trumps material cost.

    3. Thank you for your comments.
      Currently the project is just a test done for fun, and indeed the code is not very clean or well documented.
      That is the strength of the Arduino: we code and we test. It is often not clean, but we know whether it’s working or not.
      I am still in the middle of testing and modifying the code; I promise I will clean it up.

      1. Have you looked at the Intersil ISL29125?
        After the HaD article last week on the Russian guy’s pinhole camera I ordered one; at AUS$10.25 it looked like a good solution.

        the other one to look at is the TSL2561, which does IR and visible light

        If you want to see the world the way bees or other insects do, the ML8511

        The ISL29125 and TSL2561 are both I2C devices, the ML8511 is a simple phototransistor like the one you are using, so you should be able to swap it out with the sensor you are using now.

        I will do my image saving in PGM format for greyscale and PPM for colour; being text-based ASCII formats, an Arduino can write them pretty easily.

        You already have the necessary info in your sketch.

        I’ve been messing around all week designing a cartesian platform to put in a box and it’s kinda, sorta coming together, but as I already have a DFRobot pan/tilt kit I’m going to try your way!
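The PGM format mentioned in the comment above really is that simple. A minimal writer for the plain ASCII (“P2”) variant, sketched in Python for clarity (the same few `print`-style writes translate directly to an Arduino sketch):

```python
def write_pgm(path, image, maxval=255):
    # Plain (ASCII) PGM: magic "P2", then width height, then maxval,
    # then one row of space-separated pixel values per line.
    h, w = len(image), len(image[0])
    with open(path, "w") as f:
        f.write(f"P2\n{w} {h}\n{maxval}\n")
        for row in image:
            f.write(" ".join(str(v) for v in row) + "\n")

# A tiny 2x2 greyscale image.
write_pgm("scan.pgm", [[0, 128], [255, 64]])
```

PPM for colour is nearly identical: magic “P3” and three values (R, G, B) per pixel.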

  2. Looks like the inherent fuzziness of the sample area (it’s an area, not a point) could be used to determine an approximate kernel size for a sharpening filter to run after sampling.
    This would approximate point sampling and give a sharper image.
    Try an unsharp filter with various kernel sizes.
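An unsharp mask, as the comment above suggests, is just original + amount × (original − blurred). A small self-contained Python sketch using a naive box blur as the blur step; the kernel size `k` would be matched to the estimated size of the fuzzy sample area:

```python
def box_blur(img, k):
    # Naive box blur with odd kernel size k, clamping at the borders.
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            out[y][x] = sum(vals) / len(vals)
    return out

def unsharp(img, k=3, amount=1.0):
    # Sharpened = original + amount * (original - blurred).
    blurred = box_blur(img, k)
    return [[img[y][x] + amount * (img[y][x] - blurred[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]
```

A flat region passes through unchanged, while an isolated bright sample gets boosted relative to its blurred neighbourhood.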

  3. There’s another version of the single pixel camera, where you project the scenery onto a random pattern with a lens and measure how much light is reflected off of it. When the picture features line up with the pattern features, they reflect more light and vice versa, so you’re effectively measuring how similar the view is to the pattern.

    The pattern is recorded, and the amount of light reflected works as a summing weight for the pattern so that when you change the pattern many times, and sum up many such patterns, the image starts to pop out from the noise.

    As for moving the sensor – a better system would be to use a pair of discs which form a scanning aperture, so you’re essentially building a pinhole camera with a moving pinhole. It’s mechanically simpler and can be made to operate so fast you could even shoot video with it – a mechanical television.
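The random-pattern scheme from the first two paragraphs of that comment can be demonstrated in a few lines. This Python sketch is a toy 1-D version: each "measurement" is the overlap of the scene with a random binary pattern, and summing measurement-weighted (zero-mean) patterns makes the scene emerge from the noise. Real "single pixel compressed sensing" systems add sparse reconstruction to get by with far fewer patterns; this is only the naive correlation sum.

```python
import random

def reconstruct(scene, n_patterns=5000, seed=0):
    # Correlate the scene with many random 0/1 patterns. Each pattern is
    # weighted by its measurement (total light through that pattern) and
    # accumulated; subtracting the 0.5 mean from the pattern removes the
    # constant offset so the scene itself emerges in the accumulator.
    rng = random.Random(seed)
    n = len(scene)
    acc = [0.0] * n
    for _ in range(n_patterns):
        pattern = [rng.choice((0, 1)) for _ in range(n)]
        measurement = sum(s * p for s, p in zip(scene, pattern))
        for i in range(n):
            acc[i] += measurement * (pattern[i] - 0.5)
    return acc

scene = [0, 0, 10, 0]        # a 4-"pixel" scene with one bright pixel
est = reconstruct(scene)     # brightest estimate lands on pixel 2
```

In expectation each accumulator entry converges to a constant multiple of the corresponding scene pixel, so the image "pops out" as more patterns are summed, exactly as the comment describes.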

      1. For the moving disk setup, the single disk version is called a Nipkow disk. A good search term for the random pattern thing seems to be “single pixel compressed sensing”.
