Colorizing Images With The Help Of AI

The world was never black and white – we simply lacked the technology to capture it in full color. Many have experimented with techniques to colorize black and white images. [Adrian Rosebrock] decided to put an AI on the job, with impressive results.

The method involves training a Convolutional Neural Network (CNN) on a large batch of photos, which have been converted to the Lab colorspace. In this colorspace, images are made up of three channels – lightness, a (green-red), and b (blue-yellow). This colorspace is used because it corresponds more closely to the human visual system than RGB does. The model is then trained such that when given a lightness channel as an input, it can predict the likely a & b channels. These can then be recombined into a colorized image, and converted back to RGB for human consumption.
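To make the setup concrete, here's a minimal numpy sketch of the data-preparation step described above – converting sRGB to Lab and splitting off the L channel as the network input and a/b as the prediction targets. This is an illustrative reimplementation of the standard sRGB→Lab conversion, not [Adrian Rosebrock]'s actual code, and the function name is our own.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an (H, W, 3) sRGB image in [0, 1] to CIE Lab (D65 white)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Undo the sRGB gamma curve to get linear light.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ using the sRGB primaries.
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ m.T
    # Normalize by the D65 reference white, then apply the Lab transfer curve.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    fx, fy, fz = f[..., 0], f[..., 1], f[..., 2]
    return np.stack([116 * fy - 16,       # L: lightness, 0..100
                     500 * (fx - fy),     # a: green(-) to red(+)
                     200 * (fy - fz)],    # b: blue(-) to yellow(+)
                    axis=-1)

# A 1x1 "image" of pure red as a smoke test.
lab = srgb_to_lab(np.array([[[1.0, 0.0, 0.0]]]))
l_input = lab[..., 0]    # what the CNN sees as input
ab_target = lab[..., 1:] # what it learns to predict
print(l_input, ab_target)  # roughly L ~53.2, a ~80.1, b ~67.2
```

At inference time the process runs in reverse: the grayscale photo stands in for the L channel, the network fills in a and b, and the three channels are converted back to RGB.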

It’s a technique capable of doing a decent job on a wide variety of material. Things such as grass, countryside, and ocean are handled particularly well; more complicated scenes can suffer from some aberration. Regardless, it’s a useful technique, and far less tedious than manual methods.

CNNs are doing other great things too, from naming tomatoes to helping out with home automation. Video after the break.


15 thoughts on “Colorizing Images With The Help Of AI”

    1. Looking at the example image above, I have to ask:

      1) I’ve never seen red jeans for sale anywhere, and who wears bright red jeans? Is red the most likely color of jeans? Did the input data have an overwhelming number of red-jeans wearers?

      2) I’ve never seen a beagle with yellow/orange chest, only a white chest. Is yellow/orange the most likely color of the dog’s chest?

      3) The yellow/orange dog casts a yellow/orange shadow?

      4) The helicopter in Jurassic Park is blue and white, not yellow and red.

      5) The algorithm miscolors the pink shirt and red-ish head of the pilot.

      Basically, the algorithm makes everything of high luminance (whites) yellow, and low luminance (darks) red. I don’t see this as anything but an algorithm that doesn’t work.

  1. I wonder if the difference between the historic B&W pictures and a color picture that’s been de-colorized is causing problems. It’s usually pretty obvious when looking at a picture whether it’s a true B&W or just run through a digital filter. I assume the old B&W film reacted to certain wavelengths more strongly, causing that difference.

1. B&W film, old or new, responds more strongly to some wavelengths than others. It depends on the type of film, and using coloured filters is common, e.g. to darken the sky and reduce overall contrast, or to bring out the clouds, or to emphasise foliage, etc.
Any photographer working in B&W with a digital camera will be making the same choices today, just in post-production; software like Lightroom allows you to alter the response to different wavelengths when you do the conversion.
      The only B&W photos produced with a ‘flat’ wavelength response are produced by a simple desaturation to luminance. This isn’t commonly done by photographers, but is done by programmers who don’t realise the nuances of B&W photos. It doesn’t usually look as nice (largely because no thought has gone into it).

      However, your ability to spot old photos is probably based more on the grain, contrast, and other factors like the subject.
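The “flat” desaturation mentioned above is easy to see in a few lines of numpy. This is a sketch with made-up function names, contrasting a naive equal-weight channel average against the Rec. 709 luma weighting that photo software typically uses (which mimics the eye’s stronger sensitivity to green):

```python
import numpy as np

def flat_gray(rgb):
    """Naive channel average: equal 'response' to all three channels."""
    return rgb.mean(axis=-1)

def luma_gray(rgb):
    """Rec. 709 weighted luminance: closer to perceived brightness."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

# Saturated green looks bright to the eye, but a flat average underrates it.
green = np.array([[0.0, 1.0, 0.0]])
print(flat_gray(green))  # [0.33333333]
print(luma_gray(green))  # [0.7152]
```

Weighting the channels differently is the digital equivalent of shooting film through a coloured filter, which is why a straight desaturation rarely matches the look of a real B&W photograph.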
