When you think of image processing, you probably don’t think of the Arduino. [Jan Gromes] did, though. Using a camera and an Arduino Mega, [Jan] was able to decode input from an Arduino-connected camera into raw image data. We aren’t sure about [Jan’s] use case, but we can think of lots of reasons you might want to know what is hiding inside a compressed JPEG from the camera.
The Mega is key, because–as you might expect–you need plenty of memory to deal with photos. There is also an SD card for auxiliary storage. The camera code is straightforward and saves the image to the SD card. The interesting part is the decoding.
The use case mentioned in the post is sending image data across a potentially lossy communication channel. Because JPEG is compressed in a lossy way, losing some part of a JPEG will likely render it useless. But sending raw image data means that lost or wrong data will just cause visual artifacts (think snow on an old TV screen) and your brain is pretty good at interpreting lossy images like that.
Just to test that theory, we took one of [Joe Kim’s] illustrations, saved it as a JPEG, and corrupted just a few bytes in a single spot. You can see the before (left) and after (right) pictures below. You can still make it out, but the effect of just a few bytes in one spot is far-reaching.
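The experiment is easy to reproduce. Here is a minimal sketch (our own illustration, not the exact steps we used) of flipping a few bytes at one offset in an in-memory buffer; in a raw image the damage stays put, while in a JPEG the entropy-coded stream after that point decodes wrongly and the error smears across the rest of the picture.

```python
def corrupt(data: bytes, offset: int, count: int = 3) -> bytes:
    """Return a copy of `data` with `count` bytes bit-inverted at `offset`."""
    buf = bytearray(data)
    for i in range(offset, min(offset + count, len(buf))):
        buf[i] ^= 0xFF  # invert every bit of this byte
    return bytes(buf)

# In a raw RGB buffer, only the pixels at this offset are damaged.
# In a JPEG, everything decoded after this point goes wrong.
raw = bytes(range(16)) * 4
bad = corrupt(raw, offset=8)
```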
The code uses a library that returns 16-bit RGB images. The library was meant for displaying images on a screen, but then again it doesn’t really know what you are doing with the results. It isn’t hard to imagine using the data to detect a specific color, find edges in the image, detect motion, and other simple tasks.
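The 16-bit format display libraries use is almost certainly RGB565 (5 bits red, 6 bits green, 5 bits blue). A minimal sketch of unpacking such a pixel for a crude color-detection task (the threshold values are our own assumption for illustration):

```python
def rgb565_to_rgb888(pixel: int):
    """Unpack a 16-bit RGB565 pixel into 8-bit R, G, B channels."""
    r = (pixel >> 11) & 0x1F          # 5 bits of red
    g = (pixel >> 5) & 0x3F           # 6 bits of green
    b = pixel & 0x1F                  # 5 bits of blue
    # Scale to 0-255 by bit replication
    return (r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2)

def is_red(pixel: int, threshold: int = 160) -> bool:
    """Crude 'is this pixel red?' test; threshold is illustrative."""
    r, g, b = rgb565_to_rgb888(pixel)
    return r > threshold and g < threshold // 2 and b < threshold // 2
```

Scanning decoded pixels with a test like this is about the level of processing a Mega can realistically manage per frame.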
Sending the uncompressed image data might be good for error resilience, but it isn’t good for impatient people. At 115,200 baud, [Jan] says it takes about a minute to move a raw picture from the Arduino to a PC.
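That number is easy to sanity-check. A UART at 115,200 baud with 8N1 framing carries 10 bits per byte on the wire, so about 11,520 bytes per second; the resolution below is our assumption, since the post doesn’t state it.

```python
def transfer_seconds(width: int, height: int,
                     bytes_per_pixel: int = 2, baud: int = 115200) -> float:
    """Time to push one raw frame over a UART (8N1: 10 bits per byte)."""
    payload = width * height * bytes_per_pixel   # raw RGB565 frame size
    return payload / (baud / 10)

# A 640x480 RGB565 frame is 614,400 bytes, or roughly 53 seconds --
# which lines up with "about a minute" once you add protocol overhead.
```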
We’ve seen the Arduino handle a single pixel at a time. Even in color. The Arduino might not be your first choice for an image processing platform, but clearly, you can do some things with it.
I like the idea of being able to get to the decompressed image data for processing reasons, but wouldn’t forward error correction be a better approach to the lossy channel than streaming decompressed data? I’d love to see a convolutional code library (or even a block code). (Hmmm, maybe yet another side project….)
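A convolutional code library would be a project in itself, but the block-code idea can be sketched in a few lines. Below is a minimal Hamming(7,4) encoder/decoder (our own illustration, not anything from the post): it spends 3 parity bits per 4 data bits and can correct any single flipped bit in a codeword.

```python
def hamming74_encode(nibble: int) -> int:
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]           # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]           # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]           # covers positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]  # positions 1..7
    word = 0
    for i, b in enumerate(bits):
        word |= b << i
    return word

def hamming74_decode(word: int) -> int:
    """Decode a 7-bit codeword, correcting up to one flipped bit."""
    bits = [(word >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)  # 1-based position of the error
    if syndrome:
        bits[syndrome - 1] ^= 1            # flip it back
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)
```

Running every nibble of the raw image through something like this nearly doubles the data, but turns isolated bit errors into non-events instead of visual snow.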
If the data is a stream off a camera that mostly shows the same content, then you can compute a large shared set of coefficients and only send a reference to them for each image section. That way, at worst, you have a single area out of place for each corrupt value. You can have intelligent error correction too, one that finds the flipped bit based on which image block would fit better, with no sharp unnatural transitions between pixels on image block boundaries.
seems like stuffing YMODEM on there would’ve been simpler/faster.
If you’ve got a two-way channel, and can spare the time for retries, then definitely so. If it’s a one-way channel or broadcast medium, then that will be a challenge. I don’t see a link to the project, so who knows?
However, I think the ‘send it uncompressed instead’ approach is maybe a bit too simplistic, and it doesn’t address the underlying cause of the data loss. But hey, if Jan is happy with the results, then ‘OK’.
I was mainly interested in whether it is actually possible to decode JPEG on an Arduino and then do something with the data. The idea behind sending uncompressed data was that if there is a camera, and the pictures would only be seen by a human, then this is probably the simplest solution – although far from efficient.
Yes the decoding capability itself is definitely interesting. I don’t see any project links in the article, though. :(
That’s mainly because the main project is nowhere near complete right now :) When it’s finished though, it will be published as an article on the site, so just keep an eye out ;)
sweet, I could possibly use the decoder lib to front some image processing
Right, I would have scrapped that whole section and just replaced it with “Why? BECAUSE WE CAN!”
This reminded me a lot of the Hellscriber https://hackaday.com/2015/12/30/messages-from-hell-human-signal-processing/ — Same idea… send words as pictures and your brain can excuse the occasional mess up.
Awesome article! I’d never heard of that device.
Of course, Shannon-Hartley still rules in the end — trading off bandwidth (for the images) in exchange for S/N (the cause of dropouts). But Rudolf Hell seems to have been on the right track intuitively — just not quite the right abstraction, and not quite the right math. Would Shannon have not been moved to do his foundational work had it not been for the war effort, and his employer?
There’s this high-speed SSDV work that’s going on in amateur radio circles. Somehow they are transmitting in a way that some errors won’t result in a totally undecodable JPEG file. http://www.rowetel.com/?p=5344
Add periodic restart markers. It reduces compression a little bit, but it is much more robust.
Not sure why the world at large has not shifted over to PNG or another lossless (but compressible) format? Yes, the files are slightly larger but you don’t have to deal with all of the issues that compressed JPEG files have. Maybe people just prefer “good enough” more often than not?
For more details, take a look at this discussion that goes a bit more into the details.
https://superuser.com/questions/845394/how-is-png-lossless-given-that-it-has-a-compression-parameter
I’m not sure that changing to lossless will improve the original problem: that being that data dropouts cause undecodability — it will happen with both PNG and JPG (maybe even worse with PNG). The problem is not the compression, but rather that subsequent data is predicated upon previous data, and thus any errors propagate forward during decoding. How bad this is, is a function of the scheme.
As for why the world has not switched to PNG instead of JPG, it’s because they have different qualities. For example, when I post logs on this site, I use JPG for photos, because I get about an order of magnitude size reduction relative to PNG of the same image with no significant perceptual loss in quality. But for screenshots of, say, an oscilloscope, I use PNG, because it does compress very well on those kinds of images, and JPG introduces artifacts around the sharp edges in those images.
Is there an image format that is compressible but also doesn’t rely on the data being received fully intact and in sequence? You can’t recreate a fully compressed image from damaged data, but I was thinking more of another method of transmitting data so it survives imperfect reception. Similar to image data sent from very long range telescopes and such? This feels like a solved problem but I can’t think of any specific examples.
Does PNG depend on data being received sequentially? Is there a “torrent like” image format that fills in image data as it is received and is more resilient to being lossy?
It might bump up the file size some, and maybe this is better resolved with better CRC or other data-reception confirmation.
I understand the differences in file size and wholeheartedly agree that JPEG is smaller, particularly for photos. I guess my point is more: why do we care about a single order of magnitude that much? Being concerned about going from 50k to 500k in file size, when the world has moved to terabytes of data and gigabit fiber, just feels largely unnecessary in most cases. Both computationally and in terms of data loss, the benefit of small file sizes seems increasingly less important now than when everybody had 9600 baud modems and dial-up.
JPEG is still very important: a “single order of magnitude” may not sound important when you compare 50k to 500k, but when you compare the size of the average JPEG image you take using a smartphone these days, that’s around 4MB (already JPEG compressed, at 12MP). 4MB vs 40MB suddenly makes it a lot more important, especially when you realize the average smartphone has about 16 to 32GB of storage space, a lot of which is used for apps, etc. By using JPEG, you can store around 4000 images in 16GB. Would they be 40MB each, you can only store 400.
Yes, we don’t use 9600 baud modems anymore, but instead we use 12MP images, so JPEG makes as much sense as ever (although better alternatives exist these days, like BPG, but they fail to catch on due to the massive (hardware) support for JPEG). For movies, where compression is even more important, new codecs are still being developed and adopted (H264, H265, etc., all based more or less on the same compression ideas popularized by JPEG).
Oh, I use 9600 baud modems all the time. Welcome to IoT….
To your first question, I am not aware of such a format specifically, but that certainly doesn’t mean there isn’t one or that one can’t be made. As a somewhat klunky example, consider this: I’m sure you’ve watched TV and seen the image go all cattywampus and pixellated for a few moments, then snap back to normal. That is because the mpeg stream periodically sends full frames that are /not/ predicated on previous frames. It’s bad for compression, so it’s only done every so often, but good for keeping the image stream on track when there are dropouts. (Audio formats have similar properties.) This is in the time domain, though, not the spatial domain you’re talking about here.
Anyway, making a format that is robust against data corruption is totally doable, but that’s generally a separate problem from compression, and is part of coding theory. I only say ‘generally’ to cover the trivial case of periodically rebooting the compressor and accepting dropouts, like the mpeg example above, otherwise I’d say ‘always’.
Compression and Coding are two sides of the same coin:
* compression removes extra data that the model can predict, and therefore does not need to communicate. It can do this by clever encoding (lossless), or by deciding to throw stuff out altogether (lossy)
* error detection/correction coding works by adding in extra data so that you can at least determine if what you got is wrong (detection), and by carefully designing it, be able to compute what the damaged data would have been had you received it correctly.
As another klunky (but less so) example, consider CDs: the raw error rate on CDs even fresh from the factory is really bad: about one in a couple hundred bits. But with adding Reed-Solomon code, the error rate is improved to one in several million bits. Yay for math and long division!
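The detection/correction split above can be made concrete with two toy examples (our own illustration): a CRC that only detects damage, and a 3x repetition code whose majority vote can actually repair it. Real systems like CDs use Reed-Solomon instead, which gets far better protection per bit of overhead.

```python
import zlib

# Detection: a CRC tells you the block is damaged, but not where.
def crc_ok(payload: bytes, crc: int) -> bool:
    return zlib.crc32(payload) == crc

# Correction (toy): send everything three times and majority-vote.
def repetition_encode(data: bytes) -> bytes:
    return data * 3

def repetition_decode(coded: bytes) -> bytes:
    n = len(coded) // 3
    a, b, c = coded[:n], coded[n:2 * n], coded[2 * n:]
    # Per-byte, per-bit majority vote: (a&b) | (a&c) | (b&c)
    return bytes((x & y) | (x & z) | (y & z) for x, y, z in zip(a, b, c))
```

Any single corrupted copy of a byte is outvoted by the other two, at the cost of tripling the data — which is exactly why cleverer codes exist.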
I hope they keep JPEG for the ads and switch to PNG for the webcomics.
What’s a good choice for an image-processing microcontroller these days? One would think they would have continued to evolve, but it feels like the Arduino is not really the right platform, as it needs far more bandwidth for basically everything image related. Plus, the data transfer, display update bandwidth, raw CPU, physical storage, etc. have all failed to scale up over the years despite technology in general doing so.
An FPGA could do it without problem, except for the price $$$
I would think any of the ARM processors (like the STM32 Blue Pill we covered a while back) would have enough horsepower for nearly anything.
I’m just speaking through my hat here, and welcome to correction (if said correction is Firm, but Fair B^)
It isn’t a microcontroller (but then, neither is an Arduino, per se), but the Raspberry Pi has versions that handle a camera.
Some of the STM32F4 parts (such as the F469/F479) likely can, since they have a MIPI camera interface. (Though I am inexperienced with them.) But they’re about $15, and you probably want cheap.
Ah, the STM32F446 costs $7 at Mouser and has a DCMI camera interface.
You may want to check out the JeVois quad core camera at http://jevois.org
When are people gonna forget about the PC when it comes to the Arduino?
If you’re using a computer, you might as well connect the hardware directly?
Ah, literally stumbled on this project other night, it’s a good one. Ran into the issue of jpg compression when I’m trying to send a file one-way over a serial port (and ultimately thru an opto-isolator as a comp. security project). Should’ve known…I wanted to be able to scan an SD card and send each file over on it, but I’m afraid I’ll have to scale it back to likely one huge text or csv file…meh.
Is there a good way to attempt to get corrupted images back? Maybe a program/GitHub project that jumbles up bits until the image looks better, using feedback from a human viewer or similarities to a set of images.
Say one took a hundred pictures of the landscape using a tripod-mounted camera; there should be a lot of attributes that could be used to recover from any data glitches.
This is not true: “Because JPEG is compressed in a lossy way, losing some part of a JPEG will likely render it useless.” It’s true that losing some part of it will lead to bad reconstruction, but that’s not due to the lossiness of the format. It’s due to the encoding. There are lossless formats that suffer the same dependence.
Hello.
Stumbled on a video youtu.be/WGMz92CeH_E in which a TFT display shows JPG files stored on a micro SD card. And all this is done with the help of an Arduino Uno. I wonder how they managed to achieve such a speed of rendering?