AI And Savvy Marketing Create Dubious Moon Photos

Taking a high-resolution photo of the moon is a surprisingly difficult task. Not only is a long enough lens required, but the camera typically needs to be mounted on a tracking system of some kind, as the moon moves too fast for the long exposure times needed. That’s why plenty were skeptical of Samsung’s claims that their latest smartphone cameras could actually photograph this celestial body with any degree of detail. It turns out that this skepticism might be warranted.

Samsung’s marketing department claims that this phone uses artificial intelligence to improve photos, which should quickly raise a red flag for anyone technically minded. [ibreakphotos] wanted to put this to the test rather than speculate, so a high-resolution image of the moon was modified in such a way that most of its fine detail was lost. Displaying this degraded image on a monitor, standing across the room, and photographing it with the smartphone in question produced a picture with details that can’t possibly be there.
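For anyone who wants to reproduce the setup, the degraded test image is straightforward to make. Here’s a minimal sketch using Pillow; the filenames, downscale factor, and blur radius are placeholders, not necessarily what [ibreakphotos] used:

from PIL import Image, ImageFilter

# Start from any high-resolution photo of the moon (placeholder filename).
moon = Image.open("moon_highres.png")

# Throw away fine detail: downscale hard, then blur whatever is left.
small = moon.resize((170, 170), Image.LANCZOS)              # destroys resolution
blurred = small.filter(ImageFilter.GaussianBlur(radius=3))  # destroys remaining detail

# Display this full-screen on a monitor and photograph it from across the room.
blurred.save("moon_degraded.png")

Put that on a screen, step back far enough that the screen’s pixels aren’t resolvable, and shoot it with the phone’s long zoom.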

The image that accompanies this post shows the two images side-by-side for those skeptical of these claims, but from what we can tell it looks like this is essentially an AI system copy-pasting the moon into images it thinks are of the moon itself. The AI also seems to need something more moon-like than a ping pong ball to trigger the detail overlay, as other tests appear to debunk a simpler overlay theory. Using this system, though, seems to amount to roughly the same thing that this AI camera does when it takes pictures of various common objects.

48 thoughts on “AI And Savvy Marketing Create Dubious Moon Photos”

    1. If you live in the internet (a shockingly high percentage of people do today, not just the phenotypical nerds you might expect) then most of your environmental information is already falsified by things which could be termed AI of some form.
      The defining characteristic of the future will be the impossibility of discerning what is real. This will be considered natural, and people will retcon all of history in order to believe what they must: that it was always this way.

      1. That’s not anything new. What is new is that for a brief moment in the past 20-30 years, most of the world was more or less in agreement about what was going on, except in Russia, or China, or in North Korea…

        Ask a Brazilian who flew the first powered airplane.

    1. Interesting article. One thing that wasn’t covered, though, is what these ‘up to 20’ photos are. My suspicion is that the novelty is in actively driving the anti-shake mechanism to get ‘subpixel’ differences (on top of the usual exposure variations, etc.), with the AI then compositing those all back together.

      1. That’s exactly how phone camera processing works across the industry. Hand shake is used as the jitter source for superresolution capture (some interchangeable-lens cameras can do a similar trick, but they use the IBIS actuators to perform precise subpixel sensor shifts rather than accepting random shifts and cherry-picking captures).
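
        Very roughly, the multi-frame part looks like the toy shift-and-add sketch below. This is not Samsung’s pipeline, just the textbook idea; the frame offsets are assumed to be known, whereas a real pipeline has to estimate them from the images.

        import numpy as np

        def shift_and_add(frames, offsets, scale=2):
            # Toy superresolution: drop several low-res frames onto a finer grid
            # using their (known) subpixel offsets, then average.
            h, w = frames[0].shape
            hi = np.zeros((h * scale, w * scale))
            weight = np.zeros_like(hi)
            for frame, (dy, dx) in zip(frames, offsets):
                # Nearest fine-grid bin for each low-res pixel, shifted by the hand-shake jitter.
                ys = (np.arange(h)[:, None] * scale + int(round(dy * scale))).clip(0, h * scale - 1)
                xs = (np.arange(w)[None, :] * scale + int(round(dx * scale))).clip(0, w * scale - 1)
                np.add.at(hi, (ys, xs), frame)
                np.add.at(weight, (ys, xs), 1)
            return hi / np.maximum(weight, 1)

        Random hand shake gives you slightly different samplings of the scene; registering and merging them recovers some genuine extra resolution, which is a very different thing from a model inventing detail.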

      2. Read more carefully before calling “complete BS” please. Or are you accusing the person in the reddit thread of faking their test images?

        Because the tests are pretty well designed, and if the results are as reported, I think they are conclusive.

        Taking photos of an intentionally blurred moon image on the monitor led to detail that wasn’t there in the original. Taking a picture of the moon with a grey square overlaid on the moon and on the “sky” ended up with texture added only to the grey square on the moon. Taking a picture (in our headline image) of a full moon and a cropped moon led to “upscaling” of the full moon but not the cropped one.

        If this person is faking their tests and results, then all bets are off, of course. But if you have this phone, everything is there in the post for you to replicate it.

        That last test strongly suggests that Samsung is running a two-stage model that detects the moon first, and then “fixes” it after it knows what it’s fixing.
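
        If that’s what’s happening, the pipeline would look conceptually something like the sketch below. This is pure speculation in Python, with made-up stub functions standing in for whatever Samsung actually runs:

        import numpy as np

        def detect_moon(image):
            # Stub for a hypothetical detector: return a boolean mask of the
            # moon-like disc, or None if nothing moon-like is found.
            bright = image > 0.8                      # placeholder heuristic, not the real thing
            return bright if bright.any() else None

        def moon_enhance(pixels):
            # Stub for a hypothetical moon-specific "restoration" model.
            return pixels                             # a real model would hallucinate detail here

        def process_photo(image):
            mask = detect_moon(image)
            if mask is None:
                return image                          # fall back to ordinary processing
            out = image.copy()
            out[mask] = moon_enhance(image[mask])     # detail gets added only where a moon was found
            return out

        That structure would explain why the grey square pasted onto the moon got texture while the one in the “sky” didn’t, and why the cropped moon was left alone.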

        Many of the debunkings in that Inverse article get the science wrong. Especially laughable is “I took a photo of a crescent of garlic, and it wasn’t upscaled”. Because, first off, a non-result isn’t proof that there’s no effect. And second, it’s pretty darn clear that shouldn’t work anyway.

        The funniest reality would be if the Samsung PR folks didn’t even understand what the AI-imaging folks were doing, but my guess is that there’s some intentional walking-the-line about what goes in and what comes out.

        Good articles on Hackaday:
        https://hackaday.com/2020/11/25/enhance-is-now-a-thing-but-dont-believe-what-you-see/
        https://hackaday.com/2022/03/28/is-the-iphone-camera-too-smart-or-not-smart-enough/
        and one that we used as a source for the latter:
        https://lux.camera/iphone-13-pro-camera-app-intelligent-photography/

        Also, why they’re doing this whole AI image processing stuff:
        https://hackaday.com/2022/06/28/lenses-from-fire-starters-to-smart-phones-and-vr/
        TL;DR: They’re pushing the limits of small plastic lenses as far as they can, optically. The only place left to go is image processing.

        This is a really cool topic, and also a multi-billion dollar industry ATM.

        1. I’d say “conclusive” feels a bit strong given the data, although I guess conclusive is more subjective these days. Also: https://xkcd.com/2268/

          Out of curiosity, did HaD reproduce any of the experiments?

          It is a cool topic, with many aspects. I’m still waiting for my Blade Runner ‘Enhance Image’ type tools.

      3. That article doesn’t understand what it’s talking about, and assumes that it’s only fake if there’s an actual static overlay.

        But that would be very primitive. Samsung are using an AI model trained on photos of the moon to add detail into the photo.

        There won’t be a static image of a moon; the “images” of the moon are in the AI model used to enhance the moon.

        The big smoking gun is also right there in the article. The Samsung beats a proper 600mm lens. That requires adding detail from the AI model that’s simply not present on the Samsung sensor.

        1. Exactly this. It’s not just slapping a moon image on, and it’s not magically improving detail that isn’t there, but they aren’t the only two options.

          When the linked article pointed out that the Samsung phone beat out an objectively higher-spec camera, you’d think they would have stopped there. It seemed to me that after that point they really wanted the Samsung camera to “win” and were blind to what was actually in front of them in their results.

          1. What’s the difference though? The fact that the AI can generate an image out of a model, which it then slaps on the photo is virtually the same as using a photo overlay.

            Think about it. Suppose you were to perform this trick without AI. You would take a 3D sphere and cover it with a relatively high resolution moon texture, then use some algorithm to detect the orientation of the moon in the image – using the time of day, GPS location, etc. – then render a suitable overlay and blend it in for color matching.

            That’s not just a static overlay either, but the difference is a moot point. That is essentially what the AI is doing anyhow, only the algorithm is a black box, so they have plausible deniability since nobody really knows how the ML model works.
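
            Even the “which phase is it” part needs no image data at all. A rough sketch of the phase half, from nothing but the clock (the reference new-moon epoch is a standard one; accuracy within a day or so would be plenty for picking an overlay):

            from datetime import datetime, timezone

            SYNODIC_MONTH = 29.530588853                      # days per lunar cycle
            KNOWN_NEW_MOON = datetime(2000, 1, 6, 18, 14, tzinfo=timezone.utc)

            def moon_phase(when):
                # Fraction of the lunar cycle elapsed: 0 = new, 0.5 = full.
                days = (when - KNOWN_NEW_MOON).total_seconds() / 86400
                return (days % SYNODIC_MONTH) / SYNODIC_MONTH

            print(moon_phase(datetime.now(timezone.utc)))

            Orientation relative to the horizon falls out of the time and GPS position the same way, no pixels required.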

          2. So if I’ve got this right, you are telling us how the algorithm works, and that nobody knows how the algorithm works. The algorithm couldn’t have said it better.

          3. Yep.

            We know the algorithm must pick up -something- from the data set because it can’t generate new data out of nothing. Otherwise it would just pass through whatever random noise you put in. We just don’t know what it picks up and how, because we aren’t looking, so we pretend that it isn’t just passing inputs to outputs through a set of convoluted rules.

      4. “is actively driving the anti-shake mechanism to get ‘subpixel’ differences”

        Subpixel differences only help with quantization effects. They don’t help with diffraction – once you’ve digitized it, you’ve lost all phase information, so the multiple shots don’t help. Given the ridiculously tiny lens and the relatively huge sensor resolution, subpixel differences won’t help.

        I mean, that lens is what, 10x the diameter?
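
        A back-of-envelope check, assuming a roughly 5 mm aperture for the phone’s tele lens and green light – both assumptions, not Samsung’s published specs:

        import math

        wavelength = 550e-9        # metres, green light (assumed)
        aperture   = 5e-3          # metres, guessed aperture of the phone's tele lens

        # Rayleigh criterion: smallest resolvable angle for a circular aperture.
        theta = 1.22 * wavelength / aperture            # radians
        moon_angle = math.radians(0.52)                 # the Moon spans about 0.52 degrees

        print(moon_angle / theta)   # roughly how many resolution elements across the Moon

        That comes out to somewhere under a hundred resolution elements across the whole lunar disc, and no amount of sub-pixel stacking gets you past that limit.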

        I don’t get why this is a surprising thing – they flat out say they’re using AI and machine learning, and of course machine learning hallucinates features. The information just isn’t there. It’s well known; that’s why it’s not safe to use in, say, the medical field.

        I mean, if you really want to see it go bonkers, just give it features that aren’t in the training set, like timing it to capture an image of the Moon when the ISS is transiting. That’d be hilarious. Or taking a photo when Saturn’s in conjunction. It’d be like “here’s this ultra-sharp moon image and WTF IS THAT THING BESIDE IT.”

    2. That debunks the much less believable AlexTV claims, which apparently were presented in a tweet that has since been removed. The reddit post linked here is more plausible, though of course it would also be easy for the poster to fake.

    3. Your article is saying just that: the 100x zoom only works for scenes the software has been trained for. Of course, it’s not quite slapping a texture over a moon-ish object, but it’s not an honest zoom with no trickery involved, either.

    1. Gruyère is the best training dataset IMHO. As a noteworthy point, it takes much less space than other training datasets. Because as the proverb says, “Gruyère has many holes; if there is more Gruyère then there are more holes; if there are more holes then there is less Gruyère; therefore the more Gruyère you have, the less Gruyère you have.” Which means that the larger your training dataset is, the smaller it is.

  1. It’s pretty clearly not a cut&paste thing.

    However, superresolution algorithms in various forms will easily upgrade a blurry moon to a sharp one. This will add detail that’s baked into the algorithm/network, though, so the accusation that the camera is adding detail that’s not there is real. I’ve used similar algorithms to upscale old blurry photos to surprising levels.

    The article linked to by the first commenter is good evidence that data is being inserted. The shot taken with the Sony A7 is likely all you can expect given his camera setup and atmospheric conditions, regardless of resolution. So the Samsung’s level of sharpness makes it pretty obvious that the camera recognized the image as the moon and ran it through a moon-specific superresolution algorithm.

    Either way, these algos will become more commonplace as the cost of creating them goes down.

    1. “However, superresolution algorithms in various forms will easily upgrade a blurry moon to a sharp one.”

      Using a super-resolution algorithm on a unique known object might as well be Photoshop, though. If you think about it, it doesn’t make any sense.

      At least in astronomy, if you think about the simplest superresolution techniques, you start off by saying, for instance, “I know I have a binary star – two point sources – so I can measure separations below traditional Airy limits by just modeling the point-spread function and comparing the diffraction pattern as a function of separation.”

      That makes sense. You have a model of an object (a point source), but now you know you have *two* of them, so you find the separation based on parameterizing the image. You’re applying knowledge of a *known* source (a point source) to an *unknown* image which is similar, but not the same (there’s two of them), and gaining information from that knowledge.
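
      In code, that kind of model-based superresolution is just curve fitting. A toy 1-D version, with a Gaussian standing in for the real Airy-pattern PSF and all the numbers invented:

      import numpy as np
      from scipy.optimize import curve_fit

      x = np.linspace(-10, 10, 200)
      psf_width = 2.0                                   # "diffraction" width of the PSF

      def two_stars(x, sep, amp1, amp2):
          # Two point sources separated by `sep`, each blurred by the same Gaussian PSF.
          g = lambda c: np.exp(-((x - c) ** 2) / (2 * psf_width ** 2))
          return amp1 * g(-sep / 2) + amp2 * g(sep / 2)

      # Simulated observation: separation well below the PSF width, plus a little noise.
      true_sep = 0.8
      data = two_stars(x, true_sep, 1.0, 0.9) + np.random.normal(0, 0.01, x.size)

      popt, _ = curve_fit(two_stars, x, data, p0=[1.0, 1.0, 1.0])
      print(popt[0])   # recovered separation, despite being "unresolvable"

      The fit gets the separation back because the model (two identical PSFs) is known to be correct.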

      In this case, though… it’s the same, unique object. On the scales of these images and the timeline of the camera, it doesn’t change. Ever. It’s exactly the same. This wouldn’t work for Mars. Or Jupiter. Or Saturn. So what the heck is the point? You might as well just be substituting in someone else’s photo for the Moon. It doesn’t actually *gain* anything.

      Like, what is it actually using the image data that it gets *for*? To determine moon phase and orientation? You could do that with a friggin’ calendar and GPS.

      1. “So what the heck is the point? You might as well just be substituting in someone else’s photo for the Moon.”

        That’s exactly the point! NASA has plenty of awesome pictures of the moon, and they’re public domain to boot. Why not blend them in with your picture of the moon? Problem solved!

        Seriously, I’d love to know exactly what’s on the (probably short) list of objects that get special treatment in Samsung’s processing.

        1. I would use a large dataset of high-quality images, edit them to add the limitations of a smartphone camera, and then train a model to output something close to the high-quality images corresponding to the low-quality ones it gets as input. The moon only gets special treatment in that case because it’s something people take pictures of a lot (and maybe the devs on this ML project checked that the moon looks good).
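
          In other words, the training pairs come from degrading good images yourself. A sketch of that step (degradation parameters made up, not whatever Samsung actually does):

          import numpy as np
          from PIL import Image, ImageFilter

          def make_training_pair(path, factor=4, noise_sigma=5.0):
              # Return (degraded, target): a fake "smartphone-quality" input and the
              # high-quality original the model should learn to reproduce.
              target = Image.open(path).convert("RGB")
              w, h = target.size

              # Imitate a small sensor and lens: downscale, blur, add noise, upscale back.
              small = target.resize((w // factor, h // factor), Image.BILINEAR)
              soft = small.filter(ImageFilter.GaussianBlur(radius=1))
              noisy = np.asarray(soft, dtype=np.float32)
              noisy += np.random.normal(0, noise_sigma, noisy.shape)
              degraded = Image.fromarray(noisy.clip(0, 255).astype(np.uint8)).resize((w, h), Image.BILINEAR)

              return degraded, target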

          1. That’s the basic idea behind superresolution algorithms. You teach them on narrow (and sometimes broad) subsets of image information, and they insert details where something looks familiar enough that they think it fits.

            I’ve trained nets with close up photos of family members, family buildings, and the like, and then run the algo on old slightly blurry photos. The results were sometimes striking, and that’s for something I did only as an aside during an enforced quarantine.

          2. “The moon only gets special treatment in that case because it’s something people take pictures of a lot”

            The Moon is special because it’s the same. On the scale of these photos, it doesn’t change. There’s literally nothing else like it.

            Everything else you take pictures of, machine learning’s a guess. People, buildings, landscapes – all of those have temporal detail that machine learning can’t have in its dataset unless a better version of your picture’s already available. It might be minor differences, but they’re real.

            Train a model on pictures of the Moon in all phases and you’ve got all the information you need. That’s the trouble here: you can be *extremely* aggressive adding info to the Moon pictures, because the match is always perfect.

  2. Typically you don’t need long exposures to get sharp photos of the moon with a DSLR; the moon is a very bright object. I’ve taken plenty of sharp photos with a 400mm f/5.6 lens, handheld.

    The rest of the article is very informative though; I was previously baffled by how good the moon came out on my S22+!

    1. That’s partly why the “we stack up to 20 images” makes so little sense to me. It’s the Moon. You don’t need more photons. And you’ve got a greater-than-100MP sensor. You don’t even need sub-pixel information; the thing’s oversampled. You’re already tracing out the diffraction pattern as it is.

    2. Facts:
      1) The moon receives very nearly the same solar radiation as the Earth does.
      2) The moon’s albedo (the amount of incident light it reflects) is about the same as dusty asphalt, so if your camera can take a picture of a street in broad daylight, it can take a picture of the moon without “long exposures”.
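
      That’s the old “Looney 11” rule of thumb: for the full moon, expose at about f/11 with the shutter at 1/ISO. As arithmetic (hypothetical settings, just to show the scale):

      # Looney 11 rule of thumb: roughly correct exposure for the full moon
      # is f/11 with shutter speed = 1 / ISO.
      iso = 200
      aperture = 11
      shutter = 1.0 / iso                      # seconds, i.e. 1/200 s
      print(f"f/{aperture}, ISO {iso}, 1/{iso} s")

      # Equivalent exposure at f/5.6 scales with the square of the f-number ratio:
      print((5.6 / aperture) ** 2 * shutter)   # ~1/770 s - easily handheld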

  3. Well, when they sell an AI-enhanced camera, it’s not too surprising, and actually quite accurate, to have the AI stitch new pictures from your poor shots… what would you expect otherwise?

    Pretty sure this is bad news for photographers, but is it a cheat to sell more phones? What would an AI-enhanced picture be otherwise?

    1. All new technologies seem to have their roots in pr0n or one of its offshoots. VHS. Streaming. 3D. Virtual reality. Haptics. I bet Z’s metaverse would be a blazing success if he were to have a sin city in it.

  4. Interesting points made.
    I did read somewhere that a lot of UAP pictures have been AI-processed and found to be… surprise! Mundane objects. Like planes. Or meteors. A few turned out to be Jupiter.
    Supposedly in the 1950s there were some very clear pictures of something “odd” in the sky that turned out to be some super secret aircraft or other.
    These days unless you live near Area 51 and have some ridiculously expen$ive camera you’re not going to get much more than a blurred image.

  5. I don’t believe that the conclusions drawn here are correct. Blurring an image does not remove information from the image. Also, your test photo where you clipped the highlights further illustrates that something more is happening to the image. If the camera is overlaying textures of the Moon onto the image, then why doesn’t it produce a better moon photo, or at least one identical to the situation where no clipping was applied?

    Instead, what I think is going on here is that they’re using AI to determine the location of the moon in the photo and then using some type of AI-driven deconvolution algorithm to deblur the photo.

    Links about deconvolution:
    https://www.olympus-lifescience.com/en/microscope-resource/primer/digitalimaging/deconvolution/deconintro/

    https://www.mathworks.com/help/images/deblurring-images-using-a-wiener-filter.html
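
    For reference, classical deconvolution with a known blur kernel looks something like the sketch below, using scikit-image’s Richardson–Lucy routine; the PSF and noise level here are invented:

    import numpy as np
    from scipy.signal import convolve2d
    from skimage import data, restoration

    # A known test image, blurred by a small flat PSF (stand-in for lens/atmosphere blur).
    image = data.camera() / 255.0
    psf = np.ones((5, 5)) / 25
    blurred = convolve2d(image, psf, mode="same", boundary="symm")
    blurred += np.random.normal(0, 0.005, blurred.shape)   # a little sensor noise

    # Classic (non-AI) deconvolution: it redistributes the information that was
    # captured, but cannot invent detail that the blur and noise destroyed.
    restored = restoration.richardson_lucy(blurred, psf, 30)

    Whether Samsung’s step is a learned network or something closer to this, the key difference is that a trained model can also inject detail it remembers from training data, which deconvolution alone cannot.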

    1. He doesn’t just blur, though. He reduces the resolution first, then blurs it. The photo comes back with higher resolution than the source image.

      Well, of the moon. It didn’t realize that it was actually taking a picture of a monitor, and should have been upscaling the Bayer pattern or whatever is on monitors when you look at them with a magnifying glass.

      But deconvolution or not, you cannot add more information to an image; you can only move around what’s already there.
