Is The iPhone Camera Too Smart? Or Not Smart Enough?

What is a photograph? Technically and literally speaking, it’s a drawing (graph) of light (photo). Sentimentally speaking, it’s a moment in time, captured for all eternity, or until the medium itself rots away. Originally, these light-drawings were recorded on film that had to be developed with a chemical process, but are nowadays often captured by a digital image sensor and available for instant admiration. Anyone can take a photograph, but producing a good one requires some skill — knowing how to use the light and the camera in concert to capture an image.

Eye-Dynamic Range

The point of a camera is to preserve what the human eye sees in a single moment in space-time. This is difficult because eyes have what is described as high dynamic range. Our eyes can process many exposure levels in real time, which is why we can look at a bright sky and pick out details in the white fluffy clouds. But a camera lens can only deal with one exposure level at a time.

In the past, photographers would create high dynamic range images by taking multiple exposures of the same scene and stitching them together. Done just right, each element in the image looks as it does in your mind’s eye. Done wrong, it robs the image of contrast and you end up with a murky, surreal soup.

Image via KubxLab

Newer iPhone Pro cameras are attempting to do HDR, and much more, with each shot, whether the user wants it or not. It’s called computational photography — image capture and processing that uses digital computation rather than optical processes. When the user presses the shutter button, the camera captures up to nine frames, across different lenses, each with a different exposure level. Then the “Deep Fusion” feature takes the cleanest parts of each shot and stitches them together into an image with extremely high dynamic range. Specifically, the iPhone 13 Pro’s camera has three lenses and uses machine learning to automatically adjust lighting and focus. Sometimes it switches between them, sometimes it uses data from all of them. It’s arguably more software than hardware. And so what is a camera, exactly? At this point, it’s a package deal.
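
To get a feel for what that bracketing-and-merging amounts to, here is a minimal sketch of naive exposure fusion in Python with NumPy. The mid-gray weighting and the three hypothetical frames are our own illustrative assumptions, a rough stand-in for the general technique rather than Apple’s actual Deep Fusion pipeline.

```python
import numpy as np

def fuse_exposures(frames):
    """Naive exposure fusion: merge differently exposed frames of one scene.

    frames: list of float arrays scaled to [0, 1].
    Each pixel in the result is a weighted average across frames, where the
    weight favors well-exposed (mid-tone) pixels and penalizes pixels that
    are blown out or crushed to black.
    """
    frames = [np.clip(np.asarray(f, dtype=np.float64), 0.0, 1.0) for f in frames]
    # "Well-exposedness" weight: a Gaussian centered on mid-gray (0.5).
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2)) for f in frames]
    total = np.sum(weights, axis=0) + 1e-12  # avoid divide-by-zero
    return np.sum([w * f for w, f in zip(weights, frames)], axis=0) / total

# Hypothetical usage: an under-, normally-, and over-exposed frame of a scene.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.uniform(size=(4, 4))      # stand-in for true scene radiance
    under = np.clip(scene * 0.25, 0, 1)   # -2 EV: shadows crushed
    over = np.clip(scene * 4.0, 0, 1)     # +2 EV: highlights clipped
    print(fuse_exposures([under, scene, over]))
```

Real pipelines also align the frames and weight for sharpness and noise, but the core move is the same: keep each pixel from whichever frame exposed it best.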

Tarted-Up Toddlers

Various cameras have been prized over the years for the unique effects they give to the light-drawings they produce, like the Polaroid, the Diana F, or the Brownie. The newer iPhone cameras can wear all of these hats and still bring more to the table, but is that what we want? What if it comes at the cost of control over our own creations? Whereas earlier iPhones would let the user shoot in RAW mode, the newer ones hide it away.

Just the other day, our own Tom Nardi received a picture from his daughter’s preschool. Nothing professional, just something one of the staff took with their phone — a practice that has become more common with COVID protocols still in place. Tom was shocked to see his daughter looking lipsticked and rosy-cheeked, as though she’d been made up for some child beauty pageant or a “high-fashion” photo session at the mall. In reality, there was some kind of filter in place that turned her sweet little face into 3-going-on-30.

Whether the photographer was aware that this feature-altering filter was active is another matter. In this case, they had forgotten the filter was on, and turned it off for the rest of the pictures. The point is, cameras shouldn’t alter reality, at least not in ways that make us uncomfortable. What’s fine for an adult is usually not meant for children, and beauty filters are definitely in that category. The ultimate issue here is the ubiquity of the iPhone — it has the power to shape the standard of ‘normal’ pictures going forward. And doing so by locking the user out of the choice is a huge problem.

Bad Apples?

Whereas the Polaroid et al. recorded reality in interesting ways, the iPhone camera distorts reality in creepy ways. Users have reported that images look odd and uncanny, or over-processed. The camera treats the low light of dusk as a problem to be solved or a blemish to erase, rather than an interesting phenomenon worth recording. The difference is using cameras to capture what the eye sees, versus capturing reality and turning it into some predetermined image ideal that couldn’t have been achieved with any one traditional camera.

You can’t fault Apple for trying to get the absolute most they can out of tiny camera lenses that aren’t really supposed to bend that way. But when the software they produce purposely distorts reality and removes the choice to see things as they really are, then we have a problem. With great power comes great responsibility and all that. In the name of smoothing out sensor noise, the camera is already doing a significant amount of guessing, painting in what it thinks is in your image. As the camera does more processing and interpreting, Apple will either add more controls to manage these features, or keep the interface sleek, minimalist, and streamlined, taking control away from the user.

Where does it end? If Apple got enough pressure, would they build certain other distortions into the software? When the only control we have over a tool is whether to use it at all, something has gone wrong. Apple should keep striving to capture reality as the eye sees it, not massage it toward some ideal.

61 thoughts on “Is The iPhone Camera Too Smart? Or Not Smart Enough?”

  1. As the saying goes, “art is what you like”. Not everyone likes HDR. Lots of people love it. There are many “knobs” you can turn to fiddle with it. There are many flavors of HDR. The only question here is whether Apple provides enough knobs, or access to the knobs, or selects defaults you like and/or can change. Photographs are not reality, so it is just a matter of whether or not you like a particular mapping of reality to image. More important is that you have the ability to control the mapping.

      1. Well there is certainly a degree of that, as no tiny camera and lens can really capture the detail at every range and ambient light level the way smartphones pretend to – so there is a huge amount of processing required just to make a halfway reasonable image, and it’s always visible in the smartphone (and often webcam) output to some extent, no matter the brand. But I don’t actually think that is entirely a bad thing – if you are going to have such high-priced utility electronics, being able to take decent video and still images without the user needing to understand and set up fancy image post-processing is just sensible; it makes them really useful for image capture.

        DSLR-type cameras, on the other hand, need some skill to use artistically, but with how good the noob-friendly automatic modes are now, they can capture a good and reasonably faithful image without requiring any more skill than point and click… simply because the sensor and light-capture area is vastly greater and there is an adjustable lens able to put the target properly in focus.

        1. A “camera” has two points

          1) to objectively record what is presented to the lens
          2) to make “nice pictures”

          For certain users, the first is the main point because they want to do stuff with the data: they want to try different things, extract different information and re-interpret it in various ways. In this sense, all the algorithms do not save a bad camera – they make it worse. A camera that processes the heck out of the picture is a nightmare because it destroys the data and replaces it with “art”.

          For others, they just want a nice looking picture, and they don’t care that it looks nothing like what originally came in through the lens. They want a camera that is actually a professional photographer in their pocket – who so happens to be a clone of the professional photographer in everyone else’s pockets as well.

          1. What’s funny is that I actually think my Pixel’s computationally-augmented camera is better for the “record what’s presented” use case than the “make a pretty picture” use case. Some reasons:
            1. The picture almost always won’t be good enough for print anyway. Automatic noise reduction blurs fine details and the images don’t look great in print.
            2. Artistic control is taken away. I can shoot in RAW, but the image looks bad without the computational photography extracting more information from the sensor.
            3. The phone provides no aperture control and has a fixed selection of lenses. This is fine for recording information, but an SLR with adjustable aperture and swappable lenses is better for art.
            4. The phone is always with me, ready to record events as they happen. Art can also be spontaneous, but more often than not it’s something where I can afford to grab an SLR and go looking for a subject.
            5. Autofocus, HDR, and other such features work together to collect as much info about the scene as possible, as fast as possible. Now imagine this exaggerated scenario – you are witnessing a crime, and you need to snap a photo quickly and then get away.

            SLR: You pull out the device, wait for it to focus, snap your shot – and then oops, it turns out all the important evidence was in the shadows, which came out underexposed.

            Phone: You pull out the device, and laser AF instantly has the scene in focus. You take your shot, and HDR ensures that EVERYTHING in the scene is visible. Best of all, you don’t waste any time fiddling with settings.

            This is obviously an extreme scenario, but variations of it occur often. I was recently looking after some children and one ran at me to tackle me. I had enough time to pull out my phone, snap a picture, and put it back in my pocket before I was attacked by the adorable child. Boom, memory captured.

          2. >“record what’s presented” use case

            Information here doesn’t mean “documentation”, as in taking a picture as proof of something.

            It’s kinda like a hissing analog tape versus a CD with compressed audio. The tape contains more information although it has noise artifacts, versus the CD which doesn’t contain the noise but lacks the information because it’s been filtered to “sound good” by some psycho-acoustic parameters. The tape can be filtered to sound “just as good” by a process which discards information and “blurs the details” while adding some psycho-acoustic artifacts and effects, but what you’d really want is a non-compressed CD with all the information and none of the noise.

            However, if your camera is only equivalent to the hissy wobbly analog tape recorder, you can never achieve that. You can only emulate it by things like image stacking, which in itself is no longer “what is there” but re-imagining the picture by several slightly different versions of it from different points in time. The camera “invents” data instead of recording it.

            Recording what is present generally implies that the recording device and media are “transparent”: they do not add or subtract their own effect. In the ideal case, they simply take what is there and make a note of it perfectly as it is, not trying to add a point of view to it – like bringing up the shadows or suppressing highlights.

            Such things are then left to the photographer and the artist to decide as they “develop” the picture for viewing – but it requires that the camera has the dynamic range to capture what’s in the shadows and highlights. Computational methods can help there, but mostly it’s just used to make it “look good” which does the opposite.

          3. Or, imagine two cameras on the left and right of the subject. You take two pictures simultaneously and then make the CPU calculate a new picture from the point of view of a virtual camera in the middle.

            That new picture was never the present. It never happened, and the information within does not correspond to the object or the situation because it’s completely made up. Neither of the cameras recorded that image. You can’t trace back the information to say that photon A hit photosensor B which caused an electrical signal C which got recorded as data bit D.

            The question then becomes whether it’s at least a good representation of the original subject in the same sense as how a painter could paint a good realistic portrait – but you can never pretend the picture has anything to do with what was really there – it’s a computer rendering of an imaginary subject rather than a photograph.

    1. It’s NOT HDR. It’s just the opposite: LDR. Taking multiple exposures to capture wide dynamic range and then COMPRESSING that range into a single picture is LDR. It’s sad to see anybody making this mistake (in practice or terminology) in 2022.

  2. Nothing new. More automation means less control. My dishwasher decides how long to wash by how dirty the dishes are. My washing machine decides how much water to use by how big the load is. My car decides when to shift. I don’t own an iPhone, but I would hope there is a way to disable this feature.

  3. “Newer iPhone Pro cameras are attempting to do HDR, and much more, with each shot, whether the user wants it or not.”

    There’s the problem, right there at the end of that sentence. And it’s arguably the problem with Apple devices in general.

  4. iPhones do what Apple wants them to do, not the end user. As a developer, I can confirm that trying to make them do anything else, at least for normal users, is an exercise in futility. This is hardly surprising.

  5. I’ve been working recently with two people publishing photos made with their iPhones, while I was shooting in RAW format with a camera (Canon 60D) and a phone (an old LG G5). I still don’t understand why Apple doesn’t let iPhone users take RAW pictures: the average, automatic photo from an iPhone is way better than the average photo taken with auto mode on my G5 (and even on my 60D), but whenever you want to achieve something different from the usual HDR photo, it’s very limiting. I could take very good shots from the RAW files, even though it took a bit of editing time. Enabling RAW photos and manual mode on iPhones would really make them the best phones for photos, both for users who don’t have the time/skills for editing, and for pro users who could achieve much more with their expensive phones.

      1. “Apple ProRAW combines the information of a standard RAW format along with iPhone image processing”

        WTH does that mean? It’s raw or it’s processed; it can’t be both. And ugh, Apple, it’s “raw,” not “RAW.” It’s not an acronym. Steve Jobs used to be personally blamed for Apple’s misspellings (like “90’s Music” in iTunes), but who’s to blame for new ones like this?

  6. There are two “versions” of HDR. The first attempts to record the dynamic variation using multiple exposures on a sensor which has limited dynamic range. The other attempts to display such information on a device with a limited dynamic range, such as an iPhone screen.

  7. i think it’s good to filter the picture until it looks good, and bad to do it until it looks bad :)

    i recently saw an example — unfortunately i don’t have it handy — where an iphone camera was doing the effect to make it look like the foreground is in focus and the background is intentionally blurred. and there was an object that was in the same focal plane as the foreground that happened to have a rectangle drawn on it. the software decided to key on the edge of that rectangle, so half of the object was “foreground” and half was “background”. even though the whole object was the same distance as the actual target of the photograph. you could look at the photo and not notice it had happened and even so it would give a something’s-not-quite-right feeling. a lot of filters are simply bad — they included it even though even superficial testing reveals that it creates an uncomfortable feeling in the viewer a lot of the time.

    everyone does it but of course apple looks worst when they do it because they make such a big noise about not releasing half-baked bug-features.

    i just wish i could get a camera that lets a person with shaky hands take decent indoor photos. my nexus 4 and nexus 5x could do it but no phone before or since (to be fair, i tend to buy cheap phones)

  8. I hadn’t expected this discussion to move in this direction, but here it is.

    If you want RAW files, you go with a Google Pixel. A pretty simple answer. You can grumble all you want, but here it is:

    Can you shoot and get RAW files with an iPhone? – no
    Can you shoot and get RAW files with an Android Pixel? – yes

    If the facts change, let me know.

    1. Actually I was just believing what was said in some posts above. Apparently it is possible to enable something called ProRAW on a recent iPhone and do RAW capture. I don’t know all the details of this and whether there is double talk and smoke/mirrors involved, but there you go. Others with experience should comment further.

    2. Actually, it gets ugly. Apparently ProRAW is only available on “Pro”-designated iPhones, in particular the 12 Pro and 13 Pro. This is an all too typical stance of restricting software features that could just as well be included in the non-Pro phones. More mud in Apple’s eye.

      1. Well, it’s the same reason why Sony took away RAW files in the compact RX cameras – so people would be nudged to buy the DSLRs.

        It’s the general trend of market segmentation, where you remove functions and features that are “too good” for the consumer model because they compete with the higher priced models.

        1. Though on the other hand, there’s a different expectation of what the RAW file even does. In a real camera, you’d expect it to give whatever the sensor outputs, which you then “develop”, but since the RX line of cameras is already processing the image like it was a smartphone with stacking and noise reduction, auto-bracketing, whatever, you no longer have a “raw” output that was worth anything, that you could do anything with.

          If you’re using the processor to pull all the tricks, you may as well go straight to JPEG because there’s nothing left for you to do about it. Depending on your point of view, the image is already as good as it gets, or it’s been mangled useless for further adjustment.

          1. This.
            I have a Pixel 6 pro and this has been my general experience – shooting in RAW does not really give you any benefit, as it robs you of all the computational photography that makes a mediocre sensor produce a usable image. You get your artistic control, but the image looks far worse (Note: the iPhone’s proRAW thing is supposed to be the best of both worlds, but I haven’t had any experience with it)

            Going back to RAW photography on a proper DSLR always feels like magic in comparison, because I can extract information that I didn’t even think was there.

            Note that I still love shooting with the Pixel, because it gets the job done. I shoot to capture memories, and this camera does it in a way that isn’t unpleasant to look at.

    3. The Adobe Lightroom app allows you to shoot and get RAW files with an iPhone, I imagine it does with Android phones too.

      The real problem is the iPhone Camera app has removed this ability on all but a few high-end iPhones.

  9. Any discussion of phone camera augmentation should reference [vas3k]’s article on “Computational Photography” from a few years ago:
    https://vas3k.com/blog/computational_photography/

    Phone cameras were already using heavy processing just to compensate for tiny fixed lenses and still get decent-looking photos. Customers didn’t want crappy photos, they wanted great ones “automatically” without having to do any special steps. But once the designers started down that path…

    “After a while, the camera will start to replace the grass with greener one, your friends with better ones, and boobs with bigger ones. Or something like that. A brave new world…. Photos of the ‘objective reality’ will seem as boring as your great-grandmother’s pictures on the chair. They won’t die but become something like paper books or vinyl records — a passion of enthusiasts, who see a special deep meaning in it. ‘Who cares of setting up the lighting and composition when my phone can do the same?’. That’s our future. Sorry.”

    1. After a while, it will automatically delete people from the image as they lose popularity. Political disagreement? Poof, that disgraced fellow erased from the image (and history) with the touch of your finger!

  10. ‘The point is, cameras shouldn’t alter reality’. The point is, a camera cannot present ‘unaltered’ reality, as there is no such thing as an ‘objective picture’. Our eyes (and brain, of course) heavily interpret what they see, so a picture ‘exactly as I see it’ is already quite altered reality. A camera that completely disregards our system of vision would produce pictures that have very little to do with how we would view the scene. What we can and should discuss is how cameras should alter reality to provide the result most acceptable to those involved.

    1. Yes! I have gone around and around with some people about this. Certain people want to believe that a camera can do something they want to describe as an “accurate rendition of reality” or some such nonsense. Once again, “art is what you like”. You should be asking if your eyes give an accurate rendition of reality.

      1. It depends on what you use the camera for.

        Do you want nice pictures, or do you want the image to measure something in the form of a picture, such as for reproduction? If, for example, you took a picture of the Mona Lisa for the purpose of making a poster out of it, you wouldn’t want the camera to creatively “re-interpret” the colors.

        1. In fact, speaking of paintings, there are hyperspectral cameras that do give an accurate rendition of reality so far as possible regardless of how you would view it. The value is that your interpretation changes according to your moods while the objective information stays the same.

          The subjective experience can be replicated out of the objective facts, while the objective facts can never be recovered from the subjective interpretation as the latter can be caused by many things. In this sense, the best camera records what is – not just what it looks like to you.

    2. The job of the camera is to record an image.

      The job of the display, or the printer, is to make it pleasing to the eye.

      These are two separate functions. The camera doesn’t determine what the picture looks like – it merely records the information necessary for such reproduction. This means a camera should record whatever is presented to it as accurately as possible, rather than interpret the image according to how it will look afterwards on a particular display device, such as an iPhone screen.

      This difference in function is now blurred because the camera and the display device are assimilated together in the smartphone. People now think that the point of photography is simply to make “art”.

      1. For the point of creating art, it’s better to have objective information that you can endlessly re-interpret as it pleases you, rather than having the camera manufacturer force a point of view on it and destroy the original in the process.

        1. Ansel Adams said (para): ‘The negative is the score, the print is the performance.’

          Replace negative with raw image file for modern cameras. If your camera doesn’t support raw, it’s just a modern fixed focus 110.

          Nobody should expect a computer to do ‘art’.
          How do you spec ‘art’? ‘I’ll take one art please!’

          Modern art is just a Manhattan type circle jerk anyhow.
          Not what you know, who you blow etc.

          I blame Warhol and those who didn’t (laugh at/kick square in the nuts) the first dude claiming a urinal was ‘art’.

  11. When I see great photos, I look at them in two ways. I see a great shot, but then, part two is wondering about the processing, etc.

    I would love to just see unaltered photos sometimes. My 12-year-old photography club self would smile.

  12. My snarky comment in this context is always: you don’t need a photo sensor on your phone. With GPS, compass, IMU and a bit of sensor fusion, your phone knows what it is pointing at.

    Google street view knows how it should look.

    Problem solved ;-P

  13. A lot of pictures are referred to as HDR solely based on the look, despite having nothing in common with true HDR; at best they are low dynamic range pictures with a filter. Unless both the image file and the display are capable of containing/displaying the added bit depth achieved by the multiple exposures, you are not actually looking at HDR – just a few images mashed together to give the completely unnatural look that many now mislabel as HDR.

    1. Kinda false? What people tend to refer to as HDR now are images rendered from an HDR data set down to an SDR image in a way where local contrast is emphasized instead of brightness linearity or global contrast. It’s more technically called tone mapping, but an HDR image is one of the steps in the process.
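
      For what that rendering step looks like in its simplest form, here is a rough sketch of a global Reinhard-style tone map that squeezes linear HDR radiance into a displayable 0–1 range. The key value and luminance weights are textbook defaults, not anything a particular phone is known to use.

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18):
    """Global Reinhard operator: map unbounded linear HDR values into [0, 1).

    hdr: float array of shape (H, W, 3) holding linear scene radiance.
    key: target mid-gray; larger values give a brighter result.
    """
    # Luminance from linear RGB (Rec. 709 weights).
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    # Scale the frame so its log-average luminance lands on the chosen key.
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))
    scaled = (key / log_avg) * lum
    # Compressive curve: bright values approach 1, shadows stay near-linear.
    mapped = scaled / (1.0 + scaled)
    # Apply the per-pixel luminance ratio to all three channels.
    ratio = mapped / (lum + 1e-6)
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)
```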

  14. One of the things that Apple is doing is grain reduction. And they do that very successfully.

    They must be doing some AI tricks, though, because when you shoot with low enough light level, it can infer structures into your photo that didn’t exist. It’s like a lightweight version of Google’s deep dream experiments.

    It’s kinda cool. Maybe a little abstract / painterly. I want to see someone exploit it on purpose.

    It’s not, however, an effect I’d like on my camera at all times. (I shoot a real camera when I shoot photos, though, so I’m a bit of a control freak that way.)

  15. Quick poll: how many people making comments took the time to look at the user interface of the Camera app on an iPhone?

    The one I’m looking at right now has a little oval in the upper right corner labeled ‘HDR’ that turns HDR on and off. There’s also a checkbox in the preferences that turns ‘Smart HDR’ on or off.

    An additional point that seems to have escaped notice so far is the fact that there’s no such thing as ‘a RAW format’. There are proprietary/undocumented ones, but no standard. The closest thing is ISO 12234-2, aka TIFF/EP… and imagine my surprise at discovering the image data in a non-HDR iPhone HEIC file is a TIFF.

    So the dominant theme so far seems to be “I want to complain about the default settings of an app I don’t know on a device I consider myself too good to own. I’ll also complain that the device fails to support a feature that doesn’t actually exist.”

    1. There is DNG, which is based on TIFF/EP and is an open and non-encumbered specification, but the open source folks of course resist making it a standard because it’s from Adobe instead of from the community.

      “It wasn’t done properly, do it again.” – said the chief engineer who came to work late one day and saw the problem had already been solved in a satisfactory manner.

  16. HDR is stupid, so if you combine it with stupid users you get a hideous result. However, in the right hands and used in a minimalist way, it can enhance an image greatly as part of an overall raw-to-presentation pipeline. So if you are serious, always shoot raw, then process the final image later. A simple compromise is to save both forms at the same time and even let the user slide a fader between the two to indicate their personal preference. A particularly smart system would learn what the user’s preferences were and anticipate them with “familiar” image subjects. That example HDR image above is about as ugly as you can get; I could get a better result in GIMP just by pulling the colour space around on the LDR “original” – which, BTW, is not an original and shows evidence of being filtered to enhance the warm colours. :-) See the following for a quick example of what I mean, which would have been far nicer if I had the raw data to work with! https://imgur.com/a/3Qkxqmq
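
    (For anyone wondering what “pulling the colour space around” boils down to in practice, here is a crude sketch: an S-curve on the tones plus a saturation push. The curve shape and factors are made-up illustrations, not the edit in the linked image.)

```python
import numpy as np

def pull_colour_space(img, contrast=1.5, saturation=1.3):
    """Crude manual edit on an SDR image: steepen the tone curve, boost colour.

    img: float RGB array in [0, 1].
    contrast: steepness of an S-curve applied around mid-gray.
    saturation: how far each pixel is pushed away from its own gray value.
    """
    img = np.clip(np.asarray(img, dtype=np.float64), 0.0, 1.0)
    # S-curve: remap tones around 0.5 with a smooth tanh curve (0 -> 0, 1 -> 1).
    curved = 0.5 + np.tanh(contrast * (img - 0.5)) / (2 * np.tanh(contrast * 0.5))
    # Saturation: push channels away from the per-pixel gray (mean) value.
    gray = curved.mean(axis=-1, keepdims=True)
    return np.clip(gray + saturation * (curved - gray), 0.0, 1.0)
```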

  17. I’ve been experimenting today with Samsung’s new Ultra phone and was blown away by the 108-megapixel mode (and with “detail enhance mode” too). It captures so much detail, and you can zoom far into the picture without losing detail; it’s the closest I’ve ever seen to my Canon 6D DSLR.

  18. Currently using an iPhone, and it’s not really the AI enhancing features that I loved from the recent updates. It’s the built in QR code scanner, and the ability to copy text from the photos using the built in camera app! I know I know there’s always a (3rd party) app for that, but this just allows me to free up yet another (probably) one-time-use app.

      1. While I don’t want to start an Android vs. iOS war, it’s true that iOS has been lacking in features compared to Android. It’s just those niche use cases where I wish iOS had this or that, but for how I use my phone on a daily basis, iOS is surprisingly enough for what I do. If I need to do more, I just turn to my PC anyway.

  19. While I’m always fascinated with this topic, and the article does a good job covering its various aspects, I do think that the issue of HDR is an interesting, but temporary, point of conflict between computational and more “objective” photography.

    For me, the Dual-ISO feature enabled on Canon cameras via Magic Lantern was a huge turning point for my photography. No longer being forced to decide between crushing shadows or blowing out highlights did wonders for my appreciation of color balance in images.

    The way the feature works is that it alternates the ISO on every horizontal line of pixels, gathering color information it can then share with its neighbors. The HDR image is gathered in a single shot, with no temporal fuzziness about when/where the photons arrived at the sensor – a comfort if you’re not interested in getting strange artifacts in your images from merging separate exposures. And best of all, the resulting images did not have the weird radioactive HDR look. They could be edited easily in Lightroom and tweaked to have as much or as little contrast as you like, without any intense tone-mapping.

    I see no reason why phones couldn’t use this same methodology – a rough sketch of the line-interleave merge is below.
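
    As a very rough illustration (the even/odd row pattern, the gain value, and the clip threshold below are made-up assumptions, not Magic Lantern’s actual algorithm), the per-line merge could look something like this:

```python
import numpy as np

def merge_dual_iso(raw, gain=4.0):
    """Merge a frame whose even rows were exposed at base ISO and whose odd
    rows were exposed at base ISO * gain (hypothetical interleave pattern).

    raw: 2D float array of linear sensor values in [0, 1].
    High-ISO rows (cleaner shadows) are rescaled to base-ISO units, and
    pixels where the high-ISO row clipped fall back to the neighboring
    low-ISO row, yielding an HDR estimate at half the vertical resolution.
    """
    low = raw[0::2, :]                 # base-ISO lines: keep the highlights
    high = raw[1::2, :] / gain         # high-ISO lines, rescaled to base units
    clipped = raw[1::2, :] >= 0.99     # high-ISO pixels that blew out
    rows = min(low.shape[0], high.shape[0])
    merged = np.where(clipped[:rows], low[:rows], high[:rows])
    # Crude upsample back to full height by repeating each merged line.
    return np.repeat(merged, 2, axis=0)
```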

  20. Recently, a famous federal case attempted to use cell phone video to implicate the defendant. Later, a second and third video debunked the cell phone vid. Several amateur Youtube experts were able to recreate the illusion using stock camera settings. It will be interesting as these automatic image manipulators become commonplace and more cellphone video shows up in court. Wouldn’t want people to be convicted due to manipulated evidence.

    1. The prosecutor was flat busted using downres video trying to railroad that kid in Wisconsin.

      Turns out he also submitted screen caps from his laptop revealing video scaling software installed. Claimed ‘I know nothing, I see nothing. What is video scaling?’.
      He should be disbarred, but won’t be.
      Cops got nothing on god damn lawyers. Thin shit colored line…

      1. Umm, Rittenhouse? Where the heck do you get your news?!

        The prosecution was using an iPad to present a video (drone footage) to the court & the defence came up with the wild notion that pinch-to-zoom uses artificial intelligence “to create what they believe is happening”. The judge put the burden of proof on the prosecution to disprove the allegation, so they instead switched to enlarging stills from a laptop. And once again, the prosecution alleged that enlarging an image will “create structures, objects, people, handguns”.

        The prosecution was contesting Rittenhouse’s assertion that he hadn’t been pointing his gun at people in the street, contrary to eyewitness accounts & what was evidenced in the video.

        Just to emphasize, this is the level of argument that was used by the defence & accepted by the judge:
        —–
        “iPads, which are made by Apple, have artificial intelligence in them that allow things to be viewed through three-dimensions and logarithms,” the defense insisted. “It uses artificial intelligence, or their logarithms, to create what they believe is happening. So this isn’t actually enhanced video, this is Apple’s iPad programming creating what it thinks is there, not what necessarily is there,” they added.
        —–

        1. The prosecutor first downrezed the video then scaled it up to show what he wanted. While not giving a copy of the full resolution video to the defense. The last part is why it (both the manipulated video and the original, which didn’t show what you claim) was flat excluded by the judge.

          Ambulance chaser was so stupid, he was caught (in screencaps) with software to do this on his laptop. It could not have happened by being texted etc (the claims made to cover the shyster’s shenanigans, but completely wrong resolutions in evidence).

          The jury didn’t believe your version of the story. Nobody not completely deranged did. It was prosecutorial misconduct, writ large.

          Best evidence (though not admissible) is that the full-rez video shows he maintained decent barrel discipline until it was time to shoot the bastards in self-defense. Then he did.
          Case is closed, argue with the jury.

  21. > “preserve what the human eye sees in a single moment in space-time”

    Except, our visual system DOESN’T work like this. And no, you can’t separate out just the eye – and even if you could, the eye still doesn’t work like that. Old-school cameras worked like this – that’s what you’re really comparing to, and that’s a valid comparison.

    But our visual system works a lot more like recent advancements in temporal-based computational photography. It not only does things like combining data from micromovements of the eyes over time (saccades), but also fills in parts of our vision based on what our mind expects to see, and throws away a lot of data. Some of that is compensating for the physical limitations of our eyes (like the blind spot at the optic disc, where there are no photoreceptors), and some of it is optimisation for conscious perception.

    > “It’s arguably more software than hardware”

    Which is exactly how I’d describe our visual system.

    I’m not against single-point-in-time non-computational photography – it’s a valid way of capturing things that can produce beautiful results. But to argue that that is the way we see the world, and/or it’s somehow better or more pure because of that, is wrongly based on a notion that’s almost a century out of date.

    I don’t think it’s even valid to argue that either approach to photography is better. Most photography is art, which is inherently subjective. At best you could argue they’re slightly different mediums, one of which you may subjectively like more than the other.

    Admittedly, not all photography is for artistic purposes – then the argument for the method becomes more objective. But you still can’t use our visual system as a reference point – you can study isolated details that are objective, but as a whole, each of our perceptions is inherently subjective.

    (Context: My academic background is in both Computer Science and Psychology. And although this wasn’t officially my area of focus in either discipline, I spent a lot of my spare time in the psychology dept’s Vision Lab.)

    1. Indeed, though the AI messing around has serious limits, and has created some very amusing and obvious mistakes from time to time, as the machines are really quite stupid – so it’s pretty fair to say the output of these AI-driven smartphone camera arrays is always noticeably not a true representation of what you were seeing, largely because it has to invent too much.

      Because your eye always changes focus based on its target, the DSLR-style adjustable-lens camera, with or without focus stacking, wins: the point of interest is truly in focus, which is much more like how we perceive the world, if not exactly true to how the eyes work – the important detail we focus on is sharp. Adding focus stacking and good aperture, ISO (etc.) selection gets you a small number of images that, combined, make a whole image much more true to what we would have seen looking at that scene, because the whole image was in focus (better than the small camera anyway) at the time of capture. A smartphone camera can’t really do that – it’s trying to fake being in focus and invent all the details it never captured in the first place!

      When it’s all about art, it doesn’t matter at all; whatever tool does what you need is good enough. But in many ways the old camera style produces an image much truer to our perception, as whatever detail is supposed to be the target (or the whole image, with focus stacking) isn’t full of AI guesswork inventing the missing details and smoothing to blend the additions – it’s all the real details as we could/would have seen them.


