Tuesday, September 24, 2024

What Is a Photo?

Nilay Patel:

It’s also notable what isn’t present on the iPhone this year: there’s no generative AI wackiness at all. There’s no video boost or face-swapping, no adding yourself to group photos, no drawing to add stuff with AI like on the Pixel or Galaxy phones — really, none of it. I asked Apple’s VP of camera software engineering Jon McCormack about Google’s view that the Pixel camera now captures “memories” instead of photos, and he told me that Apple has a strong point of view about what a photograph is — that it’s something that actually happened. It was a long and thoughtful answer, so I’m just going to print the whole thing[…]

John Gruber has quoted the relevant section, and it’s getting rave reviews. Maybe I’m just too dumb to see the profundity, but I don’t think there’s any there there. These are pleasant sounding words along the lines of “music is in our DNA,” but what is the connection to what the Camera app actually does? Reports are that the photos by default look more processed than before. And Apple, like Samsung and Google, has been including features for years that make the photos not what actually happened.

Federico Viticci:

“Something that really, actually happened” is a great baseline compared to Samsung’s nihilistic definition (nothing is real) and Google’s relativistic one (everyone has their own memories). […] But I have to wonder how malleable that definition will retroactively become to make room for Clean Up and future generative features of Apple Intelligence.

What McCormack said is that a photo is a “celebration of something that really, actually happened,” not that the image in the photo actually happened.

Google lets you celebrate a moment where two people were actually standing next to each other by creating such an image from two separate captures where they were standing alone. Apple lets you take a photo of multiple people and objects and remove some of them. What is the distinction here that amounts to a strong point of view? It just seems like a difference in degree. Arguably, the Google example is more truthful in that it’s helping you recreate an actual moment, whereas the Apple one is letting you tune it up to be more what you remembered or wished for than the reality.

If we were talking about this last year, people would say that there’s a big philosophical difference because—although they both combine multiple exposures, add fake bokeh, and use AI to adjust colors and focus, etc.—Android phones let you remove objects and iPhones don’t. But now Apple is adding that, too. If there’s a bright line distinction, I think that was it. Apple crossed it, and I don’t think they’ll stop there. This is fine. It’s a popular feature, and I know people who were considering switching to Android because of it.

Nick Heer:

In my testing of Clean Up on an image on the latest iOS 18.1 beta build, Apple adds EXIF tags to the image to mark it as being edited with generative A.I. tools. EXIF tags can be erased, though, and I do not see any other indicators.
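As a rough illustration of how fragile those markers are (filenames here are hypothetical, and Pillow is just one of many tools that will do this): re-saving a JPEG without explicitly carrying the EXIF payload forward silently discards it.

    # Sketch only: filenames are hypothetical. Pillow drops EXIF on save
    # unless you explicitly pass it back via the exif= argument.
    from PIL import Image

    img = Image.open("cleaned_up.jpg")      # an image edited with Clean Up
    print(dict(img.getexif()))              # any A.I.-edit tags show up here...

    img.save("stripped.jpg", quality=95)    # ...but not in the re-saved copy
    print(dict(Image.open("stripped.jpg").getexif()))  # prints {}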

Update (2024-09-25): René Fouquet:

1. Given the iPhone’s image processing pipeline — something they proudly speak of at every chance they get — this statement is already demonstrably false. Photos taken on the iPhone definitely do not show something that actually happened. The darker it gets, the more guesswork flows into an iPhone photo. So much guesswork, in fact, that nighttime photos can easily make up things that were never there in reality. I have taken a couple of photos in which leaves rustling in the wind were interpreted as something else entirely.

2. How will they frame it when they eventually add these features? Because let’s be serious for a second here. They didn’t add those features out of some philosophical stance against AI-generated content. That is just corporate bullshit. They are simply way, WAY behind their competition when it comes to this and haven’t gotten around to it (yet), or lack the required competence or whatever. It’ll be fun to read the spins after they inevitably change course.

Update (2024-10-04): The Talk Show:

Nilay Patel returns to the show to consider the iPhones 16.



I'd like to think that a photograph is an accurate depiction of reality, warts & all. Not something prettified by algorithms in any way, in order to make it 'better' for instant consumption on social media.

Programmers should not decide how reality should look, but the results are shown in every smartphone image comparison video on YouTube. Same reality, but shown simultaneously via different phone cameras. The results are shocking, IMHO. I don't want my recollection of the actual reality enhanced, sharpened, filtered, upgraded, or whatnot.

And shooting RAW won't save you: ProRAW and Expert RAW are both AI-processed before being saved; you can watch it happening before your eyes.

Process Zero to the rescue!


@Peter Process Zero does processing, too…it’s just meant to be more minimal/tasteful.


Has Apple walked back the amount of post-processing they do on the iPhone? The last time I remember anyone talking about it, based on the images I saw, what iPhones were generating wouldn't come remotely close to a reasonable definition of a "photograph".

"Photo-referenced Computer-generated imagery" would be a better description.

Fake depth of field, fake lighting effects, munging together multiple exposures to create a single "idealised" image: these are all just as fake as Samsung replacing the moon in their camera photos with a file image.

As for the justification that it's making an image "as you remember it": the mere act of being exposed to the fake image will rewrite your memory of the event to conform to the image you see. That's how memory works. That's why this stuff is so insidious.


> Put simply, it seems Apple’s perspective is to try to accurately capture a scene as it occurred. While images may have taken on a too-processed look for my liking, the intention seems to be to capture light as it was, not simulate a memory which never occurred.

Apple spewing all this philosophical nonsense about capturing reality. This is the same company that wants you to strap heavy goggles on your face and immerse yourself in a world of make believe all day. Okie dokie.


Mac Folklore Radio

> Has Apple walked back the amount of post-processing they do on the iPhone?

Portraits and munging you can turn off, but you still get "oil painting"-levels of noise reduction in low light. On my 16 Pro, flowers taken on a cloudy day look alarmingly flat, posterized, dull. I haven't tried Process Zero yet.

I bought a relatively inexpensive mirrorless camera for use as a webcam. Now I realize it's sort of pocketable, and on a good day I can click the shutter and have the images show up in my iPhone's camera roll (or whatever they call it now). I think that will become my new phone camera. :\

First Apple's software quality dropped off; now the hardware's starting to erode. The butterfly keyboard was a warning shot.

Apple of the 2020s doesn't know how to do anything well apart from rake in all that sweet, sweet subscription revenue. "Tim is not a product guy", etc.


I really don't mind post-processing of images, even the removal of cars, trashcans, etc., but I don't like how most phones are doing it. Too much of everything.

And I can't stand fake background blur.

WRT tagging images that have been altered with AI, I think it's way too late for that. We need the opposite: a certificate for agencies and news outlets that promise not to add or remove stuff from images.


Before photographs, the only way to be sure of something was to go see it for yourself.

With photographs, for the last couple hundred years, if you saw a photo, then you could know for sure that thing existed/was like that. That stopped being the case probably a decade ago (maybe a couple decades ago for state actors).

But we still had video. Now sadly that is gone as well. It's no longer possible to tell from a photo or a video whether something is real or not (certainly not without forensic analysis which would then require you trusting the person doing the analysis).

What we need is something akin to the Secure Enclave: something that cryptographically signs the raw pixels so we can have confirmed images with minimal trust requirements (e.g., trusting that Apple is doing the signing correctly on raw pixels).
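Roughly like this, as a toy Python sketch; in reality the private key would be sealed in hardware, and every name here is made up for illustration:

    # Toy sketch of the idea; not Apple's actual design. Uses the Python
    # "cryptography" package; all names are illustrative.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()   # would live in secure hardware
    public_key = device_key.public_key()        # published for verification

    raw_pixels = b"stand-in for the sensor readout"
    signature = device_key.sign(raw_pixels)     # stored alongside the image

    try:
        # Anyone holding the public key can check the pixels are untouched.
        public_key.verify(signature, raw_pixels)
        print("pixels match the original capture")
    except InvalidSignature:
        print("image was altered after capture")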

Ahh well, the age of certainty is past and we live in a post-truth world now.


As a photographer I appreciate Jon McCormack's statement of Apple's 'intent' but yeah the Magic Eraser is hard to square with that statement.

As a photographer I always wanted to record reality and never took objects out of pictures. But I remember one occasion in particular: a group shot of 30 school kids in front of a newly restored mural / painting at a school, where in each of 7 pictures one child had their eyes closed. I struggled with the decision but decided it was best to take the open eyes from one shot and paste them into another, rather than having one boy ridiculed after it appeared in the newspaper and his family not even wanting a copy of the locally historic moment.

It took a good 45 minutes to an hour, as even in the split second between shots the angle of the eyes had rotated less than a degree, and it had to be matched up exactly to look right.

I still struggled with it but felt that it was ethically the right thing to do. The intent matters.

And of course we all recognise that since the earliest days of photography it has only ever offered a particular version of reality, as we don't generally see things in black and white. You'll see a lot of ghoulishly blurred people in early streetscapes, as the exposure time was so long and not everyone was staying still.

The nature of freezing a moment in time is surreal in itself as we can't do that with our eyes.

It's a slippery slope. I'd prefer that Apple didn't have the magic erasure feature, but it seems to be what a large number of people want, and those people don't care about recording reality; they just want a nice picture. I guess there is no right or wrong, but the danger of overly doctored photographs, and of how easily they are doctored, is that it is already having a profound impact on whether you can believe what you see.

And no, processing an image isn't a digital invention. Darkroom techniques have been used since the early days to bring detail out of shadow areas, but usually with the intent to make a print that was closer to what your eyes saw than was possible with the latitude of film.

viz: https://www.dpreview.com/news/8199212191/dodging-burning-microwaving-a-look-inside-ansel-adams-darkroom


I wonder how many responding to this are actually working photographers? From the moment it was born, almost 200 years ago, photography has been playing with, altering, just generally messing about with the capture of "reality".

There was never an age of certainty beyond our mutual agreement of what was real — the photograph has always been willing and able to lie.

What we really should be talking about is "image making", for that is what this is. The "photograph" is the PHYSICAL manifestation of the capture. The PHOTOGRAPH is the OBJECT created from the image capture, collaged, composited, or manipulated before being committed to a physical output.


Just to clarify: a different child had their eyes closed in each shot, so I had the correct eyes open in another shot to paste in meticulously :)


The last two commenters got it right; no surprise they're also photographers. In the example with the kids, it was the same kids in the same spot just seconds apart. All the real data was there to put it back to the way it really would have been if the right millisecond could have been captured in the first place.

And there's no way to capture actual reality from photons hitting a light-sensitive medium. Aiming to match what the human eye would see and composing a pleasing photograph are two different things. The purpose of the final product matters. If it's for the news, documenting important events, don't edit it at all. If it's to show the relevant information, light editing for clarity is fine, especially with footnotes. If it's for art, go nuts.

Just don't try to pass off fabricated images as reality; I think that's the real line. But I'm that old guy who was complaining about shitty filters in Hipstamatic. Clearly it's the children who are wrong.


Thanks @Bart

And for anyone interested in the debate, it isn't new.

Suggested reading:

On Photography by Susan Sontag (1977)
Ways of Seeing by John Berger (1972)


@Dennis Moser: the fundamental difference for photography has always been that it is about the capture of reflected light over a duration of real time.

While I'm primarily a sculptor by training, I did minor in photography at Art School (darkroom, film, enlargers and all), have exhibited photography in group and solo shows, and have my photographic work in both public and private institutional collections.

Not to brag, but more to state that my position is informed by a pretty thorough understanding of the artform and its history. The battle for the real versus Painting has been a long primary theme of photography, BUT there is a good argument to be made that the primary characteristic of photography is that it is a recording of what was actually happening in front of the lens when the shutter was exposing the image capture substrate, and everything in the image that is not that detracts from how much of the image is a "photograph". Hence I tend toward the description of cellphone auto-processed images as "photo-referenced computer generated imagery". I feel similarly about most post-processing, beyond what can be done with RAW adjustments.

I remember an interesting discussion with a photography major about a feature in Grand Theft Auto Online, which has a camera function, and there were folks going around doing "photography" in the game. My fellow student was adamant it was NOT photography; that the minimum qualification was actual photons reflecting off the real world, and onto a recording medium.

My personal bugbear (and indeed the theoretical basis for my exhibition prints and a paper I wrote) is calling digital prints "limited editions": manufactured scarcity, and the illegitimacy of that as the basis for "value" when digital prints don't carry the inherent risk of catastrophic failure, the natural variability of unique objects, or the consumption of the artist's finite life in their creation that traditional darkroom prints required. But that's getting into some very deep weeds ;)


There have been photographs that have changed history, usually depicting the cruelty of war on children.

Those images have now become very easy to dismiss for those that so wish.


Agreed, Kristoffer.

The wanton damage tech companies are doing to photography as a medium representing reality may end up being their greatest impact on society, if they are not careful.

I believe that police departments / forensics teams use special cameras / settings that embed data in a digital file to (hopefully) prove that a crime scene picture hasn't been altered, but how infallible that is I don't know.

Maybe in the future, real photographers who want to represent truth and have their images believed will have to use special equipment, and all the images produced on smartphones can be happily dismissed as very likely fabricated.

It might end up a boon to real camera manufacturers!


I think a lot of this can easily fall into a scientific versus creativity view on photography. If you strictly see it as photons reflected off real objects and captured onto a medium, then any amount of post processing might be unacceptable. If you want to show your creativity for social media or whatever, then you might be at the other end of the spectrum where every kind of edit you can think of is allowed.

For taking photos, I tend to be close to the former. I want the pics I take to be true to the moment, but I'm OK with some post processing that helps make the capture of that moment the best it can be while still being the truth. Adjusting things like contrast/saturation/sharpness is OK to me. Having that done by the camera with an algorithm isn't much different (to me) than adjusting a RAW image -- you're just accepting the method the algorithm uses for that result.

Most things beyond that are more toward the creative end of the spectrum. At what point does it stop being a photo? That bit about taking a photo of the moon and then having it automatically sub in a stock picture of the moon? To me, that's just wrong.


@DJ I tend to agree with you. But my main interest in writing this post was to discuss what Apple thinks. As best I can tell, McCormack’s position is that Apple isn’t going to do the moon thing, but that’s not saying much.
