iPhone 18 Pro Variable Camera Aperture Rumor
Hartley Charlton (Ming-Chi Kuo):
Apple’s iPhone 18 Pro models will feature an upgraded main rear camera with a variable aperture for the first time, according to Apple analyst Ming-Chi Kuo.
[…]
The iPhone 14 Pro, 15 Pro, and 16 Pro’s main cameras feature a fixed aperture of ƒ/1.78. A variable aperture on future iPhone models would allow the main camera to control the amount of light entering the lens, allowing it to adjust to different lighting conditions. It also would provide more control over depth of field, enabling sharper focus on subjects or smoother background blur.
I continue to find the iPhone 15 Pro camera to be a regression for certain types of photos, and I think it’s because of the shallower depth of field. I often photograph a person sitting at a table with something in front of them. With previous iPhones, I didn’t have to think about it. I could just take a photo and it would be OK: the person’s face reasonably in focus and the document/cake/game also easily visible. Now, I get a lot of photos where either the object or the person (it does not always prioritize the face) is blurry. It’s not really evident in the viewfinder that this is going to happen. If I remember, I can use manual focus to prioritize one or the other, and whichever I pick will look much better than with previous iPhones, but it’s still not really what I want because the secondary subject ends up blurry.
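For a rough sense of the tradeoff: the light reaching the sensor scales with the inverse square of the ƒ-number, so stopping down from ƒ/1.78 to ƒ/4 costs about (4 ÷ 1.78)² ≈ 5× the light, while the zone of acceptable focus widens considerably. The sketch below uses the standard thin-lens depth-of-field approximation with made-up but plausible numbers (they are not measured iPhone specs) to show the effect:

```swift
// Standard thin-lens depth-of-field approximation. All distances are in mm.
// The inputs used below are illustrative guesses, not actual iPhone specs.
func depthOfField(focalLength f: Double, fNumber n: Double,
                  subjectDistance s: Double, circleOfConfusion c: Double)
    -> (near: Double, far: Double) {
    let hyperfocal = f * f / (n * c) + f
    let near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    let far: Double
    if s < hyperfocal {
        far = s * (hyperfocal - f) / (hyperfocal - s)
    } else {
        far = .infinity
    }
    return (near, far)
}

// A ~7 mm lens focused at 0.5 m, with a small-sensor circle of confusion
// of about 0.004 mm (both values are assumptions for illustration).
let wideOpen    = depthOfField(focalLength: 7, fNumber: 1.78,
                               subjectDistance: 500, circleOfConfusion: 0.004)
let stoppedDown = depthOfField(focalLength: 7, fNumber: 4,
                               subjectDistance: 500, circleOfConfusion: 0.004)
print("f/1.78:", wideOpen)    // ≈ (near: 467, far: 539): roughly 7 cm in focus
print("f/4:   ", stoppedDown) // ≈ (near: 431, far: 596): roughly 17 cm in focus
```

With those illustrative numbers, the in-focus zone at ƒ/1.78 is only about 7 cm deep when focused at half a meter, versus roughly 17 cm at ƒ/4, which is exactly the person-plus-thing-on-the-table problem.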
2 Comments
I completely agree with the problem you describe, and frequently complain about this to my wife. One of the iPhone’s great qualities prior to the 15 Pro (and maybe the 14 Pro) was its effectively infinite depth of field. In certain use cases, that made the iPhone the best tool for the photo.
But now I’m constantly struggling with depth of field, and the images I get aren’t good enough to compare with what I get from my Canon.
So I’m left with a camera that is less useful in exactly the situation where it was previously most useful to me professionally.
It’d be great to be able to select the aperture, especially if they could computationally make up for the light lost at smaller apertures. They could probably do that by capturing multiple photos, as they do now, but with some of the frames captured at different focus points at the widest aperture (lowest ƒ-number) to gather the most light, then blending the data together into one frame that achieves the look the user wants, with simulated depth of field.
I’d love a feature like that.
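For what it’s worth, the blend step being described is essentially focus stacking. Here’s a minimal sketch of that core idea in Swift, assuming the frames are already aligned, exposure-matched, and converted to grayscale arrays; whatever Apple’s actual pipeline does would be far more involved than this:

```swift
// Approximate local sharpness with the absolute Laplacian response at each
// interior pixel; border pixels are left at zero for simplicity.
func sharpnessMap(_ image: [[Double]]) -> [[Double]] {
    let h = image.count, w = image[0].count
    var out = Array(repeating: Array(repeating: 0.0, count: w), count: h)
    for y in 1..<(h - 1) {
        for x in 1..<(w - 1) {
            let lap = image[y - 1][x] + image[y + 1][x] +
                      image[y][x - 1] + image[y][x + 1] -
                      4 * image[y][x]
            out[y][x] = abs(lap)
        }
    }
    return out
}

// Blend an aligned focus stack by taking, at every pixel, the value from
// the frame that is locally sharpest there.
func blendFocusStack(_ frames: [[[Double]]]) -> [[Double]] {
    precondition(!frames.isEmpty, "need at least one frame")
    let maps = frames.map(sharpnessMap)
    let h = frames[0].count, w = frames[0][0].count
    var out = frames[0]
    for y in 0..<h {
        for x in 0..<w {
            var best = 0
            for i in 1..<frames.count where maps[i][y][x] > maps[best][y][x] {
                best = i
            }
            out[y][x] = frames[best][y][x]
        }
    }
    return out
}
```

A real implementation would smooth the per-pixel winner map and feather the seams; a hard "sharpest frame wins" choice like this produces visible artifacts at depth boundaries. But it shows why capturing several focus points gives the software the data it needs to choose the depth of field after the fact.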
Camera enthusiasts celebrate the blurred background, or “bokeh,” that big sensors produce as the pinnacle of photographic art. Personally, I often prefer a deeper depth of field, with more sharpness throughout the image.