Computational Photography and the Pixel
Sam Byford (via Nick Heer):
Clearly, this is by far the most competitive Google has ever been in mobile photography. But the Pixel phones, on paper, don’t have cutting-edge hardware, relying on an f/2.0 lens without optical image stabilization. Instead, in typical Google fashion, the company has turned to complex software smarts to power the Pixel camera.
[…]
This no-compromise approach to HDR photography has partly been made possible by new hardware. The Hexagon digital signal processor in Qualcomm’s Snapdragon 821 chip gives Google the bandwidth to capture RAW imagery with zero shutter lag from a continuous stream that starts as soon as you open the app. “The moment you press the shutter it’s not actually taking a shot — it already took the shot,” says Levoy. “It took lots of shots! What happens when you press the shutter button is it just marks the time when you pressed it, uses the images it’s already captured, and combines them together.”
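The “it already took the shot” trick is essentially a ring buffer: the camera streams frames continuously while the viewfinder is open, and pressing the shutter only marks a timestamp in that stream. A minimal Swift sketch of the idea, with a hypothetical Frame type standing in for real RAW buffers:

```swift
import Foundation

/// Stand-in for a captured RAW frame; a real pipeline would hold sensor data.
struct Frame {
    let timestamp: TimeInterval
    let pixels: [UInt16]
}

/// Fixed-capacity buffer of the most recent frames from the preview stream.
final class FrameRingBuffer {
    private var frames: [Frame] = []
    private let capacity: Int

    init(capacity: Int) {
        self.capacity = capacity
    }

    /// Called for every frame the sensor produces while the viewfinder is open.
    func append(_ frame: Frame) {
        frames.append(frame)
        if frames.count > capacity {
            frames.removeFirst()
        }
    }

    /// “Pressing the shutter” just marks a time and returns frames
    /// that were already captured at or before that moment.
    func framesForShutterPress(at time: TimeInterval, count: Int) -> [Frame] {
        let eligible = frames.filter { $0.timestamp <= time }
        return Array(eligible.suffix(count))
    }
}
```

The press itself does no capture work, which is why there is effectively zero shutter lag; the only cost is the memory needed to keep the last few frames around.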
[…]
The traditional way to produce an HDR image is to bracket: you take the same image multiple times while exposing for different parts of the scene, which lets you merge the shots together to create a final photograph where nothing is too blown-out or noisy. Google’s method is very different — HDR+ also takes multiple images at once, but they’re all underexposed. This preserves highlights, but what about the noise in the shadows? Just leave it to math.
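The merge the article alludes to can be as simple as per-pixel averaging of the aligned, identically underexposed frames: the highlights were never clipped, and the random noise in the shadows partially cancels. A toy Swift sketch, assuming alignment has already been done and each frame is an array of linear sensor values:

```swift
/// Toy merge of already-aligned, identically underexposed frames:
/// per-pixel averaging suppresses random noise without touching highlights.
func mergeFrames(_ frames: [[Double]]) -> [Double] {
    guard let first = frames.first else { return [] }
    var sum = [Double](repeating: 0, count: first.count)
    for frame in frames {
        for i in frame.indices {
            sum[i] += frame[i]
        }
    }
    let n = Double(frames.count)
    return sum.map { $0 / n }
}

// Four noisy, underexposed "frames" of the same two pixels:
let burst = [[0.10, 0.52], [0.12, 0.48], [0.09, 0.51], [0.11, 0.49]]
let merged = mergeFrames(burst)   // ≈ [0.105, 0.50], less noisy than any single frame
```

Shadows can then be brightened with a tone curve; because the averaged values carry much less noise, the boost amplifies far less grain.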
[…]
Google also claims that, counterintuitively, underexposing each HDR shot actually frees the camera up to produce better low-light results. “Because we can denoise very well by taking multiple images and aligning them, we can afford to keep the colors saturated in low light,” says Levoy.
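Underneath “we can denoise very well by taking multiple images” is the ordinary statistics of averaging: if a pixel’s true value is S and each frame adds independent noise with standard deviation σ, then averaging N aligned frames keeps S but shrinks the noise to σ/√N, so the signal-to-noise ratio grows as √N:

\[
\mathrm{SNR}_N \;=\; \frac{S}{\sigma/\sqrt{N}} \;=\; \sqrt{N}\,\frac{S}{\sigma}
\]

Averaging nine frames, for example, roughly triples the SNR, which is why the shadows can be brightened and the colors kept saturated without the grain that a single boosted underexposed frame would show.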
Whereas iOS won’t let me always take photos using HDR (the non-lossy choice, since the phone also saves the non-HDR version), Google enables HDR+ by default and intends for you to leave it on.