Monday, May 25, 2015

Camera and Photos on iOS

Daniel Eggert:

On the iPhone, we can only adjust the ISO and the shutter speed. We can hence trade noise (affected by the ISO) against motion blur/sharpness while maintaining the same level of exposure.

That explains why photos at night often look worse than those taken during the day: At night there’s a lot less light. In order to still have an acceptable shutter speed, the auto exposure will bump up the ISO, probably to the maximum of what the camera allows. And even that may not be enough to achieve enough light, so the auto exposure will also lower the shutter speed. This combination results in more noise in the image, and the image being blurred.
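This ISO/shutter-speed trade-off can be made explicit with iOS 8's manual exposure API. A minimal sketch (error handling elided; the specific duration and ISO values are illustrative, and both are clamped to the active format's supported ranges):

```swift
import AVFoundation

func lockExposure(device: AVCaptureDevice) {
    var error: NSError?
    if device.lockForConfiguration(&error) {
        // Favor a fast shutter (less motion blur) at the cost of a
        // higher ISO (more noise), keeping overall exposure similar.
        let duration = CMTimeMake(1, 250)                // 1/250 s shutter
        let iso = min(800, device.activeFormat.maxISO)   // bump ISO to compensate
        device.setExposureModeCustomWithDuration(duration, ISO: iso) { _ in
            // Called once the settings have been applied to a frame.
        }
        device.unlockForConfiguration()
    }
}
```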

Matteo Caldari:

The AVCaptureSessionPresetPhoto selects the best configuration for the capture of a photo, i.e. it enables the maximum ISO and exposure duration ranges; the phase detection autofocus; and a full resolution, JPEG-compressed still image output.

However, if you need more control, the AVCaptureDeviceFormat class describes the parameters applicable to the device, such as still image resolution, video preview resolution, the type of autofocus system, ISO, and exposure duration limits. Every device supports a set of formats, listed in the AVCaptureDevice.formats property, and the proper format can be set as the activeFormat of the AVCaptureDevice (note that you cannot modify a format).
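Selecting a format by hand might look like the following sketch, which uses the widest ISO range as an (arbitrary, illustrative) selection criterion:

```swift
import AVFoundation

// Pick the device format with the widest ISO range and make it active.
func selectWidestISOFormat(device: AVCaptureDevice) {
    var best: AVCaptureDeviceFormat?
    for format in device.formats as! [AVCaptureDeviceFormat] {
        if best == nil || (format.maxISO - format.minISO) > (best!.maxISO - best!.minISO) {
            best = format
        }
    }
    if let format = best {
        var error: NSError?
        if device.lockForConfiguration(&error) {
            // Formats themselves are immutable; you swap the active one.
            device.activeFormat = format
            device.unlockForConfiguration()
        }
    }
}
```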

[…]

New in iOS 8 is the option to move the lens to a position from 0.0, which focuses on near objects, to 1.0, which focuses on far objects (although that doesn’t mean “infinity”).
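Setting an explicit lens position goes through the locked focus mode. A sketch, assuming `device` is the capture device being configured:

```swift
import AVFoundation

// Lock focus at an explicit lens position: 0.0 is the nearest position,
// 1.0 the farthest the lens can travel (not optical infinity).
func lockLensPosition(device: AVCaptureDevice, position: Float) {
    var error: NSError?
    if device.lockForConfiguration(&error) {
        if device.isFocusModeSupported(.Locked) {
            device.setFocusModeLockedWithLensPosition(position) { timestamp in
                // The lens has physically settled by this frame timestamp.
            }
        }
        device.unlockForConfiguration()
    }
}
```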

[…]

An interesting feature also introduced in iOS 8 is “bracketed capture,” which means taking several photos in succession with different exposure settings. This can be useful when taking a picture in mixed light, for example, by configuring three different exposures with biases at −1, 0, +1, and then merging them with an HDR algorithm.
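The −1/0/+1 bracket described above can be expressed with auto-exposure bracket settings. A sketch, assuming `output` is an AVCaptureStillImageOutput already attached to a running session:

```swift
import AVFoundation

// Capture three auto-exposure brackets at −1, 0, and +1 EV bias.
func captureBracket(output: AVCaptureStillImageOutput) {
    let connection = output.connectionWithMediaType(AVMediaTypeVideo)
    let settings = [-1.0, 0.0, 1.0].map {
        AVCaptureAutoExposureBracketedStillImageSettings
            .autoExposureSettingsWithExposureTargetBias(Float($0))
    }
    output.captureStillImageBracketAsynchronouslyFromConnection(connection,
        withSettingsArray: settings) { sampleBuffer, stillSettings, error in
        // Called once per bracketed frame; hand the frames to an
        // HDR merge step here.
    }
}
```

Calling `prepareToCaptureStillImageBracketFromConnection(_:withSettingsArray:completionHandler:)` beforehand lets the system allocate buffers for the bracket up front.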

Saniul Ahmed:

PHAsset’s representsBurst property is true for assets that are representative of a burst photo sequence (multiple photos taken while the user holds down the shutter). It will also have a burstIdentifier value which can then be used to fetch the rest of the assets in that burst sequence via fetchAssetsWithBurstIdentifier(...).

The user can flag assets within a burst sequence; additionally, the system uses various heuristics to mark potential user picks automatically. This metadata is accessible via PHAsset’s burstSelectionTypes property. This property is a bitmask with three defined constants: .UserPick for assets marked manually by the user, .AutoPick for potential user picks, and .None for unmarked assets.
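Putting those pieces together, fetching a burst sequence and filtering it down to the picks might look like this sketch:

```swift
import Photos

// Given a representative burst asset, return the flagged picks
// (user-marked or system-suggested) from the full sequence.
func burstPicks(representative: PHAsset) -> [PHAsset] {
    var picks = [PHAsset]()
    if representative.representsBurst {
        let options = PHFetchOptions()
        options.includeAllBurstAssets = true   // otherwise only picks are returned
        let burst = PHAsset.fetchAssetsWithBurstIdentifier(
            representative.burstIdentifier, options: options)
        burst.enumerateObjectsUsingBlock { obj, _, _ in
            let asset = obj as! PHAsset
            let types = asset.burstSelectionTypes
            if (types & .UserPick) != nil || (types & .AutoPick) != nil {
                picks.append(asset)
            }
        }
    }
    return picks
}
```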

[…]

First, you need to register a change observer (conforming to the PHPhotoLibraryChangeObserver protocol) with the shared PHPhotoLibrary object using the registerChangeObserver(...) method. The change observer’s photoLibraryDidChange(...) method will be called whenever another app or the user makes a change in the photo library that affects any assets or collections that you fetched prior to the change. The method has a single parameter of type PHChange, which you can use to find out if the changes are related to any of the fetched objects that you are interested in.
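A minimal observer following those steps might look like this (the view controller and its fetch are illustrative scaffolding):

```swift
import UIKit
import Photos

class AlbumViewController: UIViewController, PHPhotoLibraryChangeObserver {
    var fetchResult: PHFetchResult!

    override func viewDidLoad() {
        super.viewDidLoad()
        fetchResult = PHAsset.fetchAssetsWithMediaType(.Image, options: nil)
        PHPhotoLibrary.sharedPhotoLibrary().registerChangeObserver(self)
    }

    deinit {
        PHPhotoLibrary.sharedPhotoLibrary().unregisterChangeObserver(self)
    }

    func photoLibraryDidChange(changeInstance: PHChange!) {
        // Only react if our fetch result was actually affected.
        if let details = changeInstance.changeDetailsForFetchResult(fetchResult) {
            dispatch_async(dispatch_get_main_queue()) {
                self.fetchResult = details.fetchResultAfterChanges
                // Reload the UI from the updated fetch result here.
            }
        }
    }
}
```

Note that `photoLibraryDidChange(_:)` is called on an arbitrary queue, hence the hop to the main queue before touching the UI.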

Sam Davies:

A user can chain incompatible photo edits together — if the adjustment data is not understood by the current extension, the pre-rendered image will be used as input. For example, you can crop an image using the system crop tool before using your custom Photo Editing extension. Once you have saved the edited image, the associated adjustment data will only contain details of the most recent edit. You could store adjustment data from the previous, incompatible edit in your output adjustment data, allowing you to implement a revert function for just your phase of the filter chain. The revert function provided by the Photos app will remove all the edits, returning the photo to its original state.
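Whether an edit is resumable hinges on the adjustment data your extension writes and recognizes. A sketch, where the format identifier and archived payload are hypothetical choices, not API requirements:

```swift
import Photos

// Wrap a filter description in PHAdjustmentData so a later session of
// the same extension can resume (or revert) this phase of the edit.
func makeAdjustmentData(filterName: String) -> PHAdjustmentData {
    let payload = NSKeyedArchiver.archivedDataWithRootObject(["filter": filterName])
    return PHAdjustmentData(formatIdentifier: "com.example.myeditor", // hypothetical
                            formatVersion: "1.0",
                            data: payload)
}

// PHContentEditingController asks this before handing you an edit; if you
// return false, you receive the pre-rendered image instead of the original.
func canHandleAdjustmentData(adjustmentData: PHAdjustmentData?) -> Bool {
    return adjustmentData?.formatIdentifier == "com.example.myeditor"
}
```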
