Wednesday, March 31, 2021

Making NSFetchRequest.fetchBatchSize Work With Swift

Apple Frameworks Engineer:

Set in Swift is an immutable value type. We do not recommend making Core Data relationships typed this way despite the obvious convenience. Core Data makes heavy use of Futures, especially for relationship values. These are reference types expressed as NSSet. The concrete instance is a future subclass however. This lets us optimize memory and performance across your object graph. Declaring an accessor as Set forces an immediate copy of the entire relationship so it can be an immutable Swift Set. This loads the entire relationship up front and fulfills the Future all the time, immediately. You probably do not want that.

It’s so convenient, though, and often it doesn’t matter because it’s a small relationship or one that you will be fully accessing anyway. Perhaps the answer is to provide a duplicate set of NSSet accessors for use when you want the lazy behavior enabled by the class cluster.
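
Here’s a rough sketch of what that might look like, using hypothetical Author and Book entities: keep the @NSManaged accessor typed as NSSet, so Core Data can hand back its lazy class-cluster instance, and add a bridged convenience property for call sites that are going to touch the whole relationship anyway:

import CoreData

final class Book: NSManagedObject {}

final class Author: NSManagedObject {
    // Backed by Core Data; leaving this as NSSet lets the framework return
    // its lazy future subclass without copying anything.
    @NSManaged var books: NSSet

    // Convenience for code that will use the entire relationship anyway.
    // Bridging copies the set, fulfilling the future immediately.
    var bookSet: Set<Book> {
        books as? Set<Book> ?? []
    }
}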

Similarly, for fetch requests with batching enabled, you do not want a Swift Array but rather an NSArray, to avoid making an immediate copy of the future.

Needless to say, the documentation doesn’t mention this, but it does do a good job of explaining what fetchBatchSize does:

If you set a nonzero batch size, the collection of objects returned when an instance of NSFetchRequest is executed is broken into batches. When the fetch is executed, the entire request is evaluated and the identities of all matching objects recorded, but only data for objects up to the batchSize will be fetched from the persistent store at a time. The array returned from executing the request is a proxy object that transparently faults batches on demand. (In database terms, this is an in-memory cursor.)

You can use this feature to restrict the working set of data in your application. In combination with fetchLimit, you can create a subrange of an arbitrary result set.
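
The configuration itself is simple (the entity name here is a placeholder); the rest of this post is about how to execute such a request from Swift without losing the batching:

let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
request.fetchBatchSize = 50  // fault in row data 50 objects at a time
request.fetchLimit = 500     // cap the result set, per the docs above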

Under the hood, this works by eagerly fetching the object IDs and lazily fetching and caching the objects, in batches, as they are accessed. The implementation is more optimized than what you could implement yourself, passing the object IDs to SQLite via temporary tables rather than as parameters to the SQL statement. There are some caveats to be aware of, though.

So, how do you get the optimized fetchBatchSize behavior when using Swift? The Apple engineer suggests using an NSArray, which I take to mean casting the result of the fetch via as NSArray to disable automatic bridging and give your Swift code the original NSArray. However, my experience is that this doesn’t work: all the objects get fetched before your code even accesses the array. I think that’s because the special as behavior only disables bridging when calling Objective-C APIs from Swift, but NSManagedObjectContext.fetch(_:) is an overlay method implemented in Swift, not just a renaming of -[NSManagedObjectContext executeFetchRequest:error:].
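
For concreteness, this is the sort of thing I mean (the entity name and context are placeholders), and in my testing it does not preserve the batching:

let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
request.fetchBatchSize = 50

// fetch(_:) is the Swift overlay: it bridges the result into a Swift Array,
// which realizes every object, and the cast back to NSArray doesn't undo that.
let results = try context.fetch(request) as NSArray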

This can be worked around by using an Objective-C category to expose the original method:

#import <CoreData/CoreData.h>

NS_ASSUME_NONNULL_BEGIN

@interface NSManagedObjectContext (MJT)
- (nullable NSArray *)mjtExecuteFetchRequest:(NSFetchRequest *)request error:(NSError **)error;
@end

@implementation NSManagedObjectContext (MJT)
// Pass through to the original Objective-C API so Swift callers can opt out of bridging.
- (nullable NSArray *)mjtExecuteFetchRequest:(NSFetchRequest *)request error:(NSError **)error {
    return [self executeFetchRequest:request error:error];
}
@end

NS_ASSUME_NONNULL_END
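
For reference, the generated Swift interface for that category is roughly:

extension NSManagedObjectContext {
    // The importer prunes the "FetchRequest" suffix because it matches the
    // parameter's type, and the error parameter becomes throws.
    func mjtExecute(_ request: NSFetchRequest<NSFetchRequestResult>) throws -> [Any]
}

Because mjtExecute(_:) is a genuine Objective-C method, applying as NSArray to its result disables the bridge and hands back Core Data’s original array, which is what the wrapper below relies on.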

Then you can implement a fetching method that preserves the batching behavior:

public extension NSManagedObjectContext {
    func fetchNSArray<T: NSManagedObject>(_ request: NSFetchRequest<T>) throws -> NSArray {
        // @SwiftIssue: Doesn't seem like this cast should be necessary.
        let protocolRequest = request as! NSFetchRequest<NSFetchRequestResult>        
        return try mjtExecute(protocolRequest) as NSArray
    }

    func fetch<T: NSManagedObject>(_ request: NSFetchRequest<T>,
                                   batchSize: Int) throws -> MJTBatchFaultingCollection<T> {
        request.fetchBatchSize = batchSize
        return MJTBatchFaultingCollection(array: try fetchNSArray(request))
    }
}
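
Calling the second method looks something like this (the Item entity name and context are placeholders); the elements are optional for reasons explained below:

let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
let results = try context.fetch(request, batchSize: 50)

for case let object? in results {
    // Only the batch containing this index has been loaded from the store.
    print(object.objectID)
}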

The first method gives you the NSArray, but that is not very ergonomic to use from Swift. First, you have to cast the objects back to your NSManagedObject subclass. Second, it doesn’t behave well when an object is deleted (or some other SQLite error occurs) between your fetch and when Core Data tries to fulfill the fault.

If you’re using Swift, you can’t catch the NSObjectInaccessibleException, so you should be using context.shouldDeleteInaccessibleFaults = true. This means that instead of an exception you get a sort of tombstone object that’s of the right class, but with all its properties erased.
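
That’s a one-line setting on the context, e.g. where the container is created (the model name is a placeholder):

let container = NSPersistentContainer(name: "Model")
container.loadPersistentStores { _, error in
    precondition(error == nil)
}
container.viewContext.shouldDeleteInaccessibleFaults = true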

But it’s hard to remember to check for that each time you use one of the objects in the NSArray, and you probably don’t want to accidentally operate on the empty properties. So the second method uses a helper type to try to make the abstraction less leaky, always giving you either a valid, non-fault object or nil:

public struct MJTBatchFaultingCollection<T: NSManagedObject> {
    let array: NSArray
    let bounds: Range<Int>

    // array is presumed to be a _PFBatchFaultingArray from a fetch request
    // using fetchBatchSize.
    public init(array: NSArray, bounds: Range<Int>? = nil) {
        self.array = array
        self.bounds = bounds ?? 0..<array.count
    }
}

extension MJTBatchFaultingCollection: RandomAccessCollection {
    public typealias Element = T?
    public typealias Index = Int
    public typealias SubSequence = MJTBatchFaultingCollection<T>
    public typealias Indices = Range<Int>
    
    public var startIndex: Int { bounds.lowerBound }
    public var endIndex: Int { bounds.upperBound }
       
    public subscript(position: Index) -> T? {
        guard
            let possibleFault = array[position] as? T,
            let context = possibleFault.managedObjectContext,
            // Unfault so that isDeleted will detect an inaccessible object.
            let object = try? context.existingObject(with: possibleFault.objectID),
            let t = object as? T else { return nil }
        return t.isDeleted ? nil : t
    }

    public subscript(bounds: Range<Index>) -> SubSequence {
        MJTBatchFaultingCollection<T>(array: array, bounds: bounds)
    }
}

extension MJTBatchFaultingCollection: CustomStringConvertible {
    public var description: String {
        // The default implementation would realize all the objects by printing
        // the underlying NSArray.
        return "<MJTBatchFaultingCollection<\(T.self)> bounds: \(bounds)>"
    }
}

It’s still a bit leaky, because you have to be careful to access the collection only from the context’s queue. But the distinct type makes that easier to remember, and you’ll get a compile error if you try to pass it to a method that takes an Array.

The batch faulting behavior and batch size are preserved if you iterate over the collection or slice it. (When iterating the NSArray directly, small batch sizes don’t work as expected because NSFastEnumerationIterator will always load at least 16 objects at a time.)
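
Putting it together (the entity name and context are still placeholders): slicing reuses the same backing array and absolute indices, and doing the work inside perform(_:) keeps the access on the context’s queue:

let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
context.perform {
    guard let results = try? context.fetch(request, batchSize: 10) else { return }

    // Only the batches covering these indices get loaded from the store.
    let firstPage = results[0..<min(20, results.count)]
    let live = firstPage.compactMap { $0 }  // drops deleted/inaccessible objects
    print(live.count)
}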

