Tim Hardwick:
Microsoft today discontinued its Cortana mobile app. As a result, the company has ended all support for third-party Cortana skills and eliminated the Cortana app for iOS and Android devices.
[…]
The eponymous mobile app was originally launched in November 2018, but apparently never gained a user base big enough in its short lifetime for Microsoft to consider it worth maintaining.
Update (2021-03-31): Tanner Bennett:
Not that I would have even used it, but Cortana as an app could never compete with the built-in virtual assistant. It’s a shame regulatory action hasn’t ever been taken to force Apple to allow us to set a default assistant.
Becky Hansmeyer (tweet):
Turns out, if you want to sync Core Data-backed data between devices and have those changes reflected in your UI in a timely manner, you have some more work to do. To figure out what that work is, you can’t look at Apple’s Core Data templates. You have to look at their sample code.
[…]
Don’t be like me: make sure your schema is deployed.
After launch, remember that you still have to do this every time you change your Core Data model, before you release your update to testers or the App Store. If your production CloudKit schema doesn’t properly correspond to your production Core Data model, syncing is going to break in all kinds of terrifying ways.
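For reference, here’s a minimal sketch of roughly the kind of setup the sample code demonstrates beyond the Xcode template, with a hypothetical model name; the store options are what let remote CloudKit changes show up in your UI, and the debug-only schema initialization is what keeps the development schema in step with your model:

import CoreData

// Minimal sketch; "Model" is a hypothetical model name.
final class PersistenceController {
    let container: NSPersistentCloudKitContainer

    init() {
        container = NSPersistentCloudKitContainer(name: "Model")

        guard let description = container.persistentStoreDescriptions.first else {
            fatalError("Missing persistent store description")
        }
        // History tracking and remote change notifications are what let you
        // notice CloudKit changes and reflect them in the UI promptly.
        description.setOption(true as NSNumber, forKey: NSPersistentHistoryTrackingKey)
        description.setOption(true as NSNumber,
                              forKey: NSPersistentStoreRemoteChangeNotificationPostOptionKey)

        container.loadPersistentStores { _, error in
            if let error = error { fatalError("Failed to load store: \(error)") }
        }
        // Merge background/remote changes into the UI-facing context.
        container.viewContext.automaticallyMergesChangesFromParent = true

        #if DEBUG
        // Pushes the Core Data model to the CloudKit development schema; you
        // still have to deploy it to production in the CloudKit Console.
        do { try container.initializeCloudKitSchema(options: []) }
        catch { print("Schema initialization failed: \(error)") }
        #endif
    }
}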
Apple Frameworks Engineer:
Set in Swift is an immutable value type. We do not recommend making Core Data relationships typed this way despite the obvious convenience. Core Data makes heavy use of Futures, especially for relationship values. These are reference types expressed as NSSet. The concrete instance is a future subclass however. This lets us optimize memory and performance across your object graph. Declaring an accessor as Set forces an immediate copy of the entire relationship so it can be an immutable Swift Set. This loads the entire relationship up front and fulfills the Future all the time, immediately. You probably do not want that.
It’s so convenient, though, and often it doesn’t matter because it’s a small relationship or one that you will be fully accessing anyway. Perhaps the answer is to provide a duplicate set of NSSet accessors for use when you want the lazy behavior enabled by the class cluster.
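For illustration, that might look something like this (the entity and property names here are made up): keep the stored relationship typed as NSSet so it stays lazy, and add a typed convenience accessor that pays the copy cost only when you ask for it.

import CoreData

// Hypothetical entities, just to make the trade-off concrete.
final class Book: NSManagedObject {}

final class Author: NSManagedObject {
    // Lazy: stays a Core Data future (an NSSet subclass) until the contents
    // are actually touched.
    @NSManaged var books: NSSet

    // Typed convenience accessor for when you know you’ll touch everything;
    // bridging to a Swift Set copies the whole relationship eagerly.
    var allBooks: Set<Book> { (books as? Set<Book>) ?? [] }
}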
Similarly for fetch requests with batching enabled, you do not want a Swift Array but instead an NSArray to avoid making an immediate copy of the future.
Needless to say, the documentation doesn’t mention this, but it does do a good job of explaining what fetchBatchSize does:
If you set a nonzero batch size, the collection of objects returned when an instance of NSFetchRequest is executed is broken into batches. When the fetch is executed, the entire request is evaluated and the identities of all matching objects recorded, but only data for objects up to the batchSize will be fetched from the persistent store at a time. The array returned from executing the request is a proxy object that transparently faults batches on demand. (In database terms, this is an in-memory cursor.)

You can use this feature to restrict the working set of data in your application. In combination with fetchLimit, you can create a subrange of an arbitrary result set.
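For reference, the basic setup is just a couple of properties on the request; a quick sketch with made-up entity and attribute names:

import CoreData

// Hypothetical entity and attribute; the point is the two request properties.
func fetchRecentItems(in context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
    request.sortDescriptors = [NSSortDescriptor(key: "createdAt", ascending: false)]
    request.fetchBatchSize = 50   // materialize row data 50 objects at a time
    request.fetchLimit = 500      // cap the overall result set
    // Note: as discussed below, fetching through the Swift overlay can defeat
    // the batching, which is what the rest of this post works around.
    return try context.fetch(request)
}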
Under the hood, this works by eagerly fetching the object IDs and lazily fetching and caching the objects, in batches, as they are accessed. The implementation is more optimized than what you could implement yourself, passing the object IDs to SQLite via temporary tables rather than as parameters to the SQL statement. There are some caveats to be aware of:
If you’re using a coordinator with multiple stores, it will get the sorting right, fetching multiple batches and merging them together without ever doing a giant fetch. However, it does seem to eventually load all the objects into memory at once, which mostly defeats the purpose of batching. If you can hold everything in memory but just prefer not to, I guess you could refault all the objects after the sorting has completed and let the special array bring them back as needed. Or, you can avoid combining fetchBatchSize with multiple stores and instead use a dictionary fetch request to get just the object IDs and the properties needed for sorting, save the IDs, and manually fetch batches of full objects as needed (sketched below).
I’m a little worried that there are bugs related to multiple stores. Disassembling _PFBatchFaultingArray shows code that anticipates sometimes receiving more object IDs than it expected to fetch, and this has occurred in the wild. It looks as if Core Data is querying the Z_PK without regard for which store it’s supposed to be in. However, I tried to reproduce this situation by deliberately creating objects in multiple stores with the same Z_PK, and everything seemed to work as expected on macOS 10.15.7.
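Here’s a rough sketch of that dictionary-fetch alternative, again with made-up entity and attribute names; the object IDs come back through an NSExpressionDescription:

import CoreData

// Fetch just the object IDs (plus the sort key) as dictionaries, then page
// through the saved IDs and fetch full objects in batches yourself.
// "Item" and "createdAt" are hypothetical.
func fetchSortedItemIDs(in context: NSManagedObjectContext) throws -> [NSManagedObjectID] {
    let objectIDProperty = NSExpressionDescription()
    objectIDProperty.name = "objectID"
    objectIDProperty.expression = NSExpression.expressionForEvaluatedObject()
    objectIDProperty.expressionResultType = .objectIDAttributeType

    let request = NSFetchRequest<NSDictionary>(entityName: "Item")
    request.resultType = .dictionaryResultType
    request.propertiesToFetch = [objectIDProperty, "createdAt"]
    request.sortDescriptors = [NSSortDescriptor(key: "createdAt", ascending: false)]

    return try context.fetch(request).compactMap { $0["objectID"] as? NSManagedObjectID }
}

// Later, fetch one batch of full objects for a slice of the saved IDs.
// (Re-sort the batch afterward if order matters; IN predicates don’t preserve it.)
func fetchItems(with ids: ArraySlice<NSManagedObjectID>,
                in context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
    request.predicate = NSPredicate(format: "self IN %@", Array(ids))
    return try context.fetch(request)
}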
So, how do you get the optimized fetchBatchSize behavior when using Swift? The Apple engineer suggests using an NSArray, which I take to mean casting the result of the fetch via as NSArray to disable automatic bridging and give your Swift code the original NSArray. However, my experience is that this doesn’t work. All the objects get fetched before your code even accesses the array. I think it’s because the special as behavior is for disabling bridging when calling Objective-C APIs from Swift, but NSManagedObjectContext.fetch(_:) is an overlay method implemented in Swift, not just a renaming of -[NSManagedObjectContext executeFetchRequest:error:].
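In other words, the naive version compiles but doesn’t preserve the batching (context and request as in the earlier sketch):

// Compiles, but by the time the overlay’s fetch(_:) has returned a Swift
// Array the objects have already been realized; casting back to NSArray
// doesn’t restore the lazy batch-faulting proxy.
let everything = try context.fetch(request) as NSArray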
This can be worked around by using an Objective-C category to expose the original method:
#import <CoreData/CoreData.h>

// Passes the fetch straight through to -executeFetchRequest:error: so Swift
// callers get the original (batch-faulting) NSArray rather than a bridged copy.
@interface NSManagedObjectContext (MJT)
- (nullable NSArray *)mjtExecuteFetchRequest:(NSFetchRequest *)request error:(NSError **)error;
@end

@implementation NSManagedObjectContext (MJT)
- (nullable NSArray *)mjtExecuteFetchRequest:(NSFetchRequest *)request error:(NSError **)error {
    return [self executeFetchRequest:request error:error];
}
@end
Then you can implement a fetching method that preserves the batching behavior:
import CoreData

public extension NSManagedObjectContext {
    // The category method imports into Swift as mjtExecute(_:); going through
    // it preserves the batch-faulting proxy array.
    func fetchNSArray<T: NSManagedObject>(_ request: NSFetchRequest<T>) throws -> NSArray {
        // @SwiftIssue: Doesn't seem like this cast should be necessary.
        let protocolRequest = request as! NSFetchRequest<NSFetchRequestResult>
        return try mjtExecute(protocolRequest) as NSArray
    }

    func fetch<T: NSManagedObject>(_ request: NSFetchRequest<T>,
                                   batchSize: Int) throws -> MJTBatchFaultingCollection<T> {
        request.fetchBatchSize = batchSize
        return MJTBatchFaultingCollection(array: try fetchNSArray(request))
    }
}
The first method gives you the NSArray, but that is not very ergonomic to use from Swift. First, you have to cast the objects back to your NSManagedObject subclass. Second, it doesn’t behave well when an object is deleted (or some other SQLite error occurs) between your fetch and when Core Data tries to fulfill the fault.
If you’re using Swift, you can’t catch the NSObjectInaccessibleException, so you should be using context.shouldDeleteInaccessibleFaults = true. This means that instead of an exception you get a sort of tombstone object that’s of the right class, but with all its properties erased.

But it’s hard to remember to check for that each time you use one of the objects in the NSArray, and you probably don’t want to accidentally operate on the empty properties. So the second method uses a helper type to try to make the abstraction less leaky, always giving you either a valid, non-fault object or nil:
public struct MJTBatchFaultingCollection<T: NSManagedObject> {
    let array: NSArray
    let bounds: Range<Int>

    // array is presumed to be a _PFBatchFaultingArray from a fetch request
    // using fetchBatchSize.
    public init(array: NSArray, bounds: Range<Int>? = nil) {
        self.array = array
        self.bounds = bounds ?? 0..<array.count
    }
}

extension MJTBatchFaultingCollection: RandomAccessCollection {
    public typealias Element = T?
    public typealias Index = Int
    public typealias SubSequence = MJTBatchFaultingCollection<T>
    public typealias Indices = Range<Int>

    public var startIndex: Int { bounds.lowerBound }
    public var endIndex: Int { bounds.upperBound }

    public subscript(position: Index) -> T? {
        guard
            let possibleFault = array[position] as? T,
            let context = possibleFault.managedObjectContext,
            // Unfault so that isDeleted will detect an inaccessible object.
            let object = try? context.existingObject(with: possibleFault.objectID),
            let t = object as? T else { return nil }
        return t.isDeleted ? nil : t
    }

    public subscript(bounds: Range<Index>) -> SubSequence {
        MJTBatchFaultingCollection<T>(array: array, bounds: bounds)
    }
}

extension MJTBatchFaultingCollection: CustomStringConvertible {
    public var description: String {
        // The default implementation would realize all the objects by printing
        // the underlying NSArray.
        return "<MJTBatchFaultingCollection<\(T.self)> bounds: \(bounds)>"
    }
}
It’s still a bit leaky, because you have to be careful to only access the collection from the context’s queue. But this is somewhat obvious because it has a separate type, so you’ll get an error if you try to pass it to a method that takes an Array.
The batch faulting behavior and batch size are preserved if you iterate over the collection or slice it. (When iterating the NSArray directly, small batch sizes don’t work as expected because NSFastEnumerationIterator will always load at least 16 objects at a time.)
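Putting it together, usage might look roughly like this (with a hypothetical Item entity); each element is an optional, so objects deleted out from under you simply come back as nil:

import CoreData

// Hypothetical entity class.
final class Item: NSManagedObject {
    @NSManaged var createdAt: Date
}

func printRecentItems(in context: NSManagedObjectContext) throws {
    let request = NSFetchRequest<Item>(entityName: "Item")
    request.sortDescriptors = [NSSortDescriptor(key: "createdAt", ascending: false)]

    // MJTBatchFaultingCollection<Item>; only the batches actually touched
    // get loaded from the store.
    let items = try context.fetch(request, batchSize: 50)

    // Element is Item?, so deleted/inaccessible objects come back as nil.
    for case let item? in items.prefix(200) {
        print(item.objectID, item.createdAt)
    }
}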
Apple Frameworks Engineer:
Additionally you should almost never use NSPersistentStoreCoordinator’s migratePersistentStore method but instead use the newer replacePersistentStoreAtURL. (you can replace emptiness to make a copy). The former loads the store into memory so you can do fairly radical things like write it out as a different store type. It pre-dates iOS. The latter will perform an APFS clone where possible.
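As a rough sketch, using it to copy a store out to a backup location looks something like this (the container and URLs are assumptions):

import CoreData

// Sketch: copy the current store to a backup URL with the newer API.
// Unlike migratePersistentStore, this doesn’t unload the source store, and
// it reportedly uses an APFS clone where possible.
func backUpStore(of container: NSPersistentContainer, to backupURL: URL) throws {
    guard let storeURL = container.persistentStoreDescriptions.first?.url else { return }
    try container.persistentStoreCoordinator.replacePersistentStore(
        at: backupURL,                       // destination; created or overwritten
        destinationOptions: nil,
        withPersistentStoreFrom: storeURL,   // source store stays loaded
        sourceOptions: nil,
        ofType: NSSQLiteStoreType
    )
}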
Tom Harrington:
[This] method is almost totally undocumented, so you’re on your own working out how to use it. The dev forums post mentioned above is from summer 2020. The replacePersistentStore(...) method was introduced five years earlier in iOS 9, but the forum post was the first time most of the information appeared.

[This] is the first suggestion I’ve seen that migratePersistentStore(...) might not be a good idea anymore. It’s not deprecated and I haven’t seen any previous source recommending against its use.
There are some comments in the header.
Incidentally you won’t find this if you’re using Swift and ⌘-click on the function name. You need to find the Objective-C header. One way to do this in Xcode is to press ⌘-shift-O and start typing the class name.
[…]
Its declaration says it can throw. I tried intentionally causing some errors but it never threw. For example, what if sourceURL points to a nonexistent file? That seems like it would throw, especially since the function doesn’t return anything to indicate success or failure. It doesn’t throw, although there’s a console message reading Restore error: invalidSource("Source URL must exist").
He’s figured out a lot, though other important details like the APFS support remain a mystery.
Tom Harrington:
The demo app I’ve been using is now on GitHub. You can take a look here. Or go directly to the diff of replacing migrate with replace here.
[…]
The backup process is simpler than it used to be, because replace doesn’t have the same side-effect that migrate did of unloading the persistent store.
[…]
Even though the migrate and replace methods seem pretty similar, the semantics are slightly different when the destination is a currently-loaded store. My new restore code reflects that.