Monday, May 15, 2023

Context SDK

Felix Krause:

Today, whether your app is opened when your user is taking the bus to work, in bed about to go to sleep, or when out for drinks with friends, your product experience is the same. However, apps of the future will perfectly fit into the context of their users’ environment.


Context SDK leverages machine learning to make optimized suggestions when to upsell an in-app purchase, what type of ad and dynamic copy to display, or predict what a user is about to do in your app, and dynamically change the product flows to best fit their current situation.


Meta has published data on how “less is more” when it comes to notifications and user prompts[…] With Context SDK, you can significantly reduce the number of prompts you show to your users, and as a result increase your conversion rates.

Via Dave Verwer:

This would instantly be deep into “creepy” territory if that data was being sent back to some company’s server to be stored and cross-referenced against loads of other data, but the SDK doesn’t request any additional app permissions and never sends a network request. It all happens on-device.

There is something that keeps my “this doesn’t feel quite right” sense tingling. I think it comes down to many years of hearing story after story of unscrupulous companies doing dubious (or awful) things with large amounts of behaviour data, though, rather than being related to anything this SDK is doing.


This idea feels like a win for everyone involved. Users get fewer calls to action at inconvenient times, and developers get happier users who are slightly more likely to respond to a CTA.

It’s not clear to me what the pricing is or whether you get access to the library’s source.


Old Unix Geek

The source code will only tell you what sources of information they are using (such as the accelerometer). It won't tell you much about how the algorithm actually detects each scenario (it'll be a number of matrices).

For instance, the frequency distribution of the accelerometer on a table will be concentrated towards low frequencies, particularly the DC component.

If a person is holding the phone, the frequencies will be higher because humans tremble. However, if they are looking down at it while sitting, the angle will be different from use in bed.

Again, someone running or biking should have different frequency distributions, and of course these activities also involve moving at certain speeds.
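The frequency-distribution idea above can be sketched in a few lines. This is a minimal illustration, not anything from Context SDK: the function names, the 2 Hz cutoff, and the 0.5 threshold are all invented for the example, and the "accelerometer" signals are synthetic sines standing in for a phone at rest (energy near DC) versus hand tremor (roughly 8–12 Hz).

```python
import numpy as np

def dominant_band_ratio(samples, rate_hz, cutoff_hz=2.0):
    """Fraction of non-DC spectral energy above cutoff_hz.

    A phone resting on a table concentrates accelerometer energy near
    DC; hand tremor (~8-12 Hz) pushes energy into higher bins."""
    spectrum = np.abs(np.fft.rfft(samples - np.mean(samples)))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate_hz)
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return spectrum[freqs > cutoff_hz].sum() / total

def guess_context(samples, rate_hz):
    # Threshold is illustrative only, not taken from any real SDK.
    return "in hand" if dominant_band_ratio(samples, rate_hz) > 0.5 else "on table"

# Synthetic demo: 100 Hz sampling for 2 seconds.
rate = 100
t = np.arange(0, 2, 1.0 / rate)
on_table = 9.81 + 0.02 * np.sin(2 * np.pi * 0.5 * t)  # gravity + slow drift
in_hand = 9.81 + 0.3 * np.sin(2 * np.pi * 10 * t)     # 10 Hz tremor component
```

A real classifier would of course be trained on labeled sensor traces rather than a hand-picked cutoff, which is exactly why, as noted above, the learned matrices wouldn't tell you much even with source access.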

It seems to me that the data and processing should stay on device, in which case it should be about as creepy as the iPhone measuring the number of steps you took. (CoreML is used to process the data.)

It's a clever idea, and wouldn't be too hard to replicate if one lost trust in the developer.

@OUG I wasn’t thinking so much of reverse engineering it as just having control of the dependency.

Not sure I understand the fixation on whether the network is used or not. It's still an invasion of privacy for your own device to learn things about you, or act on data that is learned about you, without your consent, surely? I certainly don't consider Apple's "intelligence" features (under the Siri umbrella) privacy-respecting. Maybe it's just me but the purposes to which this sort of measurement data are put should be something you decide.
