Friday, September 27, 2019

iOS 13 Autocorrect Is Drunk

John Gruber:

One thing I and others have noticed is that when you type a dictionary word correctly — meaning you hit the exact right keys on the on-screen keyboard — iOS 13 autocorrect will replace it with a different dictionary word that makes no contextual sense. Even beyond dictionary words, I’m seeing really strange corrections.

I think this has been going on since before iOS 13.

Update (2019-10-11): Philip:

It is so bad. I could rely on iOS 12 to propose the right words and correct typos. iOS 13 feels like I have to re-teach the AI. Also, umlauts in German are a mess. It never proposes the right word.

Tanner Bennett:

iOS 11 is when they switched to a “machine learning” based autocorrect engine, which is the cause of all this.

Riccardo Mori:

That’s why after switching from my iPhone 5 with iOS 10 to the new iPhone 8 with iOS 12, my initial impressions were that autocorrect was simply worse and felt ‘untrained’. I didn’t realise that what messed things up was the keyboard switching while writing a word.

But even within a single language, autocorrect under iOS 12 does indeed feel less smart than under iOS 10.



I can confirm this has been going on for the better part of a year now (FWIW, I use the classic autocorrect bubbles rather than the QuickType bar). The annoying part is that it will often change the word well after it's been typed.


> I think this has been going on since before iOS 13.

You are not alone - I’ve seen similar behaviour for the better part of a year, at least on my iPad (working with French and Spanish mostly). What is worse, from my POV, is that autocorrect seems obsessed with thinking I want to type in English (correcting French words to something like the equivalent English word). I run the OS in English for a number of reasons.


One thing that I've noticed and haven't seen before (and don't know if it's related) --- if my wife types an iMessage to me in Japanese, the suggestion bar will show me Japanese replies even when I'm using the standard US English keyboard (I do also have the Japanese kana keyboard installed... not sure if this is a factor). This seems to be new.


The bug that annoys me most – and this one is likely specific to Australian English – is that place names routinely get turned into all caps. For example, “Willoughby” turns into “WILLOUGHBY.” It’s like they’ve downloaded a list of Australian postcode place names that are supposed to be all caps on physical envelopes. Lazy. Still present in v13. (Same laziness with Siri and Australian place names. You can buy phonetic dictionaries of Australian place names, FFS. Cf. “Woolloomooloo”)


Most noisy-input-to-text mechanisms, whether touch-input-to-text or speech-to-text, use something called a "language model".

This exists to predict what the next word in a sentence should be given all words seen before.

When the noisy input is read, the system infers a multitude of words that the user could be trying to convey, and it's the language model's job to choose the right word from that mix.

It's very easy to just use one language model (per human language). However, topics of conversation for an individual are not homogeneous: I may discuss technical things at work and talk about the shopping list at home. Similarly, people are not homogeneous: a teenager down the road might say a concert was "sick" whereas I might say it was "great". This is why you need multiple heterogeneous language models per human language to really nail things.
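One standard way to combine several heterogeneous models is linear interpolation: score each candidate under every model and blend the scores with context-dependent weights. This is a toy sketch under assumed numbers (the per-context probabilities and weights are invented), not a claim about how any shipping keyboard works:

```python
# Toy per-context bigram probabilities; real models would be learned
# separately per topic or user population.
WORK_LM = {("the", "build"): 0.30, ("the", "concert"): 0.01}
HOME_LM = {("the", "build"): 0.02, ("the", "concert"): 0.20}

def lm_prob(model, prev, word, floor=1e-4):
    # Unknown bigrams get a small floor probability.
    return model.get((prev, word), floor)

def mixed_prob(prev, word, weights):
    # Linear interpolation of heterogeneous language models.
    # weights reflect the inferred context, e.g. (0.8, 0.2) during work hours.
    w_work, w_home = weights
    return w_work * lm_prob(WORK_LM, prev, word) + w_home * lm_prob(HOME_LM, prev, word)

# At work, "build" outranks "concert" after "the"; at home the ranking flips.
assert mixed_prob("the", "build", (0.8, 0.2)) > mixed_prob("the", "concert", (0.8, 0.2))
assert mixed_prob("the", "concert", (0.1, 0.9)) > mixed_prob("the", "build", (0.1, 0.9))
```

A single homogeneous model, by contrast, averages all contexts together and ends up mediocre in each of them.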

I'm pretty sure that Apple's input-to-text mechanisms are being trained on one body of largely homogeneous users' inputs to form one homogeneous language model. This is why it thinks that "Adobe" is more likely than "Dobbs", and why it thinks "Carmel Mountain" is more likely than the address "Carmel Mount".

In short, I think it's a manifestation of Apple employing machine-learning in a mediocre way.

One reason Alexa worked so well at launch is that Amazon decided early on to have special classes of placeholders (e.g. words, names, address parts) so it could employ variant language models in the right places. This helps with Siri-style interactions. I imagine they and Google are looking seriously at heterogeneous language models.
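The placeholder-class idea boils down to: once you know a slot in the utterance is, say, a name or a city, consult a model specialized for that class instead of the general-purpose one. A minimal sketch, with entirely hypothetical slot vocabularies (not Amazon's actual implementation):

```python
# Hypothetical slot-specific vocabularies; a real system would use full
# statistical models per slot class, not simple word sets.
SLOT_MODELS = {
    "CITY": {"seattle", "sydney", "woolloomooloo"},
    "NAME": {"alice", "dobbs", "gruber"},
}

def resolve_slot(slot_class, candidates):
    # candidates: words ranked by acoustic/touch likelihood, best first.
    # Prefer the first candidate the slot-specific model recognizes.
    for word in candidates:
        if word in SLOT_MODELS.get(slot_class, set()):
            return word
    return candidates[0]  # fall back to the best raw guess

# A general model might prefer the common word "adobe", but the NAME
# slot model knows "dobbs" is a plausible surname.
print(resolve_slot("NAME", ["adobe", "dobbs"]))  # → dobbs
```

That is exactly the "Adobe" vs. "Dobbs" failure above: a single general model ranks by overall frequency, while a slot-aware one ranks by what belongs in that position.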


My take on the crap autocomplete (and it's always been this way) is that iOS does not fix bad typing; it fixes what it sees as unreadable dictation. It treats typing as dictation, not typos.
T9 worked as if you had mistyped, like an i instead of an o. You know, typos!

PS: iOS really is becoming dumber for the user, and uglier too.
Mildly depressing, in a kind of less and less important way.


Poor autocomplete on iOS was one of the reasons I remember trying out Android a few years ago (around iOS 7). I haven't switched back since. In my experience, Google's implementation is much less invasive, doing the "right thing" most of the time.


This definitely predates iOS 13. I can even give a specific example, that hits me over and over again: I'm usually unable to successfully type the word "range" without autocorrect changing it to "Ränge," the last name of a former colleague. Amazingly, it does this even though my phone doesn't contain any email exchanges with this person. She's not in my contacts, nor can I find any trace of her in Messages or any other system apps. In fact I have no memory of communicating with this person in any way in at least six years. We are FB friends, and that's pretty much it. But somehow, iOS has decided that this last name is very important to me and should always be used instead of the word "range," and this has been going on for years.


Similar, if opposite, example to @Jack: autocomplete regularly fails to capitalize names that _are_ in my address book. iOS used to do the right thing here automatically.

In the past couple years I've noticed autocomplete making cleverer substitutions—like changing a word based on context after I've typed a couple more words—for an overall stupider outcome. Sometimes it works great. But it regularly substitutes the wrong word for reasons that are impossible to discern, so I find it less predictable.

I assume that somewhere along the way Apple added a layer of ML-driven predictions? Regardless of the reasons, it's made typing more frustrating than the earlier days.


An update: I was able to return autocomplete to pre-iOS 13 levels of inconsistency by turning off the swipe keyboard. Prior to doing that, sometimes the keyboard would interpret swiping instead of tapping, and then decide to insert one or more guessed words ahead of what I was typing, which was almost always wrong or just completely nonsensical.
