Sunday, July 23, 2006

Quantitative Design

Humanized has an interesting article about measuring interface efficiency using information theory (via Daniel Jalkut). It’s an idea that deserves some thought, but for the moment I’m unconvinced. The thesis seems reasonable, but the model doesn’t capture the nuances of the real-world examples and therefore it makes predictions that seem intuitively wrong.

The natural language examples make me suspicious right off the bat. The author does draw a distinction between information and meaning, but subsequent paragraphs sometimes conflate these ideas.

Why is efficiency, defined as the amount of information entered, important? There are so many other factors to consider (including time spent waiting, cognitive load, propensity for errors, and ease of error recovery). My first reaction is that it’s better to bundle all the known and unknown factors together and measure efficiency using the time to complete the task.

In the real world, we rarely start from scratch and try to set a specific time. The initial state of the watch and the shape of the delta matter. Usually, we want to adjust the watch a few minutes forward or backward if it’s gained or lost time. Or we want to leave the minutes alone and shift the hours to account for daylight saving time or traveling to another time zone. My digital watch lets me do the daylight saving adjustment using three button presses. Does that make it more than 100% efficient?
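Here’s the rough arithmetic behind that question. The four-button watch and the bits-per-press accounting are my own assumptions, and I’m reading the article’s efficiency as the information the task supposedly requires divided by the information the user actually supplies:

    from math import log2

    # Assumption: a four-button watch, so each press selects one of
    # four buttons and carries log2(4) = 2 bits.
    buttons = 4
    presses = 3
    bits_supplied = presses * log2(buttons)   # 6 bits for the DST adjustment

    # The article's figure for setting a watch: one of 720 possible times.
    bits_required = log2(720)                 # about 9.49 bits

    print(f"supplied {bits_supplied:.2f} bits, required {bits_required:.2f} bits")
    print(f"efficiency: {bits_required / bits_supplied:.0%}")   # over 100%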

Analog watches also make small adjustments to the time easy, although sometimes the only way to go backwards a few minutes is to go forwards nearly a whole day. Surely these factors matter, but the model doesn’t capture them.

The article asserts that turning the crown of an analog watch represents 9.5 bits because there are 720 possible times. The way this is presented seems like reasoning backwards to me.
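For reference, the 9.5 figure is just the base-2 logarithm of the number of distinguishable times on a 12-hour dial:

    from math import log2

    # 12 hours x 60 minutes = 720 distinguishable times
    print(f"{log2(720):.2f} bits")   # 9.49, the article's "9.5 bits"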

What if an analog watch had two knobs to turn, one for each hand? There are 12 positions for the hour hand and 60 for the minute hand. In the real world, this makes it much easier to set the time because you don’t have to go around and around to get to the right hour. But according to the model, the efficiency has gone down because we’re still choosing one of 720 possible times, only now we have to choose between two knobs, too. After all, the digital watch was penalized for having two untouched buttons while in the quasimode of advancing the hour.
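Working that out with the model’s own accounting (my reading of it): splitting the dial across two knobs doesn’t change the information carried by the hands at all, and the only difference is the extra choice of which knob to turn.

    from math import log2

    one_crown = log2(720)              # ~9.49 bits: one of 720 times
    two_knobs = log2(12) + log2(60)    # the same ~9.49 bits, split per hand

    # If the model also charges for deciding which knob to turn (as it
    # charged the digital watch for its untouched buttons), that is
    # roughly one extra bit:
    with_choice = two_knobs + log2(2)

    print(f"one crown: {one_crown:.2f} bits")
    print(f"two knobs: {two_knobs:.2f} bits, {with_choice:.2f} with the knob choice")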

Here are two designs that the model predicts would be good:

4 Comments

First, I think his "amount of information is equal to number of bits" explanation is wrong, if I remember information theory and Shannon correctly. He actually shows this himself with his "Japanese" comparison: Do Japanese books contain less information because they're shorter? Even if they're translations of English books?

Second, his "Efficiency lets you know when you can stop looking for a better design" meme is wrong. It's true if you're designing for robots. If you're designing for humans, it's wrong, because humans learn, and even "worse," they constantly forget and thus are forced to re-learn. The most efficient interface is only the most efficient interface for a user who uses the interface correctly. But the whole issue of interface design is that users *never use the interface correctly* and that we thus need to design interfaces which work for people who are "stupid" from the POV of the programmer.

His article is interesting, but in my opinion completely useless unless you're designing an interface to be used by robots.

LKM: I think the information is equal to the number of bits, but not the bits as determined by his 5-bit encoding. This is one of the reasons the natural language example bothered me. IIRC, we can't even know the exact amount of information in a string because the Kolmogorov complexity function is not computable.
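To make that concrete (a small sketch; the sample string and the choice of compressors are arbitrary): each compressor produces a different size for the same string, and each size is only an upper bound on the true information content, never the exact value.

    import bz2
    import lzma
    import zlib

    text = ("the quick brown fox jumps over the lazy dog " * 50).encode("ascii")

    # Each compressed length is an upper bound on the string's Kolmogorov
    # complexity; none of them is the exact value, and no program can
    # compute that value in general.
    print("raw: ", len(text), "bytes")
    print("zlib:", len(zlib.compress(text, 9)), "bytes")
    print("bz2: ", len(bz2.compress(text, 9)), "bytes")
    print("lzma:", len(lzma.compress(text)), "bytes")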

I agree with your second point. I should have listed learnability along with the "other factors."

>I think the information is equal
>to the number of bits

Well, it's been a few years since I took information theory, so I looked it up on Wikipedia:
http://en.wikipedia.org/wiki/Information#Measuring_information_entropy

There is no easily quotable part of this explanation, so I'll try to do something else instead. Imagine that you write a letter and store it as a text file. This letter contains a certain amount of information - namely, the text you've written. It's stored in ASCII code, probably. Now, imagine that you zip this file. Now it's not stored in ASCII anymore; it's stored as an ASCII file inside a ZIP file, which is smaller. However, this smaller file contains exactly the same information as the text file you started out with. Fewer bits, same information.
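A quick sketch of that round trip, with Python's zlib standing in for the ZIP file (the letter text here is made up): the compressed copy is smaller, yet decompressing it gives back exactly the original, so no information was lost.

    import zlib

    letter = ("Dear reader, this letter says the same thing no matter "
              "how it happens to be stored on disk.\n" * 20).encode("ascii")

    packed = zlib.compress(letter, 9)     # the smaller, "zipped" copy
    restored = zlib.decompress(packed)    # unpacking it again

    print(len(letter), "bytes as plain ASCII")
    print(len(packed), "bytes compressed")
    print(restored == letter)             # True: fewer bits, same information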

ah...

>but not the bits as determined by his
>5-bit encoding.

right :-)
