February 07, 2002

False prophets of Usability - Part 1
In the past few years usability has become something of a buzzword. That's both a good and a bad thing. What's bad is that the meaning of usability gets muddied, re-interpreted and sometimes even set aside. You have software companies selling "automated usability testing solutions" (no one really sells products anymore). You have traditional graphic designers passing themselves off as "interaction" this and "user experience" that. And of course most of the "buyers" don't know what is what -- they're just buying instant "usability" -- or so they think. I'm not saying I'm perfect, but there are clearly some folks in the "usability game" who are just jumping on a bandwagon and bringing their own version of snake oil with them.

Here's an example:

NetConversions has some pretty neat software to sell. It's mostly event logging / data capture type of stuff. It could be very useful to a business or a usability engineer doing research. But, listen to this claim from the NetConversions site:

"True Usability™ is an innovative and rapid data driven method to test for website usability using your actual visitor traffic to improve the user experience and to optimize bottom-line results." [November 2005 Update: Netconversions is now "Atlas" and is still offering True Usability™]

Note that they've co-opted and trademarked "True Usability" as their product name. Of course this really has little to do with "truth" or "usability" as you'll soon see. They have a "white paper" on their site claiming to compare various "approaches to usability": heuristic evaluations (aka "experts"), focus groups, surveys, server logs, and of course "True Usability™". This supposed comparison has more errors in it than I can cover here, but here are the highlights:

Supposedly a heuristic evaluation is "an analysis of the site with respect to a set of usability guidelines. These usability guidelines are often based upon the expert’s past testing and consulting experiences." -- No, a heuristic evaluation is based on heuristics. A guideline review is based on guidelines, and most "experts" don't create their own guidelines from scratch. Clearly they don't know what they are talking about. Looking at their "about us" page, there isn't a person listed with a background in HCI, Usability Engineering or User Centered Design -- instead they are technologists and marketing data research types. Don't get me wrong -- I like those types of people, but I know usability professionals when I see them, and I don't see any at this company. If they're really selling "usability" expertise, shouldn't they show their qualifications in the field?

The "comparison" opens with a very nice quote about usability testing from the Industry Standard -- evidently to promote NetConversions' product. Of course the type of usability testing Usability Engineers have been doing for decades -- involving humans observing and listening to users -- isn't even mentioned in this comparison. It seems anything that can help you evaluate the usability of a system is now considered "usability testing".

Unfortunately NetConversions isn't the only company doing things along this line, and even many usability professionals can fall into the trap of promising to solve all of a customer's problems in one fell swoop. Usability (as a field) suffers when "experts" don't meet expectations. We have to set realistic expectations with clients and help them select the tools, methods and approaches that meet their needs. We can't over-promise and under-deliver. Perfect usability is a goal never attained, but great strides can be made in that direction if businesses, designers and developers work together.

There's plenty of bad usability out there -- more than enough for all the traditional usability professionals and the new entrants to the "usability game" alike. We can work together -- tool-makers and practitioners, researchers and designers, marketers and engineers. Let's just quit promising that we can get companies to the moon on the next bus leaving town.

Be realistic, helpful and truthful...and maybe Real Usability™ will happen.

February 05, 2002

Megway TH. Bigger than Jesus
This is amazing! I gotta find out where to get me one of these.

"Due to its unique Opposable Digits Technology™, each Megway is capable of carrying up to 120 pounds in its front cargo compartment in the form of either boxes or bags. The optional broom, mop, and floor buffing attachments (not shown) turn Megway into a powerful cleaning machine."

The advanced technology behind the Megway is really cool.

Thanks to Jason Kottke...great stuff.
Don't let your web site fall out of the window
The question of whether to open a new browser window for offsite links came up on CHI-WEB. Lois Wakeman has a nice article about why this generally presents a problem for users. She includes some good examples and usage scenarios (see the sidebar). Personally, I've seen every one of the scenarios she lists occur with users in usability testing. I've seen users with 6 or more windows open (unbeknownst to them) as well as users who accidentally close their only browser window thinking it was another popped-up window. She also covers the topic of popup (or pop-up) windows with a nice working example.

I'm still looking for some guidelines that talk about good uses of popups. If you know of any, please drop me an email and I'll post a link here. I think areas where popups MIGHT be useful are user assistance (e.g. help, glossary, etc.) and showing full-size images when browsing thumbnails.

Generally I avoid designs that open new windows for users. I've yet to find a web user who couldn't find the Back button or some other way back to the previous site when needed.

February 04, 2002

Faceted Classification Example
I've been trying to get my head around the concept of "faceted classification." This topic has been discussed a bit lately on the SIGIA list, and I wanted to understand how "faceted classification" differs from my previous understanding of "classification". FacetMap is an online example and tutorial on faceted classification. I guess I've never thought of web "taxonomies" or classification systems as using strictly mutually exclusive categories -- where items fit in only one location. Maybe this is why facets don't seem like a big revelation.

I think providing the user "multiple paths to success" is important whenever possible. For example, "Salt Products" might fit under both "Food" and "Industrial" categories since there is table salt and road de-icing salt. On an intranet, one user might think to look for an organization chart under "about us", while another user might look under "contacts". Where they look might depend on the task they are trying to accomplish at the time. I seem to run into examples like this all the time. Sometimes you can get away with supplementary navigation to get the user to the one spot where something resides (e.g. "Related Links"), but other times you have to actually place the content/item in multiple places.

So there is value in being able to distinguish between "faceted classification" and "hierarchy" or "taxonomy" -- "faceted classification" is a more specific term -- it lends clarity to the discussion. I also like the fact that conceptually "facets" should be determined by user needs. This helps focus the information architect on the user, rather than just on the content.
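To make the distinction concrete, here's a minimal sketch of the idea (the facet names and items are made up for illustration, echoing the salt example above): each item carries values along several independent facets, so users can reach it by more than one path, where a strict hierarchy would force it into a single pigeonhole.

```python
# A minimal sketch of faceted classification: each item carries values
# along several independent facets, so users can reach it by more than
# one path. (Facet names and items here are hypothetical.)

items = {
    "Table Salt":         {"use": "Food",       "form": "Granular"},
    "Road De-icing Salt": {"use": "Industrial", "form": "Granular"},
    "Olive Oil":          {"use": "Food",       "form": "Liquid"},
}

def browse(facet, value):
    """Return every item whose given facet matches the value."""
    return sorted(name for name, facets in items.items()
                  if facets.get(facet) == value)

# Two different "paths" both lead to Table Salt:
print(browse("use", "Food"))       # ['Olive Oil', 'Table Salt']
print(browse("form", "Granular"))  # ['Road De-icing Salt', 'Table Salt']
```

A strict taxonomy would demand a single parent category for each item; here the "path" is just a choice of facet, which is exactly what gives users those multiple routes to the same content.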

Read more about faceted classification at PeterMe:
- Innovation in Classification
- Fa-Fa-Fa-Fa-Fa-cets!