
Rashmi Sinha: Persona Creation for Information Rich Sites

Christina pointed to Rashmi Sinha's weblog entry Creating personas for information-rich websites, in which Rashmi proposes a methodology for creating personas that uses statistical analysis of user needs and suggests that accuracy is in fact important to persona design. The suggestion about accuracy runs contrary to the tenet in Cooper's The Inmates Are Running the Asylum that precision is more important than accuracy. From Rashmi's article describing the methodology, this statement seems important to me:

    Personas for information-rich sites must incorporate input about ways in which people will use complex information domains.
The methodology:

    (a) Use survey techniques (also used in market segmentation).
    (b) Focus questions around user needs rather than what they simply like / dislike.
    (c) Identify constellations of needs rather than clusters of users.
    (d) Use this information as the kernel to build personas around.
This was a great find for me. I'm currently working with user surveys and usage statistics to describe the use of a digital library (traditional library measurement). I'm also doing more traditional surveys of user needs to create design personas (design methodology). What I was hoping to do was use the information-use data to inform the persona development. That way I can provide accurate descriptions of users through personas, which I know is not exactly the Cooper way. There's just a great deal of pressure in my organization to be sure that user behavior that is currently high volume (popular) is not dismissed in any redesign of our products and services. I'm gravitating toward Rashmi's model now because of a requirement of her methodology that I agree with, and that should help bolster support for personas in my organization, should I use it:

    The method should help ground the personas in reality (common critique of personas is that they are based on the designer’s imagination).
Great concepts. I just wouldn't want to break out SPSS to do this, though. I hated statistics in grad school. I suppose identifying constellations of needs is simple enough.
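Step (c) above -- identifying constellations of needs -- can at least be sketched without SPSS. Here's a minimal illustration using entirely made-up survey data and simple pairwise correlation in place of real factor analysis; the need names and ratings are invented:

```python
# Sketch of "constellations of needs": group the need *variables*
# (not the respondents) by how strongly their ratings co-vary.
# All data is hypothetical; a real study would use factor analysis.
from itertools import combinations
from statistics import mean, pstdev

# Rows = respondents, columns = rated needs (1-5 scale), made up for illustration.
needs = {
    "browse_by_topic":   [5, 4, 5, 2, 1, 2],
    "known_item_search": [1, 2, 1, 5, 4, 5],
    "related_items":     [4, 5, 4, 1, 2, 1],
    "exact_citation":    [2, 1, 2, 4, 5, 4],
}

def correlation(xs, ys):
    """Pearson correlation of two equal-length rating columns."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Two needs belong to the same constellation if their ratings correlate strongly.
THRESHOLD = 0.7
constellations = []
for a, b in combinations(needs, 2):
    if correlation(needs[a], needs[b]) > THRESHOLD:
        constellations.append({a, b})

print(constellations)
```

With this toy data, the browsing-oriented needs pair up and the lookup-oriented needs pair up -- two constellations, each of which could seed a persona.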

The enemies of usability

Peter Morville calls for a unified front in the UX community to take on the Enemies of Usability in his latest Semantics column.

Ranganathan

Is it just me, or does anyone else find it interesting that everyone's so interested in Ranganathan lately? Seen in the news aggregator in the last few weeks:

  • Ranganathan for IAs -- Facet analysis is the term that everyone's dying to use. But the basic idea of facets can be grokked in about 2 minutes.
  • Peter V pointed to Fred Liese's article, "Using Faceted Classification to Assist Indexing", which is one of the best introductions to facet analysis and its practical approach in indexing that I've seen next to Louise Spiteri's articles. Liese compares enumerative to facet-based classification, defines facets in simple terms and provides very practical tips for developing your facets and using them in indexing.
  • Prolegomena to Library Classification -- It's even more interesting that people are reading this sort of material. I wasn't surprised to find that Peter was reading it; he appears to be making his way through a lot of the classification literature. I wonder how people might apply what they learn from examining Ranganathan's ideas around colon classification. I think the general idea here has to do with the flexibility of classification using his system rather than a rigid system like the Dewey Decimal system. I also think the concepts behind his system are better suited to post-coordination of classes.
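Here's the 2-minute version of the facet idea as a toy sketch, with invented facet names and documents: each item carries values along independent facets, and classes are composed at retrieval time by intersecting them -- the post-coordination mentioned above, as opposed to a pre-enumerated Dewey-style class:

```python
# Toy facet-based, post-coordinated classification: documents carry
# independent facet values; "classes" are built on the fly by
# intersecting facets. Facets and documents are invented examples.
documents = {
    "doc1": {"subject": "medicine", "form": "bibliography", "period": "19th century"},
    "doc2": {"subject": "medicine", "form": "dictionary",   "period": "20th century"},
    "doc3": {"subject": "law",      "form": "bibliography", "period": "19th century"},
}

def select(docs, **facets):
    """Post-coordinate: combine any facet values into a class at query time."""
    return sorted(
        doc for doc, values in docs.items()
        if all(values.get(f) == v for f, v in facets.items())
    )

print(select(documents, subject="medicine"))
print(select(documents, form="bibliography", period="19th century"))
```

No class like "19th-century bibliographies" had to exist in advance; it falls out of the intersection.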
A Taxonomy Primer

I came across Amy Warner's article "A Taxonomy Primer" on her consulting site. Should be a helpful primer for people being introduced to the concepts associated with using thesauri.

Alertbox: Making Flash Usable for Users With Disabilities

NN/G report summary on making Flash usable with MX.

    Flash designs are easier for users with disabilities to use when designers combine visual and textual presentations, minimize incessant movement, decrease spacing between related objects, and simplify features.
Storytelling in web design

Victor and Joshua are both talking about storytelling as a method for communicating possible actions or paths when interacting with web sites. Victor mentioned an IBM seminar he attended about storytelling. I haven't done much reading in this area and would certainly like to learn more if I can find the most salient literature. I did find Curt Cloninger's A List Apart article, A Case for Story Telling, interesting as well. Cloninger makes the case for considering the narrative possibilities when designing for the web as a communications medium. He's right: web sites are often not just databases, and the design should consider aspects of human experiences with sites, not merely transactional database interactions. I like Victor's process of mapping actions or attributes of the narrative to interactions with the site. Interesting. More obvious, I guess, is developing the characters, plot, setting, etc., and flowing that into elements of the design process -- personas/characters, scenarios.

Google needs people

Peter Morville discusses why Google Needs People and people need Google.

    The reigning emperor of search caused a stir recently by launching a beta version of Google News that features integrated access to 4,000 continuously-updated news sources. Two lines on the main page were responsible for much of the ruckus:

    "This page was generated entirely by computer algorithms without human editors.

    No humans were harmed or even used in the creation of this page."

Truth be told, as Peter relates in his article, without humans Google's results wouldn't be so relevant, and undoubtedly its news feeds would suffer as well:

    Similarly, the potential of Google News lies in its ability to leverage the distributed intelligence of thousands of editors and reporters. No editors. No reporters. No Google News. Without the continuing engagement of humans, Google is dead. End of story.
And truth be told, most people probably don't care how Google works its wonders, as long as it continues to work as well as it does. What would make a lot of bloggers happy, I'm sure, is if Google went the extra step of making its news results available via some API or RSS syndication. I know Julian Bond did some playing with that, which I'm already using in the ia/ news feeds, but how long can it last? I'm sure Google doesn't want to hide its services in such a way.

Gelernter on KM

There's a very good interview with David Gelernter in CIO Insight, in which Gelernter talks about what knowledge management means in terms of computing experiences.

LIMBER project

On ia-cms, Brendan pointed out the LIMBER project. LIMBER stands for Language Independent Metadata Browsing of European Resources. The project, concerned with the exchange of multilingual metadata, particularly in the Social Sciences, has proposed an RDF schema for thesauri.

A Thesaurus Interchange Format in RDF (delivered at the Semantic Web conference 2002)
http://www.limber.rl.ac.uk/External/SW_conf_thes_paper.htm

RDF Schema for ISO compliant multi-lingual thesauri
http://www.limber.rl.ac.uk/External/thesaurus-iso.rdf
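The kind of structure such a schema encodes can be sketched informally -- here as a Python dictionary with invented concept IDs and terms, just to show the language-independent-browsing idea. This is not the actual LIMBER schema, which is RDF:

```python
# Informal sketch of a multilingual thesaurus: each concept carries
# language-tagged preferred labels plus broader/narrower links, so
# browsing is independent of any one language. Terms are invented.
thesaurus = {
    "C001": {"prefLabel": {"en": "Employment", "fr": "Emploi", "de": "Beschäftigung"},
             "narrower": ["C002"]},
    "C002": {"prefLabel": {"en": "Unemployment", "fr": "Chômage", "de": "Arbeitslosigkeit"},
             "broader": ["C001"]},
}

def label(concept_id, lang):
    """Same concept, different display label depending on the user's language."""
    return thesaurus[concept_id]["prefLabel"][lang]

print(label("C002", "en"), "/", label("C002", "fr"))
```

The relationships (broader/narrower) attach to the concept, not to any particular term, which is what makes the metadata exchangeable across languages.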

Wodtke on being T-shaped

Christina posted a wonderful essay titled "Leaving the Autoroute" on B&A about the importance of being T-shaped -- having the knowledge and understanding of a generalist in your industry along with the wisdom and experience of a specialist in your particular discipline (or perhaps within an area of your discipline). I couldn't agree more that this is what makes a thoughtful team member and producer on a project.

User-Centered Design

This month Digital Web Magazine will focus on the theme of User-Centered Design. Kicking things off this week is an interview with Peter Merholz and Nathan Shedroff on User-Centered Design.

Knowledge Management: When Bad Things Happen to Good Ideas

Darwin Magazine is running a story on how a good idea -- knowledge management -- is dragged down by its execution (poor software, poor implementation). A good read for seeing how your hard work can be totally hijacked by (and is currently getting a bad rap from) a number of peripheral circumstances.

[The address from the link from above: http://www.darwinmag.com/read/040101/badthings_content.html]

The Importance of Being Granular

Roy Tennant has a pretty good article in Library Journal on how granularity affects retrieval and impacts person-hours in Digital Library collections. Don't get turned off by the library lingo. The message is applicable to non-library collections.

The difficulty of categorization

Peter V pointed to Philip C. Murray's KM Connection article, The difficulty of categorization, which discusses the implications of using categorization in the enterprise. In it, he cites Bella Hass Weinberg's 1996 article from the ASIS Conference Proceedings, Complexity In Indexing Systems -- Abandonment And Failure: Implications For Organizing The Internet, to raise the issue of the difficulty of classifying documents from a large corpus of data. Weinberg's article discusses the issues in classifying the Internet. Murray's position is that a corporate body's "three ring binder of knowledge" is not a massive data source, and so is not necessarily subject to all of the difficulties that Weinberg mentions. He states,

    I also wonder whether classification experts simply cultivate the perception that classification is extremely difficult. Even manual classification can be done quickly, if my experience with professional indexers is any indicator. It's not unusual for a professional indexer to generate a comprehensive, high-quality back-of-the-book index for a new title in less than three weeks.
He goes on to discuss the advantages of faceted knowledge access at a high level. What I find problematic with arguments that classification is not so hard is that there are so many variables at play in classification of any kind. These variables include the definition of the domain, the size and scope of the indexable corpus, and the specificity of indexing, to name just a few. Providing facets of classification adds another level of complexity that begs for some definition of guidelines as well.

But I wonder: are most organizations just concerned with indexing a "three ring binder of knowledge," or are they also concerned with indexing all of the published material -- technical documents, memos, press releases, etc. -- of the organization? Are they concerned with indexing at the level of the document, or at a more granular level, indexing concepts within the document? There are a lot of high-level articles floating around lately that pay lip service to the value of classification. What I'm interested in are articles that actually discuss the pain of implementing classification processes within large corporations. If you have citations for any good examples/case studies, please share them!

As part of an information services organization in a large corporation, I've seen the great distances my colleagues have had to go to make an enterprise-level taxonomy work for our customers, who have been the catalysts and partners in its development and use. Over the 4 years that I've used our taxonomy on the back end as an indexer and as a site developer -- but not as a subject matter expert creating/defining the terms and relationships of the taxonomy -- I have to say that there is not much about classification at the enterprise level that seems very simple to me. It is very clear that representing knowledge (automatically or manually) is never simple to do, and even when done right, will never be right all of the time and will never serve everyone. Concepts change, indexers represent knowledge differently, and environmental factors affect priorities and sometimes shift the language and understanding of your subject matter. It's all very slippery. That being said, without classification it is clear that knowledge retrieval is hampered and the bottom line is affected. And I guess that's what makes information professionals and information retrieval systems necessary.

EII (Enterprise Information Integration)

InfoWorld has an interesting article about the EII space, which is all about aggregating information from disparate systems by serving data as XML. The Information Aggregation article describes EII as the middleware that can cull data from multiple systems and repackage it as XML for consumption, for instance in consumer-facing applications. The article also covers the key players trying to establish a presence in this space.
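The EII pattern the article describes can be sketched roughly as follows. Two stubbed "systems" stand in for real back ends, and all of the field names are invented for the example:

```python
# Minimal EII sketch: pull records from two disparate sources
# (stubbed as Python data here) and repackage them as one XML
# document for a consuming application. Field names are invented.
import xml.etree.ElementTree as ET

crm_records = [{"id": "42", "name": "Acme Corp"}]
erp_records = [{"id": "42", "balance": "1200.00"}]

def aggregate(crm, erp):
    """Join the two sources on a shared key and emit a single XML view."""
    root = ET.Element("customers")
    balances = {r["id"]: r["balance"] for r in erp}
    for rec in crm:
        cust = ET.SubElement(root, "customer", id=rec["id"])
        ET.SubElement(cust, "name").text = rec["name"]
        ET.SubElement(cust, "balance").text = balances.get(rec["id"], "0")
    return ET.tostring(root, encoding="unicode")

print(aggregate(crm_records, erp_records))
```

The middleware products in this space do the same join-and-repackage step, just against live systems instead of stubs.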

KM on a budget

(Is KM an allowed topic here?)

Knowledge management has been knocked around in my organization for so long, with so little understanding of what KM is. On the one hand, there is the belief that everything that transpires in your business is an archivable knowledge asset -- hard-copy ephemera such as scribbles on paper napkins or meeting leave-behinds; verbal ephemera such as telephone conversations, chats with colleagues at conferences or at dinners; electronic documents such as email and binary files. In reality, I haven't seen a tool that delivers on the promise of capturing all this transferable knowledge and then sharing it easily, though I have heard the promises from vendors over the last 5 years. Even as the term recedes from everyday parlance in large right-sizing organizations such as my own, the need for knowledge management is still pressing. Which brings me to The 99 cent KM solution, David Weinberger's short essay on KM World proposing that low-budget tools such as email list applications and weblogs will get you far.

I tend to agree that these tools may be sufficient for a lot of small organizations. My understanding is that knowledge management is about being able to communicate, store, and retrieve knowledge; KM is tool- and technology-agnostic. In these tight-budget days, I still hear the term kicked around a lot, but I hear less and less about initiatives to research a technology to support KM. I don't know that the low-budget tools are sufficient to support KM for large organizations, but they certainly seem sufficient for creating some knowledge sharing until the killer KM app arrives, no?

Doing a Content Inventory

Jeff Veen's latest in the Adaptive Path essays is Doing a Content Inventory (Or, A Mind-Numbingly Detailed Odyssey Through Your Web Site), in which he talks about the process of taking stock of client data, mostly as a prerequisite to building/deploying a CMS within an organization. Includes a link to download the Adaptive Path content inventory template. Related to this article is Janice Crotty Fraser's article in Web Techniques last year.
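The mind-numbing part of an inventory is usually done in a spreadsheet, but the row-collecting step can be sketched in a few lines. This toy version parses stubbed pages (rather than fetching a real site) and records the sort of columns an inventory tracks -- URL, title, rough word count; the pages themselves are invented:

```python
# Toy content-inventory pass: walk a set of pages and record
# (url, title, word count) rows. Pages are stubbed as strings here;
# a real pass would crawl the live site.
from html.parser import HTMLParser
import re

class TitleParser(HTMLParser):
    """Collects the text inside the <title> element."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

pages = {
    "/about.html": "<html><head><title>About Us</title></head><body>Who we are.</body></html>",
    "/faq.html": "<html><head><title>FAQ</title></head><body>Questions and answers.</body></html>",
}

inventory = []
for url, html in sorted(pages.items()):
    parser = TitleParser()
    parser.feed(html)
    # Strip tags, then count words as a rough content-size measure.
    words = len(re.findall(r"\w+", re.sub(r"<[^>]+>", " ", html)))
    inventory.append((url, parser.title, words))

for row in inventory:
    print(row)
```

From here the rows go straight into the spreadsheet; the odyssey is in reviewing them, not collecting them.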

Personas

Peter pointed to some well-written personas for accessibility that Mark is writing on Dive Into Mark.

Bringing your personas to life in real life

Elan Freydenson in Boxes and Arrows.

    The way you communicate the personas and present your deliverables is key to ensuring consistency of vision. Without that consistency, you’ll spend far too much time arguing with your colleagues about who your users are rather than how to meet their needs.
Content Organization Methods Comparison

I wrote a short comparison of a few IA tools: authority lists, thesauri, and faceted approaches. I kept it pretty simple so that it could be given out to clients. It also includes "full-text search" as a method, since my client was in favor of using just search as an interface into 4000-5000 content items. I was trying to make the case that additional work on developing a thesaurus (at least) would improve the site in a number of ways.

Any comments are welcome. Word .doc, about 90k.

Content Organization Methods
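To make the search-only argument concrete, here's a tiny sketch of why a thesaurus improves on bare full-text search. The documents and thesaurus entries are invented for the example:

```python
# Plain full-text search misses documents that use a synonym; a
# thesaurus mapping entry terms to preferred terms catches them.
# Documents and USE references below are invented examples.
docs = {
    "d1": "renting a car at the airport",
    "d2": "automobile hire rates compared",
}

# Thesaurus: entry term -> preferred term (USE references).
thesaurus = {"car": "automobile", "hire": "rental", "renting": "rental"}

def full_text(query):
    """Literal match only: the query word must appear in the text."""
    return sorted(d for d, text in docs.items() if query in text.split())

def thesaurus_search(query):
    """Map both query and document words to preferred terms before matching."""
    preferred = thesaurus.get(query, query)
    hits = set()
    for d, text in docs.items():
        terms = {thesaurus.get(w, w) for w in text.split()}
        if preferred in terms:
            hits.add(d)
    return sorted(hits)

print(full_text("car"))          # only the document with the literal word
print(thesaurus_search("car"))   # both documents, via the preferred term
```

With a few thousand content items, that vocabulary-control layer is exactly the "additional work" the comparison argues for.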
