ia/recon 2/6: Tribal Customs

Tribal Customs contrasts IAs with editors: both are responsible for structure, but IAs focus mostly on information retrieval, while editors have a broader range of possible goals for their publications. Don't miss it.

Thanks Jess. I am curious

Thanks Jess.

I am curious to know if this resonates with anyone else out there who has worked in the content business. Or even anyone who hasn't.

Why don't others test?

jjg, this second part has a great cliffhanger. The more IA and usability testing and analysis I've been doing, the more I'm seeing that, while these techniques are useful for websites and computer-based applications, non-techie products could benefit from them as well. Has anyone ever conducted a usability test on the index of a reference book? Would reorganizing the classifieds section make it easier for people to find what they're looking for?

While I haven't worked in the content business, in school I spent a good deal of time writing, editing, and doing layout for newspapers and publications. In my student newspaper experience, the structure was organized according to:
+ what the editor thought was most important
+ how well-written the story was
+ how timely the story was
+ how unique/interesting the story was
+ how long the article was
+ whether or not there were pictures or graphics that went with the story
+ how much space there was available
+ how much space was taken up by ads

Obviously this is not user-driven content or content organization. Visual hierarchies were developed mainly according to article length and ad placement, with little regard for what readers wanted or what would make the content most useful to them.

This brings us back to what you said at the end of part 2, which leaves me in a quandary. Why does the magazine/newspaper editor rely only on judgement, skill, and experience, while an IA or web designer who relied solely on those criteria would be chastised as an impediment (or, more accurately, an antithesis) to the user-centered design process?

I'm guessing this will be answered and explained in part 3, and I'm very much looking forward to seeing if this will suggest an increased reliance on “professional judgement” for IAs and less emphasis on becoming “conduits through which research findings become structures.” (a.k.a. Less testing and more “leave it up to the IAs — they know what they're doing”)

Good question.

You may well be right that such things as indexes and classified ad sections could benefit from some user testing. I would, however, note that indexing has gotten along just fine as a professional discipline for many years without anyone perceiving a serious deficiency due to lack of user testing.

Also, indexes and classified ads are both examples of structures for information retrieval, and one of the points I'm making in Part 2 (and, in a different way, in Part 3) is that while testing can easily be applied in this limited context, our discipline will inevitably find itself concerned with a much broader range of problems.

This, I think, answers your question of why the editorial approach differs from ours: Because they have a larger set of needs, only a few of which testing can address -- and pretty soon we will find ourselves in the same position.

maturity of the Editorial discipline

you wrote: "as a professional discipline for many years without anyone perceiving a serious deficiency due to lack of user testing."

I think the key is "many years"...last week I was talking with some colleagues about mental models, and used magazines as an example of an 'established' conceptual construct for the reader... I think a large part of the reason an editor can rely on her professional judgement is because she can rely on decades of trial-and-error evolution in print.

I don't think you're saying that there's a direct correlation, though - just that as we develop as a field we need to rely less on doing research for everything up front, and just be able to make good decisions. That's a big part of my desire for a "language of user experience" that I've mentioned a few times [I think I may have even droned on about it at the last CHI reception, Jesse ;) ] Many others are looking at similar challenges - how can we make informed design decisions that are right, without resorting to user testing to reassure ourselves that we made the right choice?

I'm looking forward to part 3.

cheers,

Jess

splitting hairs

The context might classify the historic writer/reader relationship as "communicating a message to an audience" or as one of several classical forms of rhetoric, among other "information retrieval" structures.

I see the testing issue as moot, since "trial and error" and "hard-won experience" seem to suggest that the idea of reader response, and even the editing process itself, IS a form of usability testing.

Maybe I'm just fiddling with the semantics, but as an IA with a broad writing and editing background (and alas, these days my resume includes these under "content development," a term I find barren), my experience has been that the goals of both CD and IA are much the same, and that the scale of focus is the main differentiator of the job title.

The structure of content

I'm attempting to understand what the relationships are between IR and IA in terms of what you are describing, JJG. Bear with me as I muddle through.

IR is also about communicating
Most IR systems rely to some extent on human expertise to achieve their task of making data accessible. Whether you're finding documents in a database or search engine or finding books on a topic in a library with the help of a reference librarian, some human mediation probably helps to get you there. For databases, the human mediation comes in the form of the bibliographic record structure that describes the books/videos/etc. and the subject headings attributed to those entities based on some standard like LCSH. For back-of-the-book indexes, some human (possibly with the help of a machine) captures the empirical data (names of people, places, etc.) and abstract concepts (subjects) in that book. In search engines, someone has defined the criteria for an algorithm that pushes certain (relevant) documents up in your search results.
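
To make that mediation concrete, here's a rough sketch in Python; the records, fields, and subject headings are invented purely for illustration and aren't drawn from any real catalog system:

    # Minimal sketch of human-mediated retrieval: records carry subject
    # headings assigned from a controlled vocabulary, and lookup happens
    # against those headings rather than against the full text.
    # All data below is made up for the example.
    from collections import defaultdict

    records = [
        {"title": "Information Architecture for the World Wide Web",
         "subjects": ["Web sites -- Design", "Information organization"]},
        {"title": "The Art of Indexing",
         "subjects": ["Indexing", "Information organization"]},
    ]

    # Build an inverted index from subject heading to record titles.
    subject_index = defaultdict(list)
    for record in records:
        for heading in record["subjects"]:
            subject_index[heading].append(record["title"])

    # A query phrased in the system's controlled vocabulary, much as a
    # reference librarian might help a patron phrase it.
    print(subject_index["Information organization"])
    # -> ['Information Architecture for the World Wide Web', 'The Art of Indexing']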

I think some would argue that IR is about eliciting knowledge -- thus the focus for indexers on knowledge representation. Librarians, for example, help you articulate your information need as a question. Then they help you rephrase that question in the language of whatever system they are using, whether it's Dewey or something else. Basically, they're giving voice to the user in the form of questions, and then helping give voice to the abstract concepts in things like books by communicating what they're about in terms of a system of classification. At least that's my understanding of what libraries do.

Communicating knowledge to/from the system (interfacing)
I think my understanding of IR maps to what some of us do as IAs. We attempt to predict (or test to predict) the kinds of questions that might be posed to a corpus of data and then attempt to frame the points of access we predicted in a structure (faceted classification, taxonomy, flat structure based on index terms, whatever). We try to communicate knowledge to/from both sides of the system. This is the function of the interface, I think.
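
As a rough sketch of what I mean by framing points of access, here's a toy faceted lookup in Python; the documents and facet names are assumptions made up for the example, not taken from any particular system:

    # Toy faceted classification: each document is described along a few
    # facets, and a question is answered by combining facet values.
    # The documents and facets are invented for illustration.
    documents = [
        {"title": "Usability test report", "format": "report", "topic": "usability", "year": 2002},
        {"title": "Site taxonomy draft",   "format": "draft",  "topic": "taxonomy",  "year": 2002},
        {"title": "IR algorithms survey",  "format": "report", "topic": "retrieval", "year": 2001},
    ]

    def facet_query(docs, **facets):
        """Return documents matching every requested facet value."""
        return [d for d in docs
                if all(d.get(name) == value for name, value in facets.items())]

    # One of the questions the structure anticipates being asked of the corpus.
    print(facet_query(documents, format="report", year=2002))
    # -> [{'title': 'Usability test report', ...}]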

I wonder what you are going to propose will be the role of IA in the scheme of things in the future. We seem to move back and forth in a world of requirements, support from IR tools, support from user testing and feedback, and our ability to synthesize all of these into usable information structures for access to data. Even when we rely so much on user testing, we are still the middle man/woman making decisions about structure at the end of the day. We're like that content editor. What can we do to harmonize the world of traditional IR and the user-centered approach of HCI/Usability as IAs? Are we already doing what we need to be doing? I think to some extent yes. Maybe I am too task oriented and not big picture enough to see what's coming.

I look forward to hearing what's been brewing in your mind about directions for IA in light of all of this.

-Mike

IR and communication

You are absolutely right that information retrieval is also about communicating. However, not every communication problem can be expressed in IR terms, and this is where I think IR begins to fail IA. Narrative, for example, is a kind of architecture. But it's difficult for me to imagine adapting information retrieval techniques to produce effective narratives. Rhetoric, as mentioned above, is another good example of a form with specific architectural needs that are ill-served by IR.

I see

I see where you are going. Good points. Do you have specific examples where IA is applied to narrative and rhetoric? You'll have to forgive me, since my narrow scope of information spaces has been limited to document repositories/databases and collections of those kinds of corpora. I'm interested in seeing how IAs structure these kinds of spaces.

-Mike

Narrative and rhetoric

Narrative and rhetoric have been the subject of a great deal of investigation in the academic hypertext community. But these investigations have tended to be so theoretical that they have not yet found their way into common IA practice. The Eastgate site is a good starting point for learning more about this kind of stuff.