ia/recon 3/6: Dressing Up in Lab Coats

Jesse James Garrett's third installment in his IA Recon series. JJG flexes his journalistic prowess in a not-to-be-missed call for the IA field to mature beyond a constant dependence on research. My favorite quote: "But by fusing information architecture with research, we risk corrupting our process and undermining the very credibility we seek."

I think this part addresses some of the questions people had after part 2 (though perhaps not in the way they expected). Part 3 suggests that we will soon be facing a split in the community even more significant than that between advocates for the discipline and advocates for the role. The next debate will be over whether IA can (or should) aspire to a high level of scientific rigor. How do you see this playing out? What are the ramifications of moving away from our dependence on research?

Spreading ourselves thin

I like these thoughts:

    It's not always easy to tell whether a research study defines the problem or defines a solution. ... Testing cannot account for all the possible goals of an architecture or its users.

    If our discipline continues to develop along its current course, we will have developed an entire body of knowledge about information architecture that amounts to little more than a set of tips and tricks for beating the test.

Here's my perspective on how information retrieval system research is done. Researchers of users' information-seeking behavior observe and report problems. Someone hypothesizes a solution. Separate groups test and retest those solutions (validating or invalidating them) for effectiveness in resolving the problems identified in the initial research.

The above methodology, I think, is the academic model for testing IR systems, and it relates to what we do as IAs. I think, as perhaps you are suggesting, that the testing and research should be handled by usability experts. The solutions (hypotheses), in my opinion, should be developed by IAs as information and interface experts, and then retested by usability professionals. I'm suggesting separating testing from solution, both in terms of the people involved and in terms of the process.

I often wonder about the perception of IA as a field whose professionals are expected to execute tasks associated with usability testing. We spread ourselves thin trying to attain competence in so many areas. I would rather focus my skills on a narrower domain, such as information organization and interface design, than be a little bit of a usability professional, a little bit of a designer, a little bit of a librarian. It's this interdisciplinary approach we are expected to take that makes it difficult for us to define and refine a scientific approach to anything.


Your description very nicely summarizes the position on IA vs. usability that I've held for the last few years. There are a couple of caveats I'd like to add, though.

The disadvantage to having separate design and testing staff is that the IA then becomes completely reliant on the researcher to interpret findings. When the IA is directly involved, it's easier for that person to apply the research. The question then becomes whether this advantage outweighs the problems mentioned in my essay. (I don't think it does, but opinions vary.)

Also, for some organizations, it's simply a resource issue. If you're the lone IA, and you have a choice between doing the research yourself and doing no research at all, doing the research is probably the better choice. The important thing, to my mind, is knowing what the research can and cannot tell you, and striving to separate articulating the problem from articulating the solution.

This is an area where making the distinction between the discipline and the role (back where we started!) is really important. I think defining the discipline in a way that makes research central to its practice is a mistake; but I think our definition of the role should be flexible enough to allow IAs to conduct research as well (again, provided they are able to do it in a way that doesn't corrupt the IA work).

Research and Implementation = User Experience

I hear your concerns. I think many of us do guerrilla usability testing as part of our job (our role). If you are the sole IA and there is no funding for usability people, then that is what you do. In an ideal world, IA and Usability Tester would be part of a discipline called User Experience (or George Olsen's 4Is, or whatever the hell you want to call it). In an agency or consulting firm, the members of that discipline would sit physically close to each other: people in the IA practice would be peripherally involved in testing, and people in the usability testing practice would be peripherally involved in UI design and information organization tasks (the stuff of the IA role).

Here's the mini-taxonomy:

User experience (discipline)

  • Information Architecture/Interaction Design (Practice)
  • Usability Testing (Practice)

Discipline, role, practice

What's the difference between discipline and practice to you? I have tended to think of them as being synonymous, but maybe you've identified a distinction that I haven't.

At any rate, I am not so much interested in articulating the ideal team or organizational structure as I am interested in figuring out where we should concentrate our attention in order to advance the discipline (recon, get it? :) ). I'll leave the designation of roles to corporate organization and process experts.

What I meant by discipline & practice

JJG, maybe I'm just grasping at straws to try to propose the ideal relationship (as it exists in my head anyway)?

I used the terms discipline and practice to differentiate between academic training and role-oriented tasks. This may not fit my description of the relationship perfectly, but I was looking for a way to place the IA and the usability professional in the same room without having them do the same task. Discipline refers to what you are, based on what you've learned. Practice refers to what you do. That's not an altogether suitable definition of UX = IA + UT, but it's a try.

In any case, it was to suggest that perhaps we should concentrate less on usability testing tasks if possible, and that those tasks could feasibly be placed in a separate role with close ties to the IA. If it's feasible, hire a usability tester; if not, do light testing yourself and hire a usability consultant for the heavy lifting.

Research and Implementation = User Experience

One reason I separated user research and usability out of the 4Is -- (content) IA, interaction design, interface design and info design -- is precisely because you can do the 4Is without them. Should you? That's a different question.

But I think the key difference is that they're about design (or architecture, for those who prefer that term), rather than research, analysis or testing. On the front-end of the life cycle, that's one thing that separates us from user researchers and business analysts, who do similar requirements gathering, but generally don't get too deep into actually designing what the system (to be generic) does or what it looks like.

Likewise, it's better not to test your own work, but when it comes to analyzing test results, arguably those who understand design principles are likely to do a better job of it. That's a weakness I see in the usability community. As Alan Cooper said, the reason they keep asking whether something works is because they don't really know themselves. Is that an overstatement? Obviously yes. But there's an unfortunate amount of truth in it.

I think the comments about this being a craft are dead-on. Craft is a mix of art and science. And it's the creative leap of designing something that separates it from research, analysis, and testing. Those who advocate a white-coat approach seem to forget that even within the sciences, you have to create a hypothesis at some point. Our hypotheses are tangible. And just as in the sciences, both deductive and inductive approaches work.

The other point about craft is the idea of the craftsperson, whose experience brings intangible knowledge that's extremely hard to "quantify" into rules. Craft professions recognize the value of this experience and skill by ranking practitioners as apprentices, journeymen, or masters. Speaking of white coats, there's a good analogy to be made to the medical profession. Yes, there's a good amount of science that can be taught. Yes, you can use testing to help form a diagnosis or confirm one. But much of it comes down to the skill and experience of the doctor.

So in answer to JJG's question, I think our profession (whatever you want to call it) is best served by focusing on the design aspects of structuring interaction and information. User research and usability are tools used to support that design. In larger organizations, they may be separate people, in smaller ones, they may be a secondary role that we play.

'Course there's the larger issue of the perception that "usability" = "user experience," which is why I think IAs/IDs are expected to do usability -- which often really means usability testing, rather than the broader notion of "usability engineering."

The craft of the I's

Great discussion, George. I agree that separation of research and implementation is ideal. I like how you focus on the word craft and liken the development of one's craft to stages of professionalism. I see nothing wrong with focusing on design and letting usability be done by someone else. There is, of course, value in understanding the principles that come out of HCI and usability research, but the expertise that comes from experience in design and in architecting information structures is why I come to play every day -- not to do usability testing. My opinion is that usability is a component of the user experience, but usability testing results should be factored in post-design. Testing is what informs re-iteration and refinement of a design; it need not inform the initial concepts.

Usability specialists and designers

In a soon-to-be-published interview, Jakob Nielsen made an interesting analogy: the role of a usability specialist should be similar to that of an editor to a writer. The writer ultimately does the creating, but works with the editor -- both during story development (user research) and during editing (usability testing) -- to make the work better.

I think the analogy is more profound than Nielsen may have realized, since (having come from the writing field) I know that being an editor requires a good understanding of the craft of writing. I think much of the conflict we're seeing is due to a feeling that usability specialists don't have a good understanding of the fields they're critiquing. Certainly traditional HCI has focused on behavior and never really dealt extensively with form and content, which has led to some serious blind spots.

Now, given that the scope of user research and usability is much broader than an editor's, it's unrealistic to expect these folks to be experts in form and content. But I do think these professions need to expand their core knowledge to include a better understanding of the basic principles in the fields of form and content -- just as those who focus on design need to have an understanding of the basic ideas within user research and usability.

Let us know when that Nielsen

Let us know when that Nielsen article gets published. I'm sure a lot of people would like to read it.

what about occasionalism?

If we take the comparison to the SAT prep course, aka the "vocabulary and reading" course... I understand that jjg is providing a new application of the age-old complaint that such a class teaches to the test (and, by comparison, that IA solutions "teach to the research," right?), and that if the test or research heuristic is flawed, or just based on expectations, the results are flawed or destined to bear out those expectations. So just as high scorers on the SAT neither prove nor disprove the validity of its testing content or testing method, a high number of gaily efficient users can't completely validate the efficacy of a site architecture (however you define "efficacy").

But this is mainly a problem of purported validity, right? It's the problem of conflating a common goal (a high verbal score on the SAT) with the means of reaching that goal. We may posit, for simplicity's sake, two groups: one that wants to integrate vocabulary and comprehension concepts -- to really learn them -- thinking that this integration will boost their scores; and another whose scores are boosted by beat-the-test reviews that have nothing to do with learning. High scores may be the goal, but the method of reaching that goal hasn't been measured. And unless the underlying goals of integration versus beat-the-test are brought forward, there will be arguments over who is smarter, what the test shows, and so on.

Is this differentiation (to yank the metaphor back to site development) related somehow to problems measuring occasionalism with regard to site use? Are the heuristics applied to measure efficacy or validity of design just so flawed or one-dimensional that we map responses from research into answer sets and translate them into flawed, one-dimensional sites?

Maybe I'm wrong, and occasionalism is just another facet of a heuristic or survey that, once measured, will lead to an infinite number of new facets, each needing refinement.... But occasionalism seems related to this discussion, at least in light of the test-validity metaphor used.

I hate subject lines

I'm not sure what you mean by 'occasionalism' in this context, but overall I think we're in agreement. In particular, this...

Are the heuristics applied to measure efficacy or validity of design just so flawed or one-dimensional that we map responses from research into answer sets and translate them into flawed, one-dimensional sites?

...is precisely the concern I am raising. In short, I think something is lost when the qualities of an architecture are reduced to that-which-can-be-tested-for. What are those qualities? How can we develop approaches that address them? We won't have any way to know if we continue to rely on research to tell us where, as a discipline, we should direct our efforts.

the naming of parts

I find that the reductio ad nauseam (or is it absurdum? you decide) of good design to a pie chart ignores certain qualitative factors. Some of these may be ineffable -- maybe I'm thinking of a "soul factor" here that is more than a simple sum of the parts of a site.

I know someone is going to come up with a "soul" heuristic, and I understand that a host of measured and measurable design decisions can create a site's feel. But there's something else.

Hmmm it's the Pygmalion/Galatea metaphor. Stone transformed into warm flesh. IAs as lesser gods, transforming boxes and arrows into soul.

Okay, back to sane language: how do we address these qualities without killing as we dissect? Possibly by getting back to talking to individual users -- not as units of a group to query and plot into research documents that show their general proclivities -- but as individuals? Move away from the user persona (that strange amalgam) and get down with the living, breathing article?

personas and craft

My reading of the "persona" approach is that it IS based on interaction with the living, breathing article. It's a process for organizing all of the information I gather in such a way that it is easy to see the patterns and design to those patterns. Otherwise, all I have is a big pile of idiosyncrasies which are difficult to get my head around, let alone apply in any coherent way to my designs. I don't follow the Cooper approach to the letter. But I have found the whole concept of personas to be a powerful design tool - a kind of "role model", alongside task models, content models, and so on.

Regarding IA as a craft... One reason why this idea resonates with me is that it allows individuals to work in any number of styles and techniques, while advancing the craft as a whole by sharing examples and ideas. We don't have to spend so much time defining terms and deliverables and roles. Instead, we focus on sharing techniques and ideas, and our craft will evolve organically, taking on a clearer shape and identity over time.

personas vs persons

...But there can be huge differences between individual users and the many-to-one mapping. You have a body of users whose shared characteristics produce a persona, versus an individual user whose characteristics haven't been prodded into a pattern yet. I agree that the sharper outlines of idiosyncratic behavior aren't easily transformed into workable designs for large-scale applications. But periodic visits to individual users on the edges of the characteristic map might breathe fresh air into our designs.

And the thread goes on over at SIGIA

Follow the discussion on the SIGIA archives.