In the Discovery phase of an expert evaluation, you go through the interface looking for usability problems.

The first step of the Discovery phase is to evaluate how easily users can orient to the website – their first impressions, really.

The second step of the Discovery phase is to walk through the interface, getting a feel for how easy or difficult it is to do specific tasks.

In this second step, you’re looking for major bloopers that users will stumble on – the big problems that you can lose sight of once you get into more detailed analysis. This step should be relatively quick and free-flowing – it should probably take only minutes.

Here’s what to do:

  • Identify the key user tasks
  • Try to do each task, one by one
  • Quickly note down major problems or points of confusion that you encounter
  • Don’t think about solutions (want to know why?)
  • Don’t get bogged down in details – just keep going.

And that’s it. Quick and free-flowing.

Expert evaluations should separate the discovery of usability problems from the analysis of those problems and the subsequent development of solutions. It makes for clearer thinking.

The first phase, the discovery of usability problems, is a two-step process:

  1. Getting a feel for the interface
  2. Inspecting the interface in detail.

I want to flesh out the first step a bit – getting a feel for the interface.

Imagine you’re evaluating a website. In the first step you put yourself in the mind of a user coming to the website for the first time.

“What can I do with this?” is the users’ key question. Underlying it is a hunt for value – how does this benefit me?

More specifically users will:

Anticipate hopes and fears

  • Before going to the website, what functionality and content will users be expecting?
  • What will their hopes and fears be?

Form first impressions

  • Will their first impressions match these expectations?
  • What else will they think when they look at the interface for the first time?

Orient to goals

  • What is the website all about – is the proposition clear?
  • Are there 3 or 4 clear calls to action that are obvious starting points?
  • Are these calls to action appropriate – tasks that users really need or want to do?

Interact opportunistically

  • Do the top level pages help users work out what it’s all about?
  • What happens if they play with interactive elements (e.g. videos, carousels, maps)?

These are questions that you can ask in the first pass of the interface. In subsequent phases you inspect the interface in more detail.

Project teams need time to digest and reflect upon your usability testing results.

I recently ran a workshop with a project team in which I presented usability testing results. That presentation was the first half of the workshop. In the second half we worked on specific design changes. Both sessions went very smoothly and we had really positive outcomes from our efforts.

But one thing that struck me was the discussion that the project team needed after I’d presented the usability testing results in the first half of the workshop. They needed time and space as a team to form their own shared understanding of the results they’d just seen. They didn’t want to jump straight into hard-core design-thinking. They needed time to digest and reflect on the findings – to ponder what the key issues were and mull over tricky issues.

And they also needed time to check the health of the team, as if saying “Wow, some of that was tough, are we ok… yes, good, let’s move on then.”

So present your results, and then allow plenty of time for the project team to digest and discuss the findings together. Let them come up for air.

Instead of using the mean to report average task times from usability evaluations, you should use the median or geometric mean, according to new research by Jeff Sauro in Denver, USA.

The most common way to find the ‘average’ task time from a set of users is to calculate the mean. But average task times based on the mean are too high because there are usually one or two users who run into problems on a task and skew the results.

For example, if you have collected the following task times from five users:

100, 101, 102, 103 and 104 seconds, then the mean is 102.

But if a sixth user ran into problems and took 200 seconds, the average task time becomes about 118 seconds. However, 118 seconds is not a true reflection of how users as a whole performed – all five of the original users were below this ‘average’!

It is generally safer to use the median. Here the median is 102.5 for all six users – a much better measure of how the users performed.

But Jeff Sauro’s research shows that you’ll get even more reliable averages if you use the geometric mean (in Excel it’s the GEOMEAN function). With the six users above, the geometric mean is about 114, but for real usability task times (as opposed to the trivial data above) Jeff has shown that the geometric mean provides better estimates of the true average than even the median (and certainly better than the mean).
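To see how the three averages compare, here’s a minimal Python sketch using the standard-library `statistics` module (its `geometric_mean` function requires Python 3.8 or later), run on the six task times from the example above:

```python
from statistics import mean, median, geometric_mean

# Task times (in seconds) for the six users in the example above
times = [100, 101, 102, 103, 104, 200]

print(f"Mean:           {mean(times):.1f}")            # pulled upward by the slow user
print(f"Median:         {median(times):.1f}")          # robust to the outlier
print(f"Geometric mean: {geometric_mean(times):.1f}")  # Sauro's recommended estimate
```

The mean comes out at about 118.3, the median at 102.5, and the geometric mean at about 114.1 – the geometric mean damps the influence of the slow outlier while still using every data point.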

By choice I use Saros or Fieldworks in London for recruiting participants for user testing or research.

I’ve used them both for years and they typically respond well to a brief and keep me well informed of how recruitment is progressing.

Other people have mentioned to me that they use:

Let me know how you get on with any of them.

Silver projects are important to your company, but not as mission-critical as Gold projects.

Silver projects may make up the bulk of your projects and include those which:

  • Do not affect your customers directly
  • Do not affect primary personas
  • Affect secondary tasks or lesser-used functions
  • Address low-level interface issues rather than structural issues that really impinge on the user experience

Examples of Silver projects: Staff intranet, advanced functions used by expert users, account maintenance pages, tidying-up the visual design of a few pages.

While these projects need serious usability support to get right, they are not as business-critical as Gold projects.

There are two common approaches to supporting Silver projects:

  1. Put mid-weight or less-experienced staff on the project (saving your best practitioners for Gold projects).
  2. Provide the critical structure of the design, for example, sketched paper prototypes. Then allow other developers to design the detail and have your trained usability practitioners review it and give guidance. This puts usability staff in control of critical decisions without taking all their time and allows coaching of less experienced developers.

On low-priority Bronze projects (e.g. simple pages that impact few users, or where it’s difficult to go far wrong) you should provide an evaluative function. This means that non-usability specialists do the design work, while your team provides advice and guidance during design iterations. In this way, you can support an entire project with just a few days from one practitioner.

As the manager of an internal User Experience team you need to prioritise the projects to which you assign your staff. If you give a bit of help to all projects you’ll spread your staff too thinly – and the business-critical projects will suffer.

Notepad showing who’s working on which Gold projects.

Prioritise your projects into one of three levels, Gold, Silver, or Bronze.

How to identify Gold projects

Gold projects:

  • Are mission-critical to the business – they will make a big difference to your company’s ongoing success
  • Work on tasks or functionality needed by lots of users
  • Fix big problems that really hit the bottom line: usability headaches that hit lots of users lots of times
  • Have high stakes – improving success rates, satisfaction, or efficiency will bring in big money or mitigate severe risks.

Examples of Gold projects: Developing personas, prioritising functionality on the home page, product search and comparison tools, removing barriers in the check-out process.

Put your best staff on Gold projects. If you’re understaffed and quality and timelines are critical, get top-notch outside contractors in.

Next we’ll look at Silver projects – the important but not critical ones that make up the majority of your projects.

It’s funny, reflecting on what you learned on a project, what seemed to work, what felt right.
Sample from an affinity model showing post-it notes with data.

Yesterday I finished a five-week user research project in the employment and recruitment area, working alongside another researcher, Ash. I thought back to how we’d tried hard to establish a rapport with the client at that first meeting, listened to his problems, his anxieties, his hopes.

I seem to go through the stages of user research almost without thinking, guided by years of experience, I guess. Interview the client, collect all related documents, find out if anyone else in the organisation is working on a related area, speak with internal staff – all the initial wide-lens work to gather all the thoughts, issues, and materials of potential relevance.

In the next stage of our research, with a sharper focus, we chose our ‘real’ interview questions by hypothesising and proposing issues that could be explored. We interviewed 29 participants. The whole process is one of clarification, of sniffing out data like bloodhounds, of ensuring that the data gathered is grounded in people’s experience. And on it goes. Immersive and satisfying.

But why so satisfying? I believe it’s because we build a model of reality, one that has a coherent meaning to us, something we can believe in, something we can share with others.

More than anything, though, you get that uniquely satisfying ‘Aha!’ moment when all the parts, concepts, and relationships you’ve been working with finally form a Gestalt – a coherent whole emerging from what is already known.

One of the oldest usability principles tells you to “Speak the user’s language”.

Are you getting increasingly annoyed at our train companies’ idiot-speak?

In the past few weeks I’ve been to Scotland, Hull, and Sheffield. On each trip a poor “member of the cabin crew” parroted out some pre-scripted drivel. Please have your travel documents ready. My what? Please refrain from smoking in the vestibule-ettes. The what-ettes? Your next calling point is Boxford. The what-point? For your convenience today’s cabin crew are muppets who will be serving a vast array of beverages… and on and on it goes.

Their sterile idiot-speak creates a gulf of affinity between us and them. Words matter – they create meaning and meanings give rise to emotions. In this case negative ones.

We sit there in their hermetically-sealed coffins, not just disengaged from the train company, but disengaged from travelling and, worse, disengaged from life itself.

Oh dear. And relaaaax… 🙂

Creating a feeling of connectedness to others is one of the elements of a positive user experience. Here I’m talking about creating a user experience in the true sense: stimulating positive mental states in the minds of your users.

A sense of belonging – how we feel about and interact with those around us – is important to feeling safe and to our well-being. We have evolved to attend to and think about being with others who are like us. We seek out others with shared values, ideas, and interests to feel this sense of connectedness.

Sharing and wanting to share is fundamental to the very nature of being human. This is one reason the internet is such a major development in human communications – because we are passionate sharers, seeking others like us.