IT departments and the large IT consultancies often have difficulty staying close to customers. Layers of customer representatives, project managers, and business analysts try to translate requirements between customers – the people who use systems – and developers. Marketing teams, sales teams and account managers may further prevent developers getting direct access to customers.

This lack of direct access to detailed customer data is the number one reason why large IT projects fail. It’s a failure in the requirements phase. Successful systems require User Experience specialists to be working with detailed customer data from observations and interviews. With such data they can see first-hand the customers’ tasks to be supported, uncover problems, and identify opportunities for improvement. This data underpins rich customer personas and scenarios.

It’s not possible to build usable systems from wish lists and functional specifications ‘gathered’ by IT consultancies: requirements don’t sit around like nuts in a forest waiting to be gathered. Rather, they are embedded in customers’ knowledge, habits and activities, and communication patterns. They can be found in how people organise their emails, pinned to notice boards, on Post-it notes, and in casual conversations at coffee machines.

User Experience specialists use ethnographic techniques to get at this information. It’s on this rich insight into customers’ tasks that useful and usable systems are based.


After years of working in IT, I still regularly hear companies say that they no longer know all the systems they have or what those systems do. Isolated and fragmented systems are often welded together in unknown ways.

To keep from losing control of systems like this, designers have to respond not with individual features or requirements lists but with a coherent vision. Such a vision should be defined in terms of work practices – people’s roles and tasks – not by pages of textual functional requirements which customers don’t understand and too often don’t read.

A coherent vision sets the scene for the start of requirements elicitation and modelling. It provides a shared understanding of the user groups (and their roles), goals, and assumptions. Sometimes a vision statement is handed down from management, a client, or marketing department. At other times it will emerge from open-ended discussions about new technologies, or as a solution specific to known problems, for example, dissatisfaction with business processes.

It is important to list assumptions because they may have important effects on subsequent analysis and design work. For example, failing to include usability experts on teams will limit attention to and resolution of usability concerns.

A coherent vision is at the core of a coherent system: a system based on the understanding of user roles and their work goals together with the exploitation of new technologies, work redesign, and business change operating in unison.


Are you looking to hire a usability lab in London? Here’s a short list. I’ve used Webcredible’s and City University’s labs. Both have viewing facilities with one-way mirrors and Morae Recorder on the PCs.


Today I listened to an interesting interview with AS Byatt, the Booker Prize-winning author.

She suggested that social networks, formed and nurtured in applications such as Facebook and Twitter, are replacing religion as a way of identifying ourselves. Facebook is being used as a mirror.

In generations past, religions defined people and societies: what was acceptable, what the social norms were, how to behave, and so on. However, with the demise of religion (at least in the west), we’re now looking at reality as it really is. We’re left to work out for ourselves how to say who we are.

While the broadcast media – press, TV, radio – help us define ourselves, increasingly social media such as Facebook, Twitter and blogs let us work out who we are. We ask ourselves “How do I differ from this person?”, “Is this something I would do?”, and so on.

With religion playing less and less of a role in our lives, we are coming to learn that you only exist if you tell people you are there.


Usability professionals need to actively listen to customers. Staying attuned to what they are thinking, feeling, and doing is critical.

But it’s easy to be distracted when you’re with a customer – you might feel they’re not telling you anything new, or you might feel pressure from a client who’s observing.

Three attitudes can help you stay attuned to each customer:

  • Bracketing – put aside preconceived notions about your customer, ideas about usability problems, possible design solutions, and so on.
  • Reflection – paraphrase what the customer has said so that they feel you have really listened. For example, if the customer says, “It really makes me angry when the website keeps logging me out”, you could respond “Being automatically logged out angers you.”
  • Horizontalism – treat each observation or utterance with equal importance, trying not to over-interpret how meaningful or trivial it is. For now, just collect. Analyse later.

Staying attuned, reflecting back to customers what they have said, shows that you have understood them and encourages them to continue.


I’m happy to see the Nielsen Norman Group delivering a set of tutorials in London in April.

Two years ago I attended their Mobile Usability tutorial, which impressed me: it had lots of practical guidelines as well as video footage from user-testing sessions.

Two tutorials look interesting this year:

Early registration (by 29th March) is advised – and prices are lower too.


Benchmarking products or websites is a task that usability professionals are often called upon to do. An expert review against a set of guidelines is a common technique.

One problem with guidelines is that different experts can interpret them in different ways. For example, “Have a clear link to the home page” is a commonly quoted guideline, but what is “clear” to one usability expert may be unclear to another. I might pass a website on this guideline, but you might fail it. Then what?

This inconsistency risks clients complaining that your guidelines are simply subjective.

Showing that your guidelines can be applied consistently requires different experts, working independently, to reach the same verdicts. In statistical terms, you want strong inter-rater reliability. You can measure the agreement between two experts by calculating Cohen’s kappa statistic.
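
The calculation is simple enough to run yourself. Here’s a minimal sketch in Python; the pass/fail verdicts are invented data for two hypothetical experts rating ten pages against the “clear link to the home page” guideline:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    # Observed agreement: proportion of items where both raters concur.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal proportions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    if p_e == 1:  # both raters always give the same single verdict
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical verdicts from two experts on the same ten pages.
expert_1 = ["pass", "pass", "fail", "pass", "fail",
            "pass", "pass", "fail", "pass", "pass"]
expert_2 = ["pass", "fail", "fail", "pass", "fail",
            "pass", "pass", "pass", "pass", "pass"]

# 8/10 raw agreement, but kappa is about 0.47 once chance is removed.
print(f"kappa = {cohens_kappa(expert_1, expert_2):.2f}")
```

A kappa above roughly 0.6 is conventionally read as substantial agreement (Landis and Koch’s benchmarks); much below that, your guidelines probably need rewording.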

If you want strong guidelines, consider hiring a usability expert to measure and improve their reliability.


There are a number of ways to improve (eliminate, reduce, or speed up) the navigation in your applications, web sites, and devices. The most effective is to reduce the number of choices that you give users.

Here’s an example. Over the past week I have been evaluating a web-based application aimed at small business owners who need to manage the email accounts of their staff: adding new staff, removing staff, and so on. Over several sessions I saw users struggling to choose where to start their task on a particular web page.

The problem was that the interface offered two different controls for allocating an individual member of staff more email capacity. Some users also chose a third control (‘Edit user’) as their starting point. For the goal the test users were trying to achieve, only one of these options was right.

Limit the number of controls to the few your users need to achieve their goals. This is the most effective way to overcome the ‘too many starting points’ problem. It dramatically improves people’s ability to stay oriented.

This advice doesn’t just apply to interface controls. “Reducing users’ options” also applies to the number of forms, modes, pages, screens and panels.

So wherever you can:

  • Reduce the number of windows and views
  • Reduce the number of navigation panels – always question the need for more than two navigation areas and one content area
  • Reduce the number of content panels and sections on a page or screen
  • Reduce the scrolling users have to do


Marketing departments are often keen to show videos of managers and directors promoting their organisation. But customers are often reluctant to watch video content. Why?

The bottom line is that marketing messages often have little value to customers – they often contain little useful content, nothing that customers can act on here and now.

Video messages are particularly hard to get customers to play. Customers have to wait for the video to load (urgh!), and they can’t scan a video to extract its key points – they have to watch the whole thing. Compare that with text, which is on-screen and easy to scan for relevant points. (See my guidelines for compelling web copy.)

Follow these guidelines if you do choose to use video for marketing messages:

  • Keep your marketing video short
  • Explicitly state how short the video is (e.g. 1 min 20 secs)
  • The static video-still should be engaging, inviting site visitors to play it
  • Consider using a big, cropped image of the speaker smiling in the video-still
  • State the main message of the video in one short sentence


My previous article described the second step of the Discovery Phase of expert evaluations: a quick pass through all tasks to identify major bloopers.

The third and final step of the Discovery Phase is a detailed analysis of the interface. This is your deep inspection, when you will gather the majority of data for your usability report.

Here’s what you do:

  • Spend 15 minutes reminding yourself of key usability guidelines and models of human-computer interaction
  • Do each user task one by one
  • Note down potential problems as you come across them. For each problem, record (a minimal record structure is sketched after this list):
    1. The problem faced by users, e.g. “Users will not notice the home page promotion on right hand side.”
    2. Where and when the problem happens, e.g. “When users land on the home page.”
    3. The cause of the problem, e.g. “The video in the centre of the page will dominate users’ attention.”
    4. Possible trade-offs considered by developers, e.g. “Putting promotions on the right hand side is common practice.”
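
If you keep your notes in a script or export them for the report, a simple record type keeps these four fields together. Here’s a minimal sketch in Python – the class and field names are my own, not part of any standard template:

```python
from dataclasses import dataclass

@dataclass
class UsabilityProblem:
    """One potential problem noted during the deep inspection."""
    problem: str     # the problem faced by users
    where_when: str  # where and when the problem happens
    cause: str       # the likely cause of the problem
    trade_offs: str  # trade-offs the developers may have considered

# Discovery Phase: just collect. Filtering happens in the Analysis Phase.
findings: list[UsabilityProblem] = []
findings.append(UsabilityProblem(
    problem="Users will not notice the home page promotion on the right hand side.",
    where_when="When users land on the home page.",
    cause="The video in the centre of the page will dominate users' attention.",
    trade_offs="Putting promotions on the right hand side is common practice.",
))
```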

Top tip: Don’t get bogged down in over-analysing whether something is or isn’t a problem – remember that the Discovery Phase is pure, non-judgemental data collection. Just write the problem down; you’ll ditch some of them later, in the Analysis Phase.

Similarly, don’t think about solutions. That comes later too. Just plough through the interface identifying potential problems.