Why exactly do we need user research to inform design? Doesn’t common sense tell us what we need to know?
That’s the kind of provocative question from clients and stakeholders that we always need to be ready to answer.
I’m a former academic and relatively new to the field of user-centred design (UCD). In my previous life the idea that we should conduct research was universally accepted.
It was generally regarded as necessary to understand a given phenomenon and instigate evidence-based change.
That means I’d taken for granted the principle that high-quality research should form the backbone of design projects, with the goal of giving users a better experience.
In practice, that’s not always the case.
The importance of high-quality research
Since joining SPARCK, I’ve spoken to many experienced user research colleagues about this problem.
They’ve told me that research is often rushed or sometimes completely overlooked in many settings and organisations, despite it being a fundamental element of human-centred design.
Some clients and stakeholders simply don’t see its value – and I understand why this perception might have developed.
For example, discovery work prior to a design sprint might involve desk research, the outputs of which align with what the client already knows.
Or it might be that a researcher conducts interviews for a client, with questions that appear simple, yielding findings that, to someone immersed in that world, feel intuitive.
“But I could have told you this three weeks ago,” they might say.
Alternatively, user testing might lead us to consider design features already incorporated in digital products or services from the most forward-thinking organisations in a particular field.
Such instances can reduce the perceived value of both (a) research and (b) experienced researchers in the design process.
It can even get to the point where a client perceives researchers as doing little more than playing a supporting role on the project.
In my view, that side-lining of research is misguided, and there’s plenty of evidence to back me up.
Robust intelligence, effective designs
Not every research study can produce new, exciting and novel revelations.
The very nature of scientific research is to iterate on previously established knowledge, with a focus on testing existing theories by actively trying to disprove them.
Therefore, a great deal of research won’t tell us anything that shocks or astonishes us.
But it will always, if done properly, provide robust information to guide effective designs.
The idea of conducting research to validate a client’s existing assumptions appears to be commonplace. But there is a great deal of evidence highlighting how our intuitive assumptions are often incorrect.
More plainly put, sometimes research findings directly contradict our assumptions, or myths about human behaviour that have entered popular wisdom.
Some examples of myths regarding human behaviour include:
- Learning styles – there is little evidence for each of us having personal learning styles, such as being a visual learner.
- Right brain, left brain – there is overwhelming evidence against ‘right-brained’ people being more creative and ‘left-brained’ people being more analytical.
- People prefer lots of choice – in fact, we easily experience ‘choice overload’ when given too many options.
When we get findings that don’t align with our current expectations, it’s up to the researcher to interpret them in context.
Sometimes we need to change our way of thinking to accommodate these findings, such as by changing our theories.
Other times, we need to limit our confidence in the data we collected, such as by acknowledging we have a biased sample.
The ability to make the correct judgement in such instances is a skill developed over years of research practice.
The value of experience
In line with this, professional researchers have a great deal of experience, and that can make the process appear easy.
It’s like when you watch a skilled artist quickly sketch a face with a few strokes, or an experienced digital product designer whip up a prototype of a user interface in minutes using Figma.
In each case, that apparent ease is underpinned by years of experience and knowledge of advances in the discipline.
In the age of the internet, knowledge of any process, such as different research methods, is easily accessible.
In contrast, as alluded to above, decision making is a skill that, like many forms of expertise, is easily taken for granted.
Some of the early psychological research into expertise examined chess players and found exactly that: the ability to rapidly identify the optimal strategy for a given board position emerged only in the most experienced players.
That, of course, made it look fast and easy. What wasn’t easy was the many years they’d spent practising, playing one match after another, and studying the game.
Personally, I would not want someone fixing my car after watching a YouTube video on the problem, with no previous hands-on experience. I would much prefer a mechanic, thanks!
Defining the value of research
Hopefully by this point it’s clear that both the research process and the researcher are important in getting quality data to underpin the design process.
The next question is whether we can tangibly define the value of research and researcher in a business context.
Ironically, this feels like an outstanding question because, to my knowledge, there is relatively little research directly examining the value of user research!
As a specific example, there appears to be comparatively little publicly available data that compares income generated, or costs saved, through user research.
The reason for this lack of data seems simple: even where such information exists, there is little benefit in sharing it publicly.
Nor is there any strong incentive to perform live experiments to understand the impact of research, such as A/B testing a research-informed design against an ‘uninformed’ design. Why would you deliberately make live a version of a product you suspect will be less likely to succeed?
Instead, at best, organisations might use research to iterate on a pre-existing design and measure whether this increases revenue.
This does not account for the full value of research, however, potentially ignoring factors such as the designer time saved as a result of good discovery or testing.
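To make that revenue comparison concrete, here is a minimal sketch of how a research-informed iteration might be compared against an existing design once both are live. It assumes a simple A/B split with conversion counts; the `two_proportion_z` helper and all the figures are hypothetical, purely for illustration, not a description of any real project.

```python
# A minimal sketch: comparing an existing design (A) against a
# research-informed iteration (B) using conversion counts from an A/B split.
# All numbers below are illustrative, not real data.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate higher than A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))        # one-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical example: 10,000 visitors per variant
p_a, p_b, z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"control {p_a:.1%} vs iteration {p_b:.1%} (z={z:.2f}, one-sided p={p:.3f})")
```

Even a rough comparison like this only captures conversion uplift; as noted above, it says nothing about the design and redesign time that good research saves.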
Some might argue that research can be time consuming, distracting designers from their core work. That can feel especially acute when the research is conducted at the early stages of a project, delaying the date at which designers can even make a start.
Again, there appears to be comparatively little focus on how much time research saves the designer during the process, not to mention the time and cost involved in redesigning an (uninformed) product or service that does not meet users’ needs.
Anecdotally, we believe that although research might seem at odds with agile workflows and iteration because of its initial time costs, it arguably makes up for them later in the process.
The risk of inexperienced researchers
To finish, I’d like to return to the question of the value of an experienced researcher.
As mentioned above, an experienced researcher might make the process seem easy and deliver findings which feel obvious. This is arguably why they’re the best people for the job.
Getting inexperienced people to conduct research is a huge risk to the quality of the research and might seriously undermine the investment.
Specifically, an inexperienced researcher might have the impression that research is simply a matter of asking people what they want, instead of investigating what they need.
This is a problem, given the large body of research from different fields of psychology highlighting how difficult people find it to articulate their wants and needs.
For example, many readers will probably have heard of the distinction between implicit (subconscious) and explicit (conscious) attitudes. A good proportion of human behaviour is driven by the former.
Also, as previously mentioned, decision making is often hindered when there are too many options – choice overload.
These are just a couple of the numerous drives and biases that can muddy research data, which can in turn be misinterpreted by someone with less experience.
An experienced researcher will know the weaknesses and risks of a given research method, the context in which it is applied, and the data yielded.
They will compensate for those weaknesses when presenting their interpretation of the findings, often without the client even realising it’s happening.
This even applies to more objective quantitative data and metrics. Evidence shows how poorly humans can interpret statistics, depending on the context in which they are presented. For example, a drop in risk from 2 in 1,000 to 1 in 1,000 can be framed as a dramatic ‘50% reduction’ or, more modestly, as one fewer case per 1,000 people. A great researcher will often be able to present that complex information in a way that’s more easily understandable.
What can we do about all of this?
Clients and designers
Avoid getting someone who is not a researcher to conduct your research.
It can be tempting to rely on people with a bit of experience, or a keen interest, because it reduces the size of the team. But poor-quality research can be worse than no research insofar as user needs could be misinterpreted, or completely missed.
And be patient. Getting quality data can take time, but it will pay dividends in the end.
Fellow researchers
Do what you do best – conduct research. But also get into the habit of showing your working. Capture data during your projects that demonstrates your worth.
Perhaps log time spent and costs saved, if you can find that data. Expose drafts that demonstrate the effort you put into translating complexity into plain language. And involve people from across the project team in planning so they can understand the depth of thinking that goes into your work.
Beyond your project, why not think about designing and conducting academic studies comparing the impact of projects with and without researchers? It feels to me that there’s a gap in the market – let me know if I’m wrong! It would certainly look good on your CV. And make you a hero among your peers.
Written by Kyle Brown - User Researcher, Birmingham