
Myth busters: Thinking scientifically about user research


As a matter of course, psychologists strive to be objective about the subjective. Much like researchers in the natural and formal sciences, we propose hypotheses to investigate behaviours and gather evidence to test our claims – in other words, we follow the scientific method. These methods are increasingly being used to study concepts such as human development, scientific reasoning, and even pseudoscientific thinking and beliefs – but what about UX?

The various fields of psychology take different approaches to research methods. In the realm of neuropsychology, researchers focus on processes of the brain such as memory storage and retrieval. Dunbar and Fugelsang (2007) investigated how memory processes are affected when people are presented with information that challenges their current conceptual beliefs, whether theoretical, methodological, or empirical.

One area where students have great difficulty acquiring new concepts is understanding the causes of the seasons. Many children, and even more importantly many adults, believe that the Earth’s seasons are caused by the distance between the earth and the sun at different times of the year. The two most commonly held explanations students offer are: (1) physical proximity – that the earth is physically closer to the sun at different times of the year, and (2) hemispheric proximity – that one hemisphere is closer to the sun at different times of the year. Many of us believe that the earth travels around the sun in an exaggerated elliptical orbit, and science textbooks frequently display the earth’s orbit this way. However, the earth’s orbit around the sun is basically circular. In fact, the earth is slightly closer to the sun in the northern-hemisphere winter! Even when this is explained to adults, they still believe that something about the distance between the earth and the sun causes the seasons.

Dunbar and Fugelsang asked students to explain their theory of seasonal change – only 25% answered correctly. They then showed the students a video from the NASA website explaining the real reason for seasonal change and asked them to complete a follow-up questionnaire. Only one student (2%) changed their answer from incorrect to correct. The video helped convey simple information about the shape of the earth’s orbit around the sun, but had little effect on students’ ability to integrate new information about how the angle of the sun’s rays affects the seasons. Their study (among many others) suggests that rather than engaging in the reorganisation of knowledge necessary for conceptual change, the students merely modified their old theory.

In the world of UX, our comparable problem is confirmation bias. Much like the students in the study above, those of us on UX teams tend to become prisoners of our own assumptions – we pick out the bits of data that make us feel good because they confirm our prejudices. These prejudices have been fed to us through years of collective study and experience, but in an industry evolving as quickly as ours we need to think critically and scientifically. The ever-growing list of articles, conferences, interviews and videos on UX best practice leads us down the path of confirmation bias again and again.

Far too often these content pieces provide nothing more than a list to follow. “12 rules for the UX designer”, “10 questions for usability tests”, “5 timeless skills to read three books per day” – and one very limited approach to critical thinking. The evidence is anecdotal: most of the advice in these lists is based on subjective interpretations of personal accounts, not on systematic analysis. Some lists do draw heavily on research, but academic research is often highly context-specific.

To get the most from our studies, the process of drawing conclusions must be anchored in the scientific method: make an observation, form a question, create a hypothesis, conduct an experiment, then analyse the data and draw a conclusion. Conclusions should always rest on the validity and reliability of the measurement – a critical consideration when choosing your research methods at the planning stage. Remember, your output is only as good as your input. Garbage in, garbage out.
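By way of illustration (the numbers and the test below are our own sketch, not from the article), the “analyse the data and draw a conclusion” step of a usability study might look like a simple two-proportion z-test comparing task success rates between two design variants:

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pool the samples to estimate the shared proportion under the null
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF (using math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical usability data: 18/25 participants complete the task
# on design A, versus 11/25 on design B
z, p = two_proportion_z_test(18, 25, 11, 25)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Even then, a statistically significant difference is only as trustworthy as the measurement behind it – which is exactly why validity and reliability belong at the planning stage.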

Think critically, keep it lean, and stop reading lists.


Dunbar, K. N., Fugelsang, J. A., & Stein, C. (2007). Do Naive Theories Ever Go Away? Using Brain and Behaviour to Understand Changes in Concepts. In M. Lovett, & P. Shah, Thinking With Data (p. 193). Psychology Press. 

By Matt Jamison

Matt was a UX Researcher with Fathom from April 2018 to October 2019, when he moved on to take up a researcher role at Kindred.
