Is your research valid?

Permit me to use an automotive example to describe the concept of validity. Imagine that you purchased a new car from a dealership four years ago. This morning you dropped it back to them for its annual service. Midway through the day you get a call from them.

“We’ve completed the service and your car is ready to collect. Incidentally, we notice that your car is due to undergo its National Car Test next month. When we were checking your tyre pressure, we noticed your two front tyres did not have enough tread on them. They will need replacing if you are to pass the test. By the way, we can do that for you, for €185. Oh, and your windscreen wiper is not cleaning your rear window fully. This might cause you to fail the test as well. If you are interested, we can also fix that for just €50. And one other thing, we recommend you change your brake discs after 100,000 km. Your car already has 105,000 km on the clock! It’s just advisory, but if you want to be on the safe side, we can replace them for €550 …”.

At this stage, you might rightly be asking yourself, are these ‘real’ issues, or is the dealer trying it on? If I had left it to another service centre, would they be saying the same? How do I know they are telling the truth? Do I really need to spend another €785 on top of the €99 I am already paying for the service? What are the ‘facts’ and what is ‘interpretation’ or ‘opinion’? In other words, is what they are saying valid?

Validity is the degree to which a study – whether it is a health check on your car, or data collected from a questionnaire or interview – measures what it was meant to measure. For example, if I want my car to run as well as it did when I bought it from the dealer, I will want it regularly serviced and maintained. At the beginning of each service, I will want the dealer to ‘measure’ key parameters against which the car should perform, and then ensure that the car continues to perform within them. If this is the case, I will probably consider these points to be valid. In other words, I will treat each ‘measure’ as appropriate and each point as a ‘fact’. I will also then be able to say to myself that the measures are valid, and that the outcomes of those measures are valid. Therefore, I must get all three things – tyres, windscreen wipers and brake discs – fixed, or replaced, on my car.

If, on the other hand, I simply want my car to be safe, and to pass the National Car Test, then I might question the validity of what is said. For example, is it appropriate that the dealer measure tyre tread depth at all? Is tyre tread depth measured in the National Car Test? What exactly does ‘wiper is not cleaning the rear window fully’ mean? Is the brake disc measure (100,000 km) valid? Who is the ‘we’ in ‘we recommend’? With this mindset, I take a more questioning stance. In other words, I want to check the validity of everything that is said. This includes, for example, what is being measured, how it is being measured, and who is doing the measuring.

To check for validity, I might double-check the legal requirements for tyre tread depth on the National Car Test website. I might also request a second opinion from a mechanic on the rear wiper. For example, is it cleaning 95%, 90% or 50% of the rear window? Alternatively, I might consider asking who is making the claim about the brake disc requirements.

I might form the view that the first two items (replacement tyres and windscreen wipers) are valid, as I know my vehicle will fail the National Car Test if I do not replace them. However, my brake discs are another matter. The claim that they need replacing at 100,000 km might be valid using the dealer’s checklist. The car manufacturer, however, might recommend changing them at 110,000 km, the Automobile Association might recommend replacement at 130,000 km, and I might form the opinion that the brakes are working fine (personally valid). Given the doubts about the validity of the ‘replace at 100,000 km’ claim, I may put off having my brake discs replaced for now.

Or I may ascertain, by checking the National Car Test requirements, that the dealer is trying to make a quick buck and that none of these points are valid. In this case, I will not get any parts replaced, or work done.

In research, we consider data to be valid when two criteria are met. First, it measures what it was intended to measure. Second, it uses an appropriate method of measurement.

Designing research that is valid requires you to continuously ask:

  • Is my investigation providing appropriate answers to my research questions (or hypotheses)?
  • If it is, am I using suitable methods to obtain those answers?
  • Am I measuring the right things?
  • Am I measuring these things in the right way?

What, then, is the key to achieving validity?

Achieving validity with quantitative methods involves being able to establish, as best you can, the facts. Using statistical methods, you may be able to, for example, calculate variances between different sites and ascertain whether the ‘answers’ are the same or different. For instance, if tests at two different sites yield the same results, the data is more likely to be valid. The extent to which you can generalise from your sample to the population can be validated, in part, by providing an appropriate figure for statistical significance alongside a core statistic such as a total or an average score.
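The site-comparison idea above can be sketched in a few lines of code. This is a minimal illustration, not a full analysis: the satisfaction scores for the two sites are invented for the example, and Welch’s t statistic is used here as one common way of comparing two sample means (a full study would also report a p-value and check the test’s assumptions).

```python
import statistics

# Hypothetical satisfaction scores (1-10) collected at two sites.
site_a = [7, 8, 6, 7, 9, 8, 7, 8]
site_b = [6, 7, 7, 8, 6, 7, 8, 7]

def welch_t(x, y):
    """Welch's t statistic for two independent samples:
    (mean difference) / (standard error of that difference)."""
    mean_x, mean_y = statistics.mean(x), statistics.mean(y)
    var_x, var_y = statistics.variance(x), statistics.variance(y)
    standard_error = (var_x / len(x) + var_y / len(y)) ** 0.5
    return (mean_x - mean_y) / standard_error

t = welch_t(site_a, site_b)
print(f"Site A mean: {statistics.mean(site_a):.2f}")
print(f"Site B mean: {statistics.mean(site_b):.2f}")
print(f"Welch's t: {t:.2f}")
```

With these invented numbers the t statistic is small (well below the conventional rough threshold of about 2), so the two sites’ answers would be treated as consistent with each other – which, as the paragraph above notes, lends support to the data’s validity.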

With qualitative methods, validity involves making a rational judgement (not an opinion) about the applicability of your analysis and findings when transferring from your specific research context to another context. There may be different interpretations (other service centres’, another mechanic’s and, more importantly, your own, as in the example above) to be considered. You will need to back up any assertions made by explaining why they are valid (for example, a worn-out brake disc is an engineering certainty).

For both quantitative and qualitative methods, some form of triangulation is advisable. That is, collecting data that provides the same or similar answers from two or more sources increases validity, among other things. Simple techniques widely used in research include getting a second opinion, questioning groups rather than individuals, and reviewing results with a panel of experts.
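The triangulation check above can be expressed as a simple agreement test. This is a sketch under stated assumptions: the three tread-depth readings are invented, and the 10% tolerance is an arbitrary illustrative choice, not a methodological standard.

```python
def sources_agree(measurements, tolerance=0.1):
    """Return True if all measurements fall within `tolerance`
    (as a fraction of their mean) of one another."""
    lowest, highest = min(measurements), max(measurements)
    mean = sum(measurements) / len(measurements)
    return (highest - lowest) <= tolerance * mean

# Hypothetical tread depth (mm) reported by three independent sources:
dealer, second_mechanic, own_gauge = 1.5, 1.6, 1.55
print(sources_agree([dealer, second_mechanic, own_gauge]))
```

If the three sources agree, the measurement is better triangulated and you can place more confidence in it; if one source is far out of line, that is precisely the prompt to question who is measuring, and how.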

In addition, you should always apply a sense check. Ask yourself: does my answer look/sound right? Obviously, the answer will always involve some element of interpretation, but you can rely on your experience and expertise (or that of others) to guide you. Finally, if you find that something is invalid, you should explain why this is the case (that is, clarify your reasoning).

While in most business and humanities research it is rare to be able to ensure 100% validity (as we are often dealing with shifting naturalistic, personal and social phenomena), we should aim to make our research as valid as possible. This is achieved by ensuring that we are measuring the right thing and measuring it the right way.

Thesis Upgrade’s practical research guidance can help you measure the right things and measure them the right way.
