Interpreting Clinical Studies: Basic Stats

Oct 26 2011 Published by under Evidence Based Medicine, Research issues

Patients demand access to medical research, but often find it an incomprehensible mess. Even after wading through the alphabet soup of abbreviations and clever clinical trial acronyms, what do those numbers mean? Truth be told, many healthcare professionals and medical writers do not fully appreciate those numbers. That is why an article in the current issue of the AMWA Journal provides such a brilliant service:

 


Redfern and Thompson: The risks and hazards of interpreting and reporting health study measures: A simple, practical overview

The authors use some simple examples to define and illustrate the calculation and meaning (see table) of Absolute Risk, Absolute Risk Reduction, Relative Risk, Relative Risk Reduction, Odds Ratio, Hazard Ratio, and Number Needed to Treat. The article is behind a paywall, but I will summarize some of the information here, using my own dataset.

I studied the risk of a kidney defect in mothers with or without an environmental stressor. The control mothers (no stressor) delivered 397 offspring, 48 of which were abnormal. The stressed mothers delivered 316 offspring, 53 of which showed abnormalities.

The Absolute Risk of kidney abnormalities in the control group was 48/397, or 12%. The Absolute Risk in the stressed group was 53/316, or 17%. The Absolute Risk Reduction is the difference in Absolute Risk between the two groups; in this case, because the stressor raised the risk, it is a 5% absolute increase. Note that this is merely the difference between the two risks, not a ratio.
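For readers who like to check the arithmetic, here is a minimal sketch in plain Python using the counts above (the function and variable names are just illustrative):

```python
# Absolute Risk (AR) in each group, using the kidney-defect counts above.

def absolute_risk(events, total):
    """Proportion of subjects who show the outcome."""
    return events / total

ar_control = absolute_risk(48, 397)    # ~0.121, i.e. 12%
ar_stressed = absolute_risk(53, 316)   # ~0.168, i.e. 17%

# The absolute difference is simply one risk minus the other;
# here it is a ~5 percentage point increase with the stressor.
ar_difference = ar_stressed - ar_control

print(f"Control: {ar_control:.1%}, Stressed: {ar_stressed:.1%}, "
      f"Difference: {ar_difference:.1%}")
```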

Relative Risk is the probability of an outcome in a "treated" group expressed in relation to the probability of the same outcome in the control group. In my dataset, the risk with stress was 53/316 and the risk in controls was 48/397, so the Relative Risk with the stressor was 1.387. The Relative Risk Reduction (for a treatment that improves outcomes) would be 1 − Relative Risk; in this case there is no reduction, because the stress increases risk.
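The same counts give the Relative Risk directly; a quick sketch (again in plain Python, names illustrative):

```python
# Relative Risk (RR): risk in the exposed ("treated") group divided by
# risk in the control group, using the dataset counts from above.

def relative_risk(events_exposed, n_exposed, events_control, n_control):
    return (events_exposed / n_exposed) / (events_control / n_control)

rr = relative_risk(53, 316, 48, 397)   # ~1.387: risk rises with stress

# Relative Risk Reduction only makes sense for a beneficial treatment;
# here it comes out negative, i.e. a relative risk *increase*.
rrr = 1 - rr
```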

As these authors point out, Relative Risk and Relative Risk Reduction are proportional measures, so the magnitude of the Absolute Risk must be kept in mind as well:

A physician may be swayed to initiate a new therapy on the basis of clinical trial results that showed a 50% reduction in outcome compared with standard therapy but may be less impressed if an absolute risk of 2 in 1,000 decreased to 1 in 1,000, even though this also represents a 50% reduction in risk. In general terms, the efficacy of a treatment (in relation to control or another treatment) can be adequately assessed by relative risk reduction, but the absolute risk and the absolute risk difference are needed to provide the context in order to more completely appreciate the effect of a treatment on the population of interest.

The Odds Ratio can be calculated as well, comparing the odds of an event in an exposed group with the odds in a control group, so that the ratio is 1 when the odds are identical. The odds of a stressed offspring showing a kidney abnormality would be 53 (number abnormal)/263 (number normal), or 0.20. In my control group, the odds would be 48/349, or 0.14. The Odds Ratio would be the odds for the stressed group divided by the odds for the control group, or 1.465. The Odds Ratio approximates the Relative Risk when the outcome of interest occurs infrequently; when the event is fairly common (>10%), however, the Odds Ratio exaggerates the apparent effect of a variable. In general, Relative Risk is easier to comprehend, but some study designs (such as case-control) do not allow Relative Risk to be calculated.
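The odds calculation differs from the risk calculation only in the denominator (those *without* the outcome rather than the whole group); a sketch with the same counts:

```python
# Odds Ratio (OR): odds of the outcome in the exposed group divided by
# odds in the control group. Note the denominator is the count of
# *unaffected* offspring, not the group total.

def odds_ratio(events_exposed, n_exposed, events_control, n_control):
    odds_exposed = events_exposed / (n_exposed - events_exposed)   # 53/263
    odds_control = events_control / (n_control - events_control)   # 48/349
    return odds_exposed / odds_control

or_value = odds_ratio(53, 316, 48, 397)   # ~1.465, vs. an RR of ~1.387
```

With a ~15% outcome rate overall, the gap between 1.465 and 1.387 illustrates exactly the magnification the article warns about for common events.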

Another useful calculation for clinical material is the Number Needed to Treat, an indirect estimate of risk-benefit. It is calculated as the reciprocal of the Absolute Risk Reduction. For example:

in the Heart Protection Study, a randomized controlled trial with patients at high cardiovascular risk, the absolute risk of all-cause mortality over 5 years was 12.93% (1,328 deaths among 10,269 patients) in the simvastatin group and 14.68% (1,507 deaths among 10,267 patients) in the placebo group. The absolute risk reduction resulting from exposure to simvastatin is 1.75% (ie, 14.68% – 12.93%). Stated another way, simvastatin reduced the absolute risk of dying by 1.75% (0.0175) over 5 years. The number needed to treat is therefore 57 (ie, 1 ÷ 0.0175). This means that 57 people would need to be treated with simvastatin over a 5-year period to prevent the death of 1 person.
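The quoted calculation fits in a few lines (risks entered as fractions, with the figures from the quote above):

```python
# Number Needed to Treat (NNT): the reciprocal of the absolute risk
# reduction. Risks are the Heart Protection Study figures quoted above.

def number_needed_to_treat(risk_control, risk_treated):
    return 1 / (risk_control - risk_treated)

nnt = number_needed_to_treat(0.1468, 0.1293)   # 1 / 0.0175, ~57.1

# i.e. roughly 57 people treated for 5 years to prevent one death
```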

This provides a handy gauge of the risk-benefit-cost of a therapy or intervention, but the value is inextricably linked to the specifics of the given study. In the example above, the result applies only to 5 years of simvastatin treatment; nothing can be assumed about other drugs or other durations of therapy.

The article includes a nice discussion of Hazard Ratios and Kaplan-Meier curves as well. If you frequently need to discuss the medical literature, particularly clinical trials, this piece provides excellent background on the common statistics.

3 responses so far

  • EEGiorgi says:

    Thanks for this! One thing I find people often miss is the meaning of p-values. A p-value answers the following question: if the data were grouped completely at random (instead of as, say, cases and controls, as when measuring an odds ratio), what is the chance I would see a result at least as extreme as this one? If the answer is 50% of the time, then clearly my result is not significant. But if such a result would turn up only 5% of the time or less, that's an indication that my result is not so random and whatever I have measured is indeed significant.

    The other part people often miss in statistics is multiple testing. This is understandable in a way, because it costs money and effort to gather the data, so once you have it you want to find something with it. The problem is that the more tests you perform, the greater the chance of finding something just by chance.

    It's much better to spend time on the design so that you come up with a few well-calibrated statistical tests, rather than run a battery of tests and risk getting a marginal p-value that, in the end, after correcting for multiple testing, doesn't mean much.

  • Gary Beck says:

    This is a great post. It presents the information in very basic, clear terms. The examples are also nicely stated so it is easy to see how the calculations are tabulated.

    • Pascale says:

      Wow. That means a lot coming from someone with a degree in statistics.
      However, my clarity is derived from the clarity of the original publication, although I have used my own dataset for the calculations presented here. I wish the article weren't behind a paywall, or that there were a way to get it without joining AMWA (although I have found access to the journal valuable enough to justify ongoing membership). It starts with a great story about lay publications misinterpreting some of these statistics.
