A new study by Senthil Selvaraj and two colleagues suggests that newspapers do not cover the best available studies. In medical research, the main criterion of a good study is
whether participants were randomly assigned to receive either the
treatment or some control procedure such as a placebo. In medical
jargon, this is called an RCT, which stands for randomized controlled trial. The major alternative is an
observational study, in which the participants are contrasted
with a comparison group that may differ from them in uncontrolled
ways (a cross-sectional study), or are compared to themselves
at an earlier time (a longitudinal study). Some observational
studies are merely descriptive and lack a comparison group.
The authors selected the first 15
articles that dealt with medical research using human subjects
published after a predetermined date in each of the five largest circulation newspapers in the
US. Referring back to the original research reports, they classified
each study on several dimensions, the most important being whether it
was an RCT or an observational study. For comparison, they selected
the first 15 studies appearing in each of the five medical journals
with the highest impact ratings. These impact ratings reflect how
often studies appearing in these journals are cited by other
researchers.
The main finding was that 75% of the
newspaper articles were about observational studies and only 17% were
about RCT studies. By contrast, 47% of the journal articles were observational studies and 35% were RCT studies. A more precise
rating of study quality using criteria developed by the US Preventive
Services Task Force confirmed that the journal studies were of higher
quality than the studies covered by the newspapers.
They also found that the observational
studies that appeared in the journals were superior to the
observational studies covered by the newspapers. For example, they
had larger sample sizes and were more likely to be longitudinal
rather than cross-sectional.
In one sense, these results are not a
surprise. We could hardly expect newspaper reporters to be as good judges of study quality as the editors of prestigious medical journals. The authors, like many before them, call for more scientific literacy training for newspaper reporters, but it's hard
to be optimistic that this will happen.
What criteria do the reporters use in
selecting studies to write about? I was struck by the fact that
observational studies resemble anecdotes more than RCT studies do.
In addition, the newspapers chose observational studies with smaller
sample sizes. These results could be driven by the base rate fallacy: the tendency of the average person to find anecdotes more convincing than statistical analyses of much larger samples. In
fact, the lead paragraph of these stories is often a description of
some John or Jane Doe who received the treatment and got better. The
results could mean either that reporters fall victim to the base rate
fallacy, or that they think their readers are more interested in
anecdotal evidence.