Better Births and Babies

Demystifying Medical Research.

"Owl," said Rabbit shortly, "you and I have brains. The others have fluff. If there is any thinking to be done in this Forest--and when I say thinking I mean thinking--you and I must do it."
From a conversation between two great intellectuals in The House At Pooh Corner.

It can be a little intimidating at first when you start looking at medical research, but there are a number of good reasons to go straight to the source:

It's not too hard to delve into the medical literature yourself; you just need a few tools:

I can't help you with the first two items; you'll need to find those on your own. But I can help you understand what you're reading and whether it really means anything.


Anatomy of a Research Paper.

A research paper usually starts out with an abstract, a brief summary of the study and its results.

The next part of the paper is usually a history (sometimes rather long) that gives you a synopsis of current thought in the field and the results of previous research. It also explains why the researchers felt the study was necessary.

Then we go on to the methodology section, which details how the study was designed and what the researchers set out to accomplish. This can be a confusing section, since it's where you'll find all the statistics-related terminology, but it's important to examine to see whether this was a well-designed study. That bears on whether the results are valid and reliable, which we'll cover in a bit.

Then there is a results section, often with lots of handy tables of information. Here again, read the text! The tables can't possibly hold all the information. This is the section with the most medical terminology, so have that dictionary handy!

The next section is a discussion or comment section, where the researchers discuss the results and where they feel additional research is needed. There's almost always more research needed!

Finally, there is a bibliography, which lists the other studies the researchers consulted.


Important Factors in Research.

How was the research done?

Types of Research Studies.

The final type gets a section all to itself:

True Experimental Design.

The true experimental design has specific characteristics:

Some types of true experiments are:

If the caregiver knows which group a subject is in, they may unknowingly (or knowingly!) alter the care they give and therefore bias the results. An example is a study done to see whether a treatment for gestational diabetes affected the cesarean rate. The problem was that the doctors delivering the babies knew whether the mother was in the experimental or the control group, and if a doctor believed the treatment was effective, he was less quick to do a cesarean, thereby lowering the cesarean rate for that group.

Study Variables.

The things research studies are interested in are called variables. "Variable" is the term used both for the thing suspected of causing a particular result and for the result itself. Say you want to study how light periods of different lengths affect the germination of a certain type of seed. You make sure all the other factors (moisture, type of seed, growing medium, temperature) are the same. The variables are then the length of the light period (in this case the independent variable, because it is altered on purpose) and the rate of germination (the dependent variable, since it depends on the amount of light).
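If it helps to see that in concrete form, here's a little sketch in Python (my own made-up numbers, not from any real experiment) of how the two variables might be recorded:

    # Hypothetical seed-experiment data. light_hours is the independent
    # variable (set on purpose); germination_rate is the dependent
    # variable (the thing you measure). Everything else is held constant.
    trials = [
        {"light_hours": 8,  "germination_rate": 0.42},
        {"light_hours": 12, "germination_rate": 0.61},
        {"light_hours": 16, "germination_rate": 0.74},
    ]

    for t in trials:
        print(t["light_hours"], "hours of light:",
              round(t["germination_rate"] * 100), "% germinated")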


Choosing Participants.

There are several ways researchers select participants for studies:

Probability and Significance.

If you toss a coin an endless number of times, sheer chance will eventually hand you a run of 30 heads in a row (the odds of any particular run of 30 tosses coming up all heads are about 1 in a billion, since 1/2 multiplied by itself 30 times is roughly 1 in 1,073,741,824). So how do you know that the results of a study weren't arrived at by sheer luck (or lack thereof)? Researchers have ways of determining the probability that results were a product of chance.
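If you'd like to see chance at work for yourself, here's a small Python simulation (a toy of my own, not from any study) that counts the longest run of heads in a million flips of a fair coin:

    import random

    # Flip a fair coin a million times and track the longest run of heads.
    random.seed(42)  # fixed seed so the result repeats from run to run
    longest = run = 0
    for _ in range(1_000_000):
        if random.random() < 0.5:  # heads
            run += 1
            longest = max(longest, run)
        else:                      # tails
            run = 0

    print("Longest run of heads:", longest)
    # Runs of roughly 20 heads in a row turn up routinely by pure chance.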

The most common way (and I admit the only one I really understand) is statistical significance. You'll see this referred to as the "p value" with a number, such as "p=.01". If you convert the decimal to a percentage, you find that p=1%. This means the researchers figured out that, if the treatment actually made no difference, there was only a 1% chance of getting results like these through some sort of fluke. Results are considered statistically significant if the p value is smaller than or equal to .05, or 5%. The way they figure this out is by looking at the raw data and seeing the spread within each of the groups: the more tightly packed the data is within a group, the more likely a difference between the groups is to be significant.
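If you're curious how a p value is actually computed, here's a minimal sketch using Python's SciPy library, with invented numbers (not from any real study):

    from scipy import stats

    # Hypothetical outcomes for two groups, say seeds germinated out of 50:
    control      = [31, 28, 33, 30, 29, 32]
    experimental = [38, 41, 37, 40, 39, 42]

    # A two-sample t test asks: if the treatment made no difference, how
    # likely is a gap this big between the group averages?
    t_stat, p_value = stats.ttest_ind(control, experimental)
    print("p =", round(p_value, 4))  # at or below .05 counts as significant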


Establishing Cause.

Okay, so the research has established a link between a particular factor and a result. Does this mean the factor caused the result? Not necessarily. The correlation coefficient describes the relationship between the two sets of data: whether an increase in the values in one set is associated with a change in the values in the other set.

Now, just because the correlation coefficient shows a strong relationship doesn't mean that one variable caused the other. For instance, there is a strong correlation between your age in years and the overall level of prices, since both rise steadily over time. Did the fact that you had a birthday affect the rate of inflation? No, of course not. That's why separate studies are done: first to establish a correlation, then to discover whether there is a causal relationship.
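Here's a tiny Python illustration (again with invented numbers) of a strong correlation that clearly isn't causation:

    import numpy as np

    age         = np.array([20, 25, 30, 35, 40, 45])     # your age in years
    price_index = np.array([60, 74, 88, 105, 121, 140])  # made-up price level

    # Pearson's correlation coefficient: +1 means a perfect straight-line
    # relationship, 0 means none, -1 means a perfect inverse relationship.
    r = np.corrcoef(age, price_index)[0, 1]
    print("r =", round(r, 3))  # nearly 1, yet birthdays don't cause inflation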

Confound It!

Now that you have all this faith in the protocols researchers use to ensure accurate results, please excuse me while I burst your bubble somewhat. These are the things that drive researchers mad because they can mess up a perfectly good study:

These terms are used to describe factors that can invalidate and alter the results of a study. There may be errors in the way subjects were assigned to the groups, or differences in how the protocol was carried out by various participants (making comparison impossible), or factors may have come up in the course of the study which were not considered when the study was designed.

Say you're studying those germinating seeds again, and someone comes along and turns off a light when it was supposed to be on. That's called a confounding factor--something messed with the variable. How do you tell what the results mean if you don't even know what you started with?

Then there is selection bias--when the group assignments aren't random, or the subjects aren't typical of the population the findings are applied to. For example, one study concluded that gestational diabetes was very dangerous to babies because it found more stillbirths among women with the condition. But there was a selection bias; the women had been selected for the study because they had previously suffered a stillbirth, so they were already at higher risk.

Two other factors that may affect results are:

Last but not least is reporting bias. If you conducted a study that totally contradicted what you and your corporate sponsor expected to find, would you report the findings? Some people don't.


Well, that's my primer. I hope it helps. I'm a total number klutz myself and I figure if I can understand these papers, then anyone can! Normally I'd have a list of links for you to follow to learn more, but my extensive searching found not one site! If you come across one, I'd sure appreciate it if you'd let me know at rugratz3@geocities.com.

