Thursday, January 8, 2009

Keeping Abreast of Pseudoscience Part II

As I promised in the previous post, here is the second of a perfect pair of beautiful, shapely blog posts. Earlier I commented on the red flags of pseudoscience waving in the ad for Easy Curves breast enhancing exercise equipment, but mentioned that the research methods of the supposed study or studies presented in the commercial could make up its own post.

Let's assume that Easy Curves arrived at their "facts and figures" (that just 5 minutes a day increased the average bust line from 36.4 inches to 37.2 inches and increased firmness by 30% in 30 days) through a formal study. What are some points to consider in assessing the validity of the study?

Number of Subjects
What was the sample size in the study? The larger the number of subjects (N), the more likely an observed difference reflects a real effect rather than chance. N also influences whether a result is statistically significant, something else we don't know about the study.
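To make this concrete, here is a quick sketch of how the same 0.8-inch average change (37.2 minus 36.4) looks at different sample sizes. The standard deviation of 2.0 inches is entirely made up, since the ad gives us no such number, and the 1.96 cutoff is the usual large-sample approximation for significance at the .05 level:

```python
import math

# Hypothetical numbers: the ad's reported mean change (37.2 - 36.4 = 0.8 in)
# and an *assumed* standard deviation of the change scores (2.0 in).
mean_change = 0.8
sd_change = 2.0

def t_statistic(n):
    """Paired t statistic for the mean change with n subjects."""
    standard_error = sd_change / math.sqrt(n)
    return mean_change / standard_error

for n in (5, 25, 100):
    t = t_statistic(n)
    verdict = "significant" if t > 1.96 else "not significant"
    # 1.96 is the normal approximation; small N really calls for the
    # t distribution, but the trend with N is the point here.
    print(f"N = {n:3d}: t = {t:.2f} ({verdict} at ~.05)")
```

The exact same observed difference fails to reach significance with a handful of subjects but sails past the cutoff with a hundred, which is why a result quoted without N tells us very little.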

Study design
A repeated measures design looks at the same subjects before and after the intervention (in this case, use of Easy Curves for 30 days). It may or may not have a control group that is also tested before and after the intervention. Compared to a cross-sectional design, that is, finding a group of people who have used Easy Curves for 30 days vs. a group of people who have not and then comparing them, a repeated measures design helps to reduce the extra variance that occurs when comparing two groups made from two different sets of people.

The commercial did not mention any comparison groups. The study may have used a repeated measures design with only one group, but an even stronger design would have also used a control group. This is because there are possible reasons firmness or breast size may have increased over time other than use of the Easy Curves. For example, the tendency of all Americans to gain weight over time might have been the reason for increased size. Or perhaps people already inclined to do exercises to increase their chest muscles would be more likely to volunteer for a study advertised as looking at methods to increase breast size and firmness. In that case, the additional exercises these people did may have been the cause, and not the Easy Curves.
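A control group lets you subtract out any drift that both groups share. Here is a minimal sketch with invented measurements where both groups "grow" by the same amount (say, from general weight gain), so once the control group's change is subtracted, the device's estimated effect comes out near zero:

```python
from statistics import mean

# Hypothetical bust-line measurements (inches), (before, after) 30 days.
# Both groups drift upward equally; a single-group before/after
# comparison would wrongly credit the whole change to the device.
easy_curves = [(36.0, 36.9), (36.5, 37.3), (37.0, 37.8)]
control     = [(36.2, 37.0), (36.4, 37.2), (36.8, 37.7)]

def mean_change(group):
    """Average before-to-after change within one group."""
    return mean(after - before for before, after in group)

treated_change = mean_change(easy_curves)   # change with the device
control_change = mean_change(control)       # change without it
effect = treated_change - control_change    # what the device actually added
print(f"treatment change: {treated_change:.2f} in")
print(f"control change:   {control_change:.2f} in")
print(f"estimated effect: {effect:.2f} in")
```

With these made-up numbers, both groups gain about 0.83 inches, so the estimated effect of the device is roughly zero even though every Easy Curves user "improved."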

If two groups were compared, one of people told to use Easy Curves and the other told not to, were they assigned to these two groups randomly? If people were not randomized, the more athletic, healthy people who eat well may choose to be in the Easy Curves group, while the people who don't like to exercise may choose to be in the control group.
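Random assignment is simple to do in practice; this sketch (with hypothetical subject labels) just shuffles the volunteer list and splits it in half, so nobody can self-select into the arm they prefer:

```python
import random

# Hypothetical: 20 volunteers, randomly split into two arms of 10.
random.seed(42)  # fixed seed so this sketch is reproducible
volunteers = [f"subject_{i:02d}" for i in range(20)]
random.shuffle(volunteers)          # random order, independent of fitness habits
easy_curves_group = volunteers[:10]
control_group = volunteers[10:]
print(len(easy_curves_group), len(control_group))
```

Because assignment depends only on the shuffle, traits like athleticism end up spread across both groups on average instead of piling up in one.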

Who measured the firmness and size? If it was the same person for all participants, both before and after, did their measuring techniques have test-retest reliability? That is, can the testing method reliably produce the same result multiple times when used by the same person? (Um, who gets to rate the firmness?!)
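One common way to quantify this kind of reliability (I have no idea whether any Easy Curves study did so) is to correlate two measurement sessions of the same subjects by the same rater. A sketch with invented data, using a hand-rolled Pearson correlation:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical: the same rater measures the same 5 subjects twice (inches).
session_1 = [36.0, 36.5, 37.0, 37.5, 38.0]
session_2 = [36.1, 36.4, 37.1, 37.4, 38.1]
print(f"test-retest r = {pearson_r(session_1, session_2):.3f}")
```

A correlation near 1 means the method gives nearly the same answer both times; the same calculation between two different raters' measurements speaks to inter-rater reliability instead.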

If multiple testers were used to measure firmness and size, was there inter-rater reliability? That is, the ability of a test to be used by different people and produce the same result. This can of course be influenced by how well trained the raters are, but some tests will inherently have greater inter-rater reliability than others.

Was the person who did the measurements of firmness and size blinded? If the rater is blinded, he or she does not know if the person they are measuring is receiving the intervention or not. Sometimes knowing what group the subject is in can influence the rater.

These are some basic points to consider when confronted with the results of a scientific study; by no means is "the study shows" or "statistical significance" the end of the story. In the case of Easy Curves, we just don't have enough information to judge the validity of the studies that may lend support to the product. If only the information we were given were a little more "filled out."

Please comment with any corrections or additions to this post. This is based solely on the college applied stats courses I took over 10 years ago. :)
