Not all clinical studies are created equal, an expert says, and even the most robust trials require careful interpretation.
Denver - Even the largest and most methodologically robust clinical studies have their limitations, and therefore must be interpreted carefully, an expert says.
Meta-analyses maximize their statistical power by combining multiple randomized controlled trials (RCTs), says Junko Takeshita, M.D., Ph.D. She is a postdoctoral fellow and instructor in the University of Pennsylvania Perelman School of Medicine’s department of dermatology, Philadelphia.
There are, however, disadvantages to large reviews, including the heterogeneity of their populations and study methods, which makes the results difficult to interpret, she says. Additionally, “Meta-analyses are only as good as the included studies.”
Regarding spironolactone as an acne treatment, she says, a Cochrane review (Brown J, Farquhar C, Lee O, et al. Cochrane Database Syst Rev. 2009;(2):CD000194) uncovered two RCTs but excluded one because it involved only 12 patients taking spironolactone and one on placebo (Goodfellow A, Alaghband-Zadeh J, Carter G, et al. Br J Dermatol. 1984;111(2):209-214). The remaining study, a 29-patient trial that showed no difference versus placebo (Muhlemann MF, Carter GD, Cream JJ, Wise P. Br J Dermatol. 1986;115(2):227-232), constituted insufficient evidence to support spironolactone as an effective acne treatment, Dr. Takeshita says.
RCTs, on the other hand, maximize internal validity, she says. Randomizing patients to one treatment or another (ideally with double blinding) maximizes investigator control and minimizes confounding factors, which often plague observational studies, she explains.
“So when you get the end results of an RCT, you can be pretty confident that they stem from the actual intervention,” Dr. Takeshita says.
RCTs’ constraints, however, include limited external validity, she notes. Because RCTs use specific inclusion and exclusion criteria, physicians can’t assume that their results apply to the entire patient population with a specific illness.
“RCTs are very resource-intensive and time-consuming,” Dr. Takeshita says. “And finally, RCTs are rarely powered to determine safety.”
Most RCTs report side effects only as secondary outcomes, she notes, so they are rarely statistically powered to gauge adverse events accurately. And because RCTs are generally short, “It’s hard to determine the true risk, especially of rare side effects.”
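Her point about rare side effects follows from simple arithmetic: to have a 95 percent chance of observing even one occurrence of an adverse event, a trial needs roughly three times as many patients as the inverse of the event rate (the so-called "rule of three"). A minimal sketch, using a hypothetical 1-in-1,000 event rate and hypothetical trial sizes:

```python
# Probability of observing at least one adverse event in a trial,
# given a true per-patient event rate. Illustrative only: the event
# rate and trial sizes below are hypothetical, not from any study
# discussed in the article.

def prob_at_least_one(event_rate: float, n_patients: int) -> float:
    """Chance that a trial of n_patients sees at least one event."""
    return 1.0 - (1.0 - event_rate) ** n_patients

rare = 0.001  # hypothetical 1-in-1,000 side effect

# A typical few-hundred-patient RCT will usually miss the event entirely...
print(round(prob_at_least_one(rare, 200), 2))   # ~0.18

# ...while the "rule of three" says roughly 3/rate patients are needed
# for a 95% chance of seeing it even once.
print(round(prob_at_least_one(rare, 3000), 2))  # ~0.95
```

This is why, as Dr. Takeshita notes, safety signals often emerge only after a drug reaches a much larger post-marketing population.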
As for efficacy, roughly one-third of highly cited clinical studies are later contradicted or shown to have overstated their results (Ioannidis JP. JAMA. 2005;294(2):218-228).
Looking specifically at whether imiquimod is effective for molluscum contagiosum, Dr. Takeshita says it’s difficult to draw conclusions from the three published RCTs. For starters, a 112-patient study showed that 55 percent of imiquimod-treated patients were clear at the first follow-up visit (Hanna D, Hatami A, Powell J, et al. Pediatr Dermatol. 2006;23(6):574-579). Although this is the largest of the three published RCTs, Dr. Takeshita says, the fact that it compared four different treatments dilutes its statistical power.
Somewhat similarly, a study showing that imiquimod-treated patients were 4.6 times more likely to be clear one month post-treatment than placebo-treated patients actually had a confidence interval of 0.25 to 86.72 (Theos AU, Cummins R, Silverberg NB, Paller AS. Cutis. 2004;74(2):134-138, 141-142). Because that interval spans 1.0, the result is consistent with no treatment effect at all. “That’s not a reliable measure. So I wouldn’t put much stock into this study’s findings,” Dr. Takeshita says.
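The mechanics behind that caution are easy to reproduce: a risk ratio's 95 percent confidence interval is computed on the log scale, and with very few events the standard error balloons, stretching the interval across 1.0. A sketch using hypothetical small-trial counts (not the actual data from any study cited here):

```python
import math

# 95% confidence interval for a risk ratio via the standard log-scale
# method. The patient counts below are hypothetical numbers chosen to
# illustrate how a small trial yields a wide interval.

def risk_ratio_ci(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    # Standard error of log(RR) for two independent proportions
    se = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical: 5 of 12 treated patients clear vs. 1 of 11 on placebo
rr, lo, hi = risk_ratio_ci(5, 12, 1, 11)
print(f"RR = {rr:.1f}, 95% CI {lo:.2f} to {hi:.2f}")
# The point estimate looks impressive, but the interval spans 1.0,
# so no significant difference can be claimed.
```

With only a handful of events per arm, the interval runs from well below 1.0 to far above it, which is exactly the pattern Dr. Takeshita flags as unreliable.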
Finally, a 74-patient study comparing imiquimod (91.8 percent clearance) with cryotherapy (100 percent clearance) at week 16 found no statistically significant difference between the treatments (Al-Mutairi N, Al-Doukhi A, Al-Farag S, Al-Haddad A. Pediatr Dermatol. 2010;27(4):388-394).
Publication bias also complicates the picture, Dr. Takeshita says. The original maker of imiquimod conducted two large RCTs (217 patients and 106 patients, respectively) at the Food and Drug Administration’s behest. Neither study showed any difference between imiquimod and placebo 18 weeks post-treatment, she says, and both went unpublished.
“These results were hidden, and it’s very hard to access these data,” Dr. Takeshita says.
Some sources of unpublished data include trial registries such as www.clinicaltrials.gov, regulatory databases (which provide information on request) and litigation records, all of which require some digging, she says.
Published studies show that 73 percent of patients experience pain with imiquimod treatment, and 76 percent experience erythema.
“So the bottom line is, imiquimod is not effective for molluscum contagiosum, and side effects are common,” Dr. Takeshita says. “You may want to think twice about using it for molluscum.”
Disclosures: Dr. Takeshita reports no relevant financial interests.