Validity, impartiality of study results may boil down to blinding

A small study gives grounds for questioning the objectivity of supposedly unbiased clinical evaluators, its author says. It may be that blinding is the only way to ensure a clear view.

Boston - In a recent study, supposedly unbiased laser experts' evaluations of clinical photos suggested that they did indeed carry preconceived notions about the technology ostensibly under evaluation, the study's author says.

Whether dermatologists are aware of it or not, "We're all biased," says Thomas Rohrer, M.D., clinical associate professor of dermatology, Boston University Medical Center, and Mohs fellowship director, Skin Care Physicians of Chestnut Hill, Mass.

To evaluate the impact of these dynamics on clinical research, Dr. Rohrer and David Horne, M.D., conducted a study (unpublished) that involved giving the same high-quality digital clinical "before" and "after" photographs to four different groups of evaluators.

"The variable was whom we asked to evaluate the pictures," he explains.

Though the expert evaluators didn't know it at the time, Dr. Rohrer tells Dermatology Times, "That's what we were really looking at. They thought we were evaluating the device," a popular nonablative skin-tightening product whose results in eight patients they were asked to rate.

The first group of two evaluators "knew the device well and were, we felt, advocates for the device," he says.

Researchers told this group which photos were taken before treatment and which were taken after, Dr. Rohrer adds, and they were hardly surprised that these evaluators generally thought the "after" pictures looked much better. In particular, the evaluators said more than half of the patients demonstrated at least 25 percent improvement, and that two patients improved more than 50 percent.

"We sent the same images to another group of four evaluators who we felt were somewhat ambivalent toward the device," he says.

Researchers found that these evaluators weren't quite as impressed by the treatment results as the first group of experts was, Dr. Rohrer adds. The consensus in the ambivalent group was that most patients showed up to 25 percent improvement, though evaluators disagreed somewhat as to the degree of improvement.

Researchers also sent the study photos to a group of four evaluators they considered to be doubters of the treatment's value.

"When they scored the results," he says, "they found that there really wasn't much improvement in most patients."

Though they judged the majority of patients as showing up to 25 percent improvement, these evaluators disagreed sharply about the degree of improvement patients showed, Dr. Rohrer reports. They also judged two patients to be clear non-responders.

Finally, he says, "We sent the exact same photographs to the ambivalent evaluators three or four months later. This time, we didn't tell them which were the 'before' pictures and which were 'after.'"

In this analysis, the evaluators found essentially no difference between the "before" and "after" photos. "If anything," Dr. Rohrer says, "many of the 'after' photos were judged as looking worse (in some instances by more than 50 percent) than the 'before' photos."

Therefore, he says, "Our conclusion from the study was that whom one asks to grade the pictures can make a big difference in one's results. And clearly, labeling the pictures 'before' and 'after' can be very influential."
