Two new studies analyze the capabilities and limitations of teaching a neural network to recognize facial acne.
How close are we, in 2022, to the point where deep learning can tell whether a patient has acne, score acne severity where applicable, or support self-monitoring? According to two recent studies, AI shows promise across a variety of aspects of acne diagnosis and care, but its practical utility varies.1,2 Both studies build on previous research while attempting to address shortcomings in earlier study designs. One study, by Quattrini et al, examined new algorithms and much larger training databases to teach a model to recognize whether a patient has acne.1 Another, by Wen et al, used an object detection model built on convolutional neural networks to create an applet for self-monitoring.2
“The limitations of all the previous work are that of the scarcity in size of the dataset and the ability to ensure that (with so little data available) the network is really able to learn the task assigned to it,” wrote Quattrini, Boër, Leidi and Paydar.1 They turned to publicly available, rather than proprietary, datasets to train their model on tens of thousands of re-annotated images, using semantic segmentation to separate areas of interest from backgrounds and from non-acne regions such as hair, eyes, lips and facial hair.
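The masking step that semantic segmentation enables can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' pipeline: the class labels (0 = background, 1 = skin, 2 = hair, and so on) and the function name are assumptions for the example, and a real system would get the per-pixel label map from a trained segmentation network.

```python
import numpy as np

def mask_non_skin(image, seg_map, skin_label=1):
    """Zero out every pixel that the segmentation map does not label as skin.

    image:   H x W x 3 uint8 array
    seg_map: H x W integer array of per-pixel class labels
             (e.g. 0=background, 1=skin, 2=hair) -- labels are illustrative
    """
    skin = (seg_map == skin_label)      # boolean H x W mask
    return image * skin[..., None]      # broadcast the mask over RGB channels

# Tiny 2x2 example: one skin pixel, one hair pixel, two background pixels.
img = np.full((2, 2, 3), 200, dtype=np.uint8)
seg = np.array([[1, 0],
                [2, 0]])
out = mask_non_skin(img, seg)           # only the top-left pixel survives
```

Downstream, the acne classifier then only ever sees skin pixels, which is one way to keep it from learning spurious cues from hair or background.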
Interpretability has been another limiting factor in previous deep learning models. Wen et al turned to object detection to attempt to improve interpretability, or how easily humans can understand why the model arrived at the conclusions it did. “Object detection (also known as object localization) is the most straightforward and perhaps the most convincing method for acne analysis on facial images,” the authors wrote. “Once the acne lesions are located in the face, numerical scoring scheme could be applied to make assessments of acne severity. Object detection has achieved great success in many application areas, for instance in autonomous driving, but its potential in facial acne analysis seems to have not been fully exploited.”
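The scoring step the Wen et al quote describes, once lesions are located, can be as simple as counting detections and binning the count into a grade. The sketch below illustrates that idea only; the grade names and count thresholds are hypothetical, not the scheme used in the study, and the bounding boxes would come from whatever object detector the system uses.

```python
def severity_from_detections(boxes, thresholds=(5, 20, 50)):
    """Map detected lesion bounding boxes to a coarse severity grade.

    boxes:      list of (x1, y1, x2, y2) lesion detections from any detector
    thresholds: lesion counts separating the grades -- illustrative cutoffs,
                not the scoring scheme from the study
    """
    n = len(boxes)
    if n == 0:
        return "clear"
    grades = ["mild", "moderate", "severe", "very severe"]
    for grade, cutoff in zip(grades, thresholds):
        if n < cutoff:
            return grade
    return grades[-1]   # count exceeded every cutoff

# Seven detected lesions fall below the 20-lesion cutoff -> "moderate".
grade = severity_from_detections([(0, 0, 4, 4)] * 7)
```

Because the grade is derived from visible, countable boxes rather than an opaque score, a user can see exactly which lesions drove the assessment, which is the interpretability advantage the authors point to.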
Both studies’ authors acknowledged that even with state-of-the-art methods, the available datasets presented challenges. In some, only about 12% of images showed people with acne; in others, image quality was too low to use or the subject was too far from the camera. Despite these issues, both studies achieved levels of accuracy that their authors felt demonstrated potential.
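In practice, curating such a dataset means filtering out the unusable records before training. The sketch below shows one way that filtering might look; every field name and cutoff here is hypothetical, chosen only to mirror the issues the authors describe (acne-negative images, low resolution, distant subjects).

```python
def usable_acne_images(records, min_resolution=512, max_distance_m=1.5):
    """Keep only acne-positive records that pass basic quality checks.

    records: dicts with 'has_acne', 'width', 'height' and
    'subject_distance_m' fields -- all field names and cutoffs are
    hypothetical, for illustration only.
    """
    return [
        r for r in records
        if r["has_acne"]
        and min(r["width"], r["height"]) >= min_resolution
        and r["subject_distance_m"] <= max_distance_m
    ]

dataset = [
    {"has_acne": True,  "width": 1024, "height": 768, "subject_distance_m": 0.6},
    {"has_acne": False, "width": 1024, "height": 768, "subject_distance_m": 0.6},
    {"has_acne": True,  "width": 320,  "height": 240, "subject_distance_m": 0.6},
    {"has_acne": True,  "width": 1024, "height": 768, "subject_distance_m": 3.0},
]
kept = usable_acne_images(dataset)   # only the first record passes all checks
```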
As for where AI most needs improvement, the authors agreed that it cannot currently match the skill set of human dermatologists, given the narrow focus these types of training models make possible. “There is still a long and complex way to go before we have an artificial intelligence-based instrument capable of analyzing dermatological pathologies at the same level as an experienced dermatologist,” Quattrini et al wrote. “Although this study shows that, with good data availability, a neural network can learn to recognize a specific pathology, it must be considered that many others would have to be integrated to have a complete assessment of the patient’s skin health. Last but not least, having the user take a selfie to be analyzed can introduce additional problems related to image quality (over- and under-exposure, reflections, …), all of which are avoided in a dermatological visit.”1

Wen et al point out that AI would need to be better able to distinguish among dermatological conditions. “Confusion with other facial skin conditions, like rosacea and eczema, and skin pigmentation is another issue that is not avoidable for developing better facial acne lesion detectors, or for developing other deep learning analysis systems for facial skin conditions,” they wrote.2