UMMS Affiliation
Department of Radiology
Publication Date
2020-08-06
Document Type
Article
Disciplines
Artificial Intelligence and Robotics | Biomedical Engineering and Bioengineering | Health Information Technology | Radiology
Abstract
Smartphone wound image analysis has recently emerged as a viable way to assess healing progress and provide actionable feedback to patients and caregivers between hospital appointments. Segmentation is a key image analysis step, after which attributes of the wound segment (e.g., wound area and tissue composition) can be analyzed. The Associative Hierarchical Random Field (AHRF) formulates image segmentation as a graph optimization problem: handcrafted features are extracted and then classified using machine learning classifiers. More recently, deep learning approaches have emerged and demonstrated superior performance for a wide range of image analysis tasks. FCN, U-Net and DeepLabV3 are Convolutional Neural Networks used for semantic segmentation. While each of these methods has shown promising results in separate experiments, no prior work has comprehensively and systematically compared the approaches on the same large wound image dataset, or more generally compared deep learning vs non-deep learning wound image segmentation approaches. In this paper, we compare the segmentation performance of AHRF and CNN approaches (FCN, U-Net, DeepLabV3) using various metrics including segmentation accuracy (Dice score), inference time, amount of training data required, and performance on diverse wound sizes and tissue types. Improvements possible using various image pre- and post-processing techniques are also explored. As access to adequate medical images/data is a common constraint, we explore the sensitivity of the approaches to the size of the wound dataset. We found that for small datasets (<300 images), AHRF is more accurate than U-Net but not as accurate as FCN and DeepLabV3. AHRF is also over 1000x slower. For larger datasets (>300 images), AHRF saturates quickly, and all CNN approaches (FCN, U-Net and DeepLabV3) are significantly more accurate than AHRF.
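For reference, the segmentation-accuracy metric mentioned above is the Dice score, 2·|P ∩ T| / (|P| + |T|) for a predicted mask P and ground-truth mask T. The sketch below (not the authors' code; a minimal illustration assuming binary NumPy masks of equal shape) shows how such a score is typically computed:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|P intersect T| / (|P| + |T|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: the prediction covers 2 of the 3 ground-truth wound pixels.
pred = np.zeros((4, 4), dtype=np.uint8)
truth = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, 1:3] = 1   # 4 predicted wound pixels
truth[1:3, 1:2] = 1  # 2 ground-truth wound pixels, both inside the prediction
truth[3, 3] = 1      # 1 ground-truth pixel missed by the prediction
print(f"Dice: {dice_score(pred, truth):.3f}")  # 2*2 / (4+3) ~= 0.571
```

A score of 1.0 indicates a perfect overlap between the predicted and ground-truth wound regions; 0.0 indicates no overlap.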
Keywords
Wound image analysis, semantic segmentation, chronic wounds, U-Net, FCN, DeepLabV3, Associative Hierarchical Random Fields, Convolutional Neural Network, Contrast Limited Adaptive Histogram Equalization
Rights and Permissions
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.
DOI of Published Version
10.1109/access.2020.3014175
Source
Wagh A, Jain S, Mukherjee A, Agu E, Pedersen P, Strong D, Tulu B, Lindsay C, Liu Z. Semantic Segmentation of Smartphone Wound Images: Comparative Analysis of AHRF and CNN-Based Approaches. IEEE Access. 2020;8:181590-181604. doi: 10.1109/access.2020.3014175. Epub 2020 Aug 6. PMID: 33251080; PMCID: PMC7695230.
Journal/Book/Conference Title
IEEE access : practical innovations, open solutions
PubMed ID
33251080
Repository Citation
Wagh A, Jain S, Mukherjee A, Agu E, Pedersen P, Strong D, Tulu B, Lindsay C, Liu Z. (2020). Semantic Segmentation of Smartphone Wound Images: Comparative Analysis of AHRF and CNN-Based Approaches. Radiology Publications. https://doi.org/10.1109/access.2020.3014175. Retrieved from https://escholarship.umassmed.edu/radiology_pubs/574