Journal of Medical Systems (2024) 48:14
https://doi.org/10.1007/s10916-023-02029-9

ORIGINAL PAPER

Automated Prediction of Photographic Wound Assessment Tool in Chronic Wound Images

Nico Curti (1,2) · Yuri Merli (3,4) · Corrado Zengarini (4) · Michela Starace (3,4) · Luca Rapparini (4) · Emanuela Marcelli (4,5) · Gianluca Carlini (2) · Daniele Buschi (4) · Gastone C. Castellani (4) · Bianca Maria Piraccini (3,4) · Tommaso Bianchi (6) · Enrico Giampieri (4)

Received: 12 August 2023 / Accepted: 22 December 2023
© The Author(s) 2024

Abstract

Many automated approaches have been proposed in the literature to quantify clinically relevant wound features through image processing, aiming to remove human subjectivity and accelerate clinical practice. In this work we present a fully automated image processing pipeline that leverages deep learning and a large wound segmentation dataset to perform wound detection, followed by prediction of the Photographic Wound Assessment Tool (PWAT), automating the clinical judgement of adequate wound healing. Starting from images acquired with smartphone cameras, a series of textural and morphological features is extracted from the wound areas, mimicking the typical clinical considerations for wound assessment. The extracted features can be easily interpreted by the clinician and allow a quantitative estimation of the PWAT score. The features extracted from the regions of interest detected by our pre-trained neural network model correctly predict the PWAT scale values with a Spearman's correlation coefficient of 0.85 on a set of unseen images.
The obtained results agree with the current state of the art and provide a benchmark for future artificial intelligence applications in this research field.

Keywords: PWAT · Image analysis · Wound healing · Computer vision · Clinical decision support system

Nico Curti and Yuri Merli contributed equally to this work.

Contacts: Nico Curti, nico.curti2@unibo.it · Yuri Merli, merliyuri@gmail.com · Corrado Zengarini (corresponding author), corrado.zengarini@studio.unibo.it · Michela Starace, michela.starace2@unibo.it · Luca Rapparini, luca.rapparini2@studio.unibo.it · Emanuela Marcelli, emanuela.marcelli@unibo.it · Gianluca Carlini, gianluca.carlini3@unibo.it · Daniele Buschi, daniele.buschi2@unibo.it · Gastone C. Castellani, gastone.castellani@unibo.it · Tommaso Bianchi, tommaso.b.bianchi@gmail.com · Enrico Giampieri, enrico.giampieri@unibo.it

Affiliations:
1 Department of Physics and Astronomy, University of Bologna, 40127 Bologna, Italy
2 Data Science and Bioinformatics Laboratory, IRCCS Institute of Neurological Sciences of Bologna, 40139 Bologna, Italy
3 Dermatology Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
4 Department of Medical and Surgical Sciences, University of Bologna, 40138 Bologna, Italy
5 eDIMES Lab, Department of Medical and Surgical Sciences, University of Bologna, 40138 Bologna, Italy
6 Native Medica s.r.l., 40138 Bologna, Italy

Published online: 16 January 2024

Introduction

Due to the increase in average population age, more dermatology specialists are involved in wound management [1]. Wound healing is a complex process, and optimal wound assessment is essential for its management; choosing the most appropriate therapeutic approach can reduce healing times and thus alleviate the healthcare system's economic burden [2]. An incorrect wound assessment model can prolong wound healing [3] and decrease patient compliance. The correct classification of acute and chronic ulcers is essential both at diagnosis and during follow-up. A growing number of centers archive clinical images with methodical, instrument-assisted continuous monitoring to ascertain whether the healing process is proceeding correctly, and then to determine prognosis and the correct treatment [4].

The entire clinical evaluation process relies on the experience and subjectivity of the clinicians, introducing a non-negligible inter- and intra-operator variability [5].
The introduction of wound assessment tools aims to reduce these effects by providing a series of standardized criteria for the quantitative description of the wound status and of the response to treatment. One of the most popular in dermatological practice is the Bates-Jensen Wound Assessment Tool (BWAT) [6], which consists of 13 items assessing wound size, depth, edges, undermining, necrotic tissue type, amount of necrotic tissue, granulation and epithelialization tissue, exudate type and amount, surrounding skin color, edema, and induration. The items are represented as Likert scales with values ranging from 1 to 5, with the highest value associated with the unhealthiest attribute of each item. The use of the BWAT requires evaluating the wound online, i.e., during clinical practice, since many of the items can be quantified only by manual operations on the lesion area. For this reason, automated solutions for the quantification of this score are not applicable, and posterior editing or adjustment is impossible. To address these issues, the Photographic Wound Assessment Tool (PWAT) was introduced in 2000 [7]. The PWAT score aims to quantify the wound status starting from photos acquired during clinical practice, involving item scores inferable directly from the picture. The PWAT includes only a subset of the full list of items described by the BWAT, but it has already proved its effectiveness and robustness for clinical applications [6, 7].

Despite the introduction of standardized assessment tools, the intrinsic subjectivity of the clinicians in the grading process continues to play a key role. The Likert format of the scale items poses some constraints on the evaluation, but it forces the quantification of wound features that can be determined only by human intervention. A completely objective estimation of wound status can be achieved only by introducing an agnostic mechanical component guided by the ever-growing artificial intelligence solutions. The application of artificial intelligence models to medical image analysis has already shown remarkable results [8–11], proving its effectiveness in guiding and facilitating clinical practice [12, 13]. For the aforementioned wound assessment tools, automated solutions for their estimation have already been proposed in the literature [8, 14, 15], providing hints about their mathematical formalization but without a detailed analysis of the related features. The current trend in medical image analysis is based on the use of deep learning models for the prediction of clinical outcomes, making it harder to understand the relevant clinical features. Also in the context of PWAT prediction, several approaches have been proposed in the literature, but only based on neural network models [14, 16, 17]. In our previous work [18], we trained a deep learning model to perform semantic segmentation of wound regions of interest (ROIs) from digital images.
Here, we extend the model to automatically predict the PWAT scores from the identified wound areas. The PWAT includes items belonging to both the wound and peri-wound areas, so we adapted our model predictions to obtain both ROIs; we then proposed a novel set of textural and morphological features mimicking the clinician's manual evaluation. According to these principles, all the proposed features are strictly connected to the wound appearance and completely human-interpretable, guaranteeing their possible application during clinical practice. We finally use this set of features to feed a penalized regression model for the prediction of the PWAT scale value, testing the effectiveness and robustness of our model on an independent subset of images. To the best of the authors' knowledge, our work represents the first attempt to automatically predict the PWAT score on smartphone images using a combination of standard and radiomic image features.

Materials and methods

Patient selection

In this work we analyzed the images belonging to the Deepskin dataset [18]. The images were acquired using smartphone cameras during routine dermatological examinations by the Dermatology Unit at IRCCS Sant'Orsola-Malpighi University Hospital of Bologna. The images were retrieved from charts of subjects who gave their voluntary consent to research. The study and its data acquisition protocol were approved by the Local Ethics Committee (protocol n° 4342/2020, approved on 10/12/2020) and carried out in accordance with the Declaration of Helsinki.

We collected 1564 wound images from 474 patients over two years (March 2019 to September 2021) at the center. A smartphone digital camera (dual Sony IMX 286 12 MP sensors with 1.25 µm pixel size, 27 mm equivalent focal length, f/2.2 aperture, laser-assisted AF, DNG raw capture) acquired the raw images under uncontrolled illumination conditions, various backgrounds, and varying image exposures, as in clinical usage. The involved patients belonged to a heterogeneous population, including samples with ulcers at different healing stages and anatomical positions.

In this work, we used a subset of the Deepskin dataset composed of 612 images. This subset includes 324 males (52.9%) and 288 females (47.1%), with an average age of 77 ± 17 and 71 ± 17 years, respectively. The involved population was therefore balanced according to sex and biased towards higher age, as expected in any dermatological wound dataset.

The heterogeneity of the population in terms of wound severity was preserved in the considered subset. The corresponding PWAT distribution ranges from a minimum of 2 to a maximum of 24, with an average of 15 ± 3. Also in this case, the bias towards relatively high PWAT values is considered acceptable in relation to the clinical problem, and is intrinsically due to the necessary presence of a wound in each image.

Clinical scoring of images

Two trained clinicians evaluated the 612 images independently, scoring each image according to the PWAT grading scale.
For a robust estimation of the PWAT score, the quantification of the related sub-items was performed during the image acquisition (online evaluation), i.e., monitoring the actual state of the wound. We chose the PWAT scale since it is a standard reference for wound assessment in clinical practice, and its automation can easily encourage the clinicians' community to use our method.

All the clinicians scored the wounds in the same physical space, with the same source of illumination, and without time limits. Each wound evaluation was reviewed against the photo acquired during clinical practice (offline evaluation), discarding all doubtful cases. During the offline evaluation, the images were displayed on a computer monitor (HP Z27 UHD 4K, 27") at 3840 × 2160 resolution. The same screen color and brightness settings were used for the clinicians' evaluations.

Fig. 1 Schematic representation of the pipeline. (Step 1) The image is acquired by smartphone (Deepskin dataset) during clinical practice. (Step 2) Two expert clinicians performed the manual annotation of the PWAT score associated with each wound, considering the status of the lesion and peri-lesion areas. (Step 3) The neural network model trained on the Deepskin dataset performs the automated segmentation of the wound area. Focusing on the wound and peri-wound areas (obtained by image processing analyses), a set of features for the quantification of textures and morphology of the lesion is extracted. (Step 4) A regression model based on the features extracted from the images is tuned for the automated prediction of the PWAT scores. While Step 1 and Step 2 require human intervention by definition, the second half of the pipeline performs the analysis automatically. We stress that the first two steps are mandatory for training the automated solution but are discarded during real clinical applications.

Image processing pipeline

The proposed image processing pipeline is composed of a series of independent and fully automated steps (ref. Fig. 1; a code sketch of the inference flow follows the list):

Step 1. Image acquisition using a smartphone camera during clinical practice.

Step 2. Manual annotation of the wound status according to the PWAT scores by the two expert clinicians, providing the ground truth required for training the automated model.

Step 3. Automated identification of the wound and peri-wound areas using the neural network model trained on the Deepskin dataset; extraction of human-interpretable features for the quantification of the PWAT items and wound status.

Step 4. Prediction of the PWAT score via a weighted combination of the identified features.
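The following minimal sketch shows how the steps chain together at inference time. It is illustrative only: `segmenter`, `peri_wound_mask`, and `extract_features` are assumed names rather than the actual Deepskin API, and the two helpers are sketched in the following sections.

```python
# Illustrative glue code for Steps 1, 3, and 4 at inference time
# (Step 2, the manual PWAT annotation, is needed only for training).
# All function and object names here are assumptions, not the Deepskin API.
import cv2

def predict_pwat(image_path, segmenter, regressor):
    image = cv2.imread(image_path)                 # Step 1: smartphone photo
    wound = segmenter.predict(image)               # Step 3: binary wound mask
    peri = peri_wound_mask(wound)                  # Step 3: peri-wound mask (see below)
    feats = extract_features(image, wound, peri)   # Step 3: 54-value feature vector
    return regressor.predict([feats])[0]           # Step 4: continuous PWAT score
```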
The first step of processing involves segmenting the wound area from the background. For the automated segmentation of the images we used our previously published convolutional neural network model; the details of the model implementation and its performance on the Deepskin dataset are discussed in our previous work [18]. The efficiency of wound segmentation is crucial for identifying the regions of interest on which to perform the subsequent feature extraction. The segmentation masks generated by our neural network model cover only the wound bed areas, while several PWAT sub-items describe the peri-wound boundaries. To overcome this issue, we extended each wound mask using a combination of morphological operators, extracting a second mask covering only the peri-wound areas (ref. Wound segmentation section).

In the second step, we extracted features from the areas identified by the segmentation model and the peri-wound masks: a set of standard image features based on different color spaces (RGB and HSV), redness measurements based on quantities already proposed in the literature [19, 20], and Haralick's textural features [21] for the quantitative description of wound morphology (ref. Wound features section for details).

In the third step, the extracted set of features was used to feed a penalized regression model for the prediction of the final PWAT scale value.

Calculation

Wound segmentation

The definition of the wound area is the principal limitation of the Deepskin dataset. Since there is no standardized set of criteria for defining the wound area, its reliability is left to clinical needs. In our previous work, we trained a convolutional neural network model to segment regions covering only the wound bed. In contrast, the Peri-ulcer Skin Viability and Edges items of the PWAT estimation describe the peri-wound area, which is excluded from our segmentation mask.

In this work, we implemented a second automated image processing step for the identification of the peri-wound areas, starting from the segmentation masks generated by our model. Using a combination of erosion and dilation operators, keeping the size of the structuring element (kernel) fixed, we extracted the peri-wound mask associated with each image, i.e.

M_peri-lesion = (M_lesion ⊕ k) − (M_lesion ⊖ k)

where ⊕ and ⊖ denote the dilation and erosion of the wound mask M by the kernel k, respectively. We used an elliptical kernel of size 3 × 3. An example of the resulting image processing is shown in Fig. 2.

Fig. 2 Example of segmentation masks used for wound identification. (a) Raw image extracted from the Deepskin dataset. (b) Wound segmentation mask generated by the automated neural network model. (c) Peri-wound segmentation mask obtained by applying morphological operators to the wound mask.
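As a concrete reference, the peri-wound ring defined by the formula above can be computed with standard morphological operators. A minimal sketch with OpenCV follows, assuming a binary uint8 wound mask; the function name is ours, not the paper's.

```python
import cv2
import numpy as np

def peri_wound_mask(wound_mask: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Compute M_peri-lesion = (M ⊕ k) − (M ⊖ k) with a fixed elliptical
    3 × 3 structuring element, as described above. `wound_mask` is a
    binary uint8 mask (0 = background, 255 = wound)."""
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    dilated = cv2.dilate(wound_mask, k)   # M ⊕ k: wound grown outwards
    eroded = cv2.erode(wound_mask, k)     # M ⊖ k: wound shrunk inwards
    return cv2.subtract(dilated, eroded)  # thin ring straddling the wound edge
```

A single pass of a 3 × 3 kernel yields a very thin ring; the paper fixes the kernel size but does not state the number of iterations, so the exact ring width here is an assumption.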
Wound features

The quantification of the items related to the PWAT estimation involves both the wound and peri-wound areas. Since only 1/8 of the PWAT sub-items involve the peri-wound area, we performed the feature extraction independently on both ROIs. In this way, we aimed to maximize the informative power of the features extracted from the wound area, minimizing putative confounders while preserving the information related to the peri-wound area.

Color features

We extracted the average and standard deviation of the RGB channels for each wound and peri-wound segmentation. This set of measures quantifies the appearance of the wound area in terms of redness and color heterogeneity.

We also converted each masked image into the corresponding HSV color space and extracted the average and standard deviation of each channel. The HSV color space is more informative than RGB since it accounts for different light exposures (saturation). In this way, we monitored the various conditions under which the images were acquired.

Both sets of features aim to quantify the necrotic tissue components of the wounds. Necrotic tissue can be modeled as a darker component in the wound/peri-wound area, which alters the average color of the lesion. The Necrotic Tissue Type and the Total Amount of Necrotic Tissue account for 2/8 items in the PWAT estimation.
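A sketch of this color feature extraction, assuming an OpenCV BGR image and a binary mask; it yields the 12 color features per ROI (mean and standard deviation of the three RGB and three HSV channels). The function name is an assumption for illustration.

```python
import cv2
import numpy as np

def color_features(image_bgr: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Mean and standard deviation of each RGB and HSV channel inside the
    mask: 6 channels × 2 statistics = 12 color features per ROI."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    pixels = np.hstack([image_bgr[mask > 0], hsv[mask > 0]])  # shape (n, 6)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])
```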
Redness features

The primary information on the healing stage of a wound can be obtained by monitoring its redness (erythema) compared to the surrounding area. Several redness measurements have been proposed in the literature [22], belonging to different medical fields and applications. In this work, we extracted two measures of redness, validated in our previous work [23] on a different image processing topic.

The first measure was proposed by Park et al. [20] and involves a combination of the RGB channels:

[equation not recoverable from the source: a weighted combination of the R, G, and B channels averaged over the n pixels of the mask]

where R, G, and B are the red, green, and blue channels of the masked image, respectively, and n is the number of pixels in the considered mask. This measure emphasizes the R intensity using a weighted combination of the three RGB channels.

The second measure was proposed by Amparo et al. [19] and involves a combination of the HSV channels:

[equation not recoverable from the source]

where H and S represent the hue and saturation intensities of the masked image, respectively. This measure tends to be more robust against different image light exposures.

Both features were extracted independently on the wound and peri-wound areas. Redness estimations help quantify the Peri-ulcer Skin Viability, Granulation Tissue Type, and Necrotic Tissue Type, which represent 3/8 items involved in the PWAT estimation.

Morphological features

We measured the morphological and textural characteristics of the wound and peri-wound areas by computing the 13 Haralick features [21]. Haralick's features are becoming standard texture descriptors in multiple medical image analyses, especially in the radiomics research field [24–28]. This set of features was evaluated on the grey-level co-occurrence matrix (GLCM) of the grayscale version of the original image, restricted to the areas identified by our segmentation models. We computed the 13 standard Haralick features, including energy, inertia, entropy, inverse difference moment, cluster shade, and cluster prominence. Using textural elements, we aimed to quantify information related to the Granulation Tissue Type and Amount of Granulation Tissue, which are 2/8 items of the total PWAT score.
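The 13 Haralick descriptors can be computed from the GLCM of the masked grayscale image. The paper does not name a library, so the sketch below uses mahotas as one possible implementation, averaging the descriptors over the four GLCM directions.

```python
import mahotas
import numpy as np

def haralick_features(image_gray: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """13 Haralick texture descriptors of the masked grayscale ROI,
    averaged over the four GLCM directions (mahotas' convention).
    Background pixels are zeroed and excluded via ignore_zeros."""
    roi = np.where(mask > 0, image_gray, 0).astype(np.uint8)
    return mahotas.features.haralick(roi, ignore_zeros=True).mean(axis=0)
```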
Regression pipeline

We started the regression analysis by standardizing the distributions of the extracted features. Each feature distribution belongs to a different domain of values, and to combine them we need to rescale all values into a common range. We rescaled the feature distributions using their median values, normalizing according to the 1st and 3rd quartiles, i.e., a robust scaling algorithm that minimizes the dependency on possible outliers. Both medians and quartiles were estimated on the training set and then applied to the test set to avoid cross-contamination.

Starting from the processed features, we used a penalized Lasso regression model [29] to predict the PWAT clinical scores. Lasso regression is a regularized linear regression variant with an additional penalization component in the cost function [30]. In our simulations, we used a penalization coefficient equal to 10⁻². We split the complete set of data into train/test sets using a shuffled tenfold stratified cross-validation; in this way, we can ensure a balance between classes at each subdivision. The model was trained on a subset (90%) of the data, and its predictions were evaluated on the remaining test set (10%) at each fold.
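A sketch of this regression pipeline with scikit-learn, under two stated simplifications: plain shuffled K-fold is used in place of the paper's stratified split, and `X`/`y` stand for the 54-column feature matrix and the clinician PWAT scores.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler

def cv_spearman(X: np.ndarray, y: np.ndarray, seed: int = 0) -> float:
    """Shuffled tenfold cross-validation of a robust-scaled Lasso model,
    returning the mean Spearman rank correlation over the test folds.
    The scaler is fit on the training folds only, avoiding contamination."""
    model = make_pipeline(RobustScaler(), Lasso(alpha=1e-2))  # alpha = 10^-2
    rhos = []
    for train, test in KFold(10, shuffle=True, random_state=seed).split(X):
        model.fit(X[train], y[train])
        rho, _ = spearmanr(y[test], model.predict(X[test]))
        rhos.append(rho)
    return float(np.mean(rhos))

# Repeating cv_spearman over 100 different seeds mirrors the robustness
# check of Fig. 3b; model[-1].coef_ exposes the feature ranking of Fig. 3c.
```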
Fig. 3 Results of the penalized regression model for predicting the PWAT scale values from the extracted features. The correlation between the ground truth and the predicted values is estimated using Spearman's rank correlation coefficient (ref. plot legends). (a) Results of a single cross-validation of the model. The dashed line highlights the axis bisector corresponding to a perfect prediction. The model tends to overestimate low PWAT scale values due to the few samples with this condition. We remark that the predictions are performed on a data set independent of the training set. (b) Results obtained by the same pipeline on 100 different cross-validations. A tenfold cross-validation was applied in each iteration to estimate Spearman's rank correlation coefficients. (c) Top-ranking features involved in the prediction of PWAT scores. The informative power of the features was estimated using the coefficients of the Lasso regression model.

Results

We analyzed a dataset of 612 images using our automated pipeline, producing the complete set of segmentation masks and extracting the related features. We fed a Lasso regression model with the 54 obtained features (12 color features + 2 redness features + 13 Haralick features, for both the wound and peri-wound masks), estimating the correlation between the clinical PWAT values (ground truth) and the predicted ones. We trained the regression model using tenfold cross-validation; the best model found predicts the correct PWAT scale values with a Spearman's rank correlation coefficient of 0.85 (ref. Fig. 3a) and a corresponding p-value close to zero. According to the tenfold cross-validation, the correlation performance was evaluated on the test subset of the data at each fold, combining the results to obtain the score presented in the figure legend. An example of the predictions obtained on the test set is shown in Fig. 4.

We reiterated the same pipeline for 100 different cross-validations to test the robustness of our model, i.e., repeating the regression step 100 times with different train/test subdivisions of the data. We re-trained a Lasso regression using tenfold cross-validation at each iteration, monitoring the model's sensitivity to different training set subdivisions. The resulting distribution of Spearman's rank correlation coefficients is shown in Fig. 3b.

We evaluated the informative power of each feature independently, performing a second set of 100 hold-out (90/10) cross-validations using the proposed pipeline and monitoring the coefficients of the Lasso regression model. The ranked distribution of the average coefficients associated with the corresponding features is shown in Fig. 3c.

Fig. 4 Example of the predictions obtained by the regression model on three test images. For each image, we report the clinician-assigned PWAT score and the one predicted by our model. The wound areas identified by our automated segmentation model are highlighted with green lines.

Discussion

The automated segmentation model, combined with the refinement image processing step proposed in this work, allowed the extraction of quantitative information on both the wound and peri-wound areas. Starting from the definitions proposed in the literature for the items related to the PWAT score, we extracted a series of features to characterize the wound and peri-wound areas. Each proposed feature was designed to model a different aspect of the wound area and a related PWAT sub-item. In this work, we focused on the "global" estimation of the PWAT score; the correlation between each feature and the theoretical PWAT sub-items will be analyzed in a future work.

The results obtained on the PWAT prediction highlight a statistical agreement between (a subset of) the features extracted from the wound area and the grading scores. The robustness of the predictions on a set of images sampled with a non-rigid acquisition protocol confirms its possible use in clinical practice as a viable decision support system for dermatologists. We stress that the results proposed in this work were obtained with a rigid train/test subdivision of the data, i.e., evaluating the model on a never-seen set of data. Moreover, the entire pipeline produces real-time predictions on standard hardware, making it suitable for standard clinical practice and a valid candidate for smartphone implementation.

The proposed penalized regression model combines the extracted features, finding the optimal weights, i.e., parameters, to associate with each one. Beyond the resulting performance, interpreting the regression coefficients allows ranking the extracted features according to their informative power for the PWAT estimation. As expected, not all the features are equally informative: only 15 provide information on the PWAT score. It is interesting to notice how the most informative features selected by our model involve textural measures of the peri-wound and wound areas in a fairly balanced contribution (ref. Fig. 3c), followed by values related to the exposure and contrast of the wound. The same measures are strictly related to the human perception of the image and its colors. This result confirms the efficiency of a radiomic approach in medical image evaluation and the possibility of applying analogous techniques to photographic medical images. The importance of contrast-based features could be mainly imputed to the necrotic condition of the most severe lesions, which leads to a heterogeneous spread of the image colors. It is also interesting to notice how the classical redness expected in a lesion, quantified by the Park et al. score, plays a quite negligible role in the final prediction. This behavior could be due to a bias in our dataset related to an unbalanced representation of the lesions across different severity grades and color shades.

In our analysis, we intentionally discarded the wound area feature for the PWAT estimation; although this information is included in clinical practice and in the PWAT estimation, its automated computation requires a pre-determined rigid standardization of image acquisition, which could disfavor its applicability to routine clinical examinations. The Deepskin dataset includes wounds in several anatomical positions, with images acquired without strict standardization. Therefore, the correct estimation of the wound area is impossible without a term of comparison or a pre-determined reference. We are currently developing an ad hoc segmentation model to address this issue without losing the ease of use of the proposed method, which will be discussed in future work.

The main limitation of our work is the monocentric source of the data and the intrinsic bias introduced by the reduced patient heterogeneity of the Italian population. A deeper validation of our system could be achieved by analyzing a large-scale multi-center dataset involving patients with wider heterogeneity.

A second limitation of the study could be attributed to a bias in the considered PWAT scores and patients. In the analyzed dataset, the PWAT scores ranged from a minimum of 2 to a maximum of 24, lacking scores from 25 to 32, with an unbalanced subdivision of the value classes. While this reflects real-life values in the Italian population, it could nevertheless represent a limitation for the training of our system.

A further bias could also be present regarding the general image acquisition conditions.
Capturing the images with a wider range of devices and under different light conditions could improve the robustness of the proposed method, as could the introduction of standardized image processing techniques as a preprocessing step of our analysis [31, 32].

All the limitations identified in this manuscript will be addressed in future works, which will improve the image processing pipeline and enlarge the dataset with new records according to clinical availability.

Conclusions

This work introduced a fully automated pipeline for predicting the PWAT grading scale. We combined a previously published automated pipeline for analyzing wound images with a feature extraction approach to quantify information related to the wound healing stage. We performed a robust machine learning analysis of the image features, providing a regression model that correctly predicts the PWAT score with a Spearman's correlation coefficient of 0.85. Moreover, the proposed regression model can provide PWAT predictions on a continuous range of values, i.e., floating-point scores. The possibility of describing wound severity on a finer-grained scale could provide better patient stratification while preserving the same informative power as the original PWAT scale.

A penalized regression model allowed us to deeply investigate the informative power of each extracted feature, ranking them according to their relation to the PWAT score. We proved that Haralick's features play a statistically significant role in the PWAT prediction. Furthermore, the features extracted from the peri-wound areas were as informative as the wound ones. This confirms the importance of defining the correct shape and boundaries of the wound area for the correct automation of the PWAT analysis.

The proposed pipeline is currently used in the Dermatology Unit of IRCCS Sant'Orsola-Malpighi University Hospital of Bologna, Italy, and is still being refined to overcome the current limitations of the method. These improvements will be the subject of future work.

Author contributions N.C., Y.M., C.Z., E.G. and T.B. performed study concept and design; N.C., G.C., D.B., and E.G. performed development of methodology. All authors contributed to the writing, review, and revision of the paper; Y.M., L.R., C.Z., and T.B. provided acquisition and interpretation of data; N.C. provided statistical analysis; B.M.P., E.M., M.S., G.C.C. and T.B. provided material support. All authors read and approved the final paper.

Funding Open access funding provided by Alma Mater Studiorum - Università di Bologna within the CRUI-CARE Agreement. The authors received no specific funding for this work.

Data availability The data used in the current study are available from the corresponding author on reasonable request. The pre-trained model for image segmentation is available in the Deepskin repository (https://github.com/Nico-Curti/Deepskin).
The regression model used for the PWAT estimation is available in the same repository (https://github.com/Nico-Curti/Deepskin).

Declarations

Competing interests The authors declare no competing interests.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

1. Lindholm C, Searle R. Wound management for the 21st century: combining effectiveness and efficiency. Int Wound J. 2016;13(S2):5–15.
2. Olsson M, Järbrink K, Divakar U, Bajpai R, Upton Z, Schmidtchen A, et al. The humanistic and economic burden of chronic wounds: a systematic review. Wound Repair Regen. 2019;27(1):114–25.
3. Stremitzer S, Wild T, Hoelzenbein T. How precise is the evaluation of chronic wounds by health care professionals? Int Wound J. 2007;4(2):156–61.
4. Sibbald RG, Elliott JA, Persaud-Jaimangal R, Goodman L, Armstrong DG, Harley C, et al. Wound Bed Preparation 2021. Adv Skin Wound Care. 2021;34(4):183–95.
5. Haghpanah S, Bogie K, Wang X, Banks PG, Ho CH. Reliability of electronic versus manual wound measurement techniques. Arch Phys Med Rehabil. 2006;87(10):1396–402.
6. Bates-Jensen BM, McCreath HE, Harputlu D, Patlan A. Reliability of the Bates-Jensen wound assessment tool for pressure injury assessment: the pressure ulcer detection study. Wound Repair Regen. 2019;27(4):386–95.
7. Houghton PE, Kincaid CB, Campbell KE, Woodbury MG, Keast DH. Photographic assessment of the appearance of chronic pressure and leg ulcers. Ostomy Wound Manage. 2000;46(4):20–6, 28–30.
8. Lustig M, Schwartz D, Bryant R, Gefen A. A machine learning algorithm for early detection of heel deep tissue injuries based on a daily history of sub-epidermal moisture measurements. Int Wound J. 2022;19(6):1339–48.
9. Wang C, Anisuzzaman DM, Williamson V, Dhar MK, Rostami B, Niezgoda J, et al. Fully automatic wound segmentation with deep convolutional neural networks. Sci Rep. 2020;10(1):21897.
10. Scebba G, Zhang J, Catanzaro S, Mihai C, Distler O, Berli M, et al. Detect-and-segment: a deep learning approach to automate wound image segmentation. Inform Med Unlocked. 2022;29:100884.
11. Wang C, Yan X, Smith M, Kochhar K, Rubin M, Warren SM, et al.
A unified framework for automatic wound segmentation and analysis with deep convolutional neural networks. In: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 2015. p. 2415–8.
12. Foltynski P, Ciechanowska A, Ladyzynski P. Wound surface area measurement methods. Biocybern Biomed Eng. 2021;41(4):1454–65.
13. Chino DYT, Scabora LC, Cazzolato MT, Jorge AES, Traina-Jr C, Traina AJM. Segmenting skin ulcers and measuring the wound area using deep convolutional networks. Comput Methods Programs Biomed. 2020;191:105376.
14. Ghazawi FM, Netchiporouk E, Rahme E, Tsang M, Moreau L, Glassman S, et al. Comprehensive analysis of cutaneous T-cell lymphoma (CTCL) incidence and mortality in Canada reveals changing trends and geographic clustering for this malignancy. Cancer. 2017;123(18):3550–67.
15. Liu Z, Agu E, Pedersen P, Lindsay C, Tulu B, Strong D. Chronic wound image augmentation and assessment using semi-supervised progressive multi-granularity EfficientNet. IEEE Open J Eng Med Biol. 2023;1–17.
16. Nguyen H, Agu E, Tulu B, Strong D, Mombini H, Pedersen P, et al. Machine learning models for synthesizing actionable care decisions on lower extremity wounds. Smart Health. 2020;18:100139.
17. Mombini H, Tulu B, Strong D, Agu E, Nguyen H, Lindsay C, et al. Design of a machine learning system for prediction of chronic wound management decisions. In: Hofmann S, Müller O, Rossi M, editors. Designing for Digital Transformation: Co-Creating Services with Citizens and Industry. Cham: Springer International Publishing; 2020. p. 15–27. (Lecture Notes in Computer Science).
18. Curti N, Merli Y, Zengarini C, Giampieri E, Merlotti A, Dall'Olio D, et al. Effectiveness of semi-supervised active learning in automated wound image segmentation. Int J Mol Sci. 2023;24(1):706.
19. Amparo F, Wang H, Emami-Naeini P, Karimian P, Dana R. The Ocular Redness Index: a novel automated method for measuring ocular injection. Invest Ophthalmol Vis Sci. 2013;54(7):4821.
20. Park IK, Chun YS, Kim KG, Yang HK, Hwang JM. New clinical grading scales and objective measurement for conjunctival injection. Invest Ophthalmol Vis Sci. 2013;54(8):5249.
21. Haralick RM, Shanmugam K, Dinstein I. Textural features for image classification. IEEE Trans Syst Man Cybern. 1973;SMC-3(6):610–21.
22. Anisuzzaman DM, Wang C, Rostami B, Gopalakrishnan S, Niezgoda J, Yu Z. Image-based artificial intelligence in wound assessment: a systematic review. Adv Wound Care. 2022;11(12):687–709.
23. Curti N, Giampieri E, Guaraldi F, Bernabei F, Cercenelli L, Castellani G, et al. A fully automated pipeline for a robust conjunctival hyperemia estimation. Appl Sci. 2021;11(7):2978.
24. Carlini G, Curti N, Strolin S, Giampieri E, Sala C, Dall'Olio D, et al. Prediction of overall survival in cervical cancer patients using PET/CT radiomic features. Appl Sci. 2022;12(12):5946.
25. Filitto G, Coppola F, Curti N, Giampieri E, Dall'Olio D, Merlotti A, et al. Automated prediction of the response to neoadjuvant chemoradiotherapy in patients affected by rectal cancer. Cancers. 2022;14(9):2231.
26. Lambin P, Leijenaar RTH, Deist TM, Peerlings J, de Jong EEC, van Timmeren J, et al.
Radiomics: the bridge between medical imaging and personalized medicine. Nat Rev Clin Oncol. 2017;14(12):749–62.
27. Wang Y, Herrington DM. Machine intelligence enabled radiomics. Nat Mach Intell. 2021;3(10):838–9.
28. Huang EP, O'Connor JPB, McShane LM, Giger ML, Lambin P, Kinahan PE, et al. Criteria for the translation of radiomics into clinically useful tests. Nat Rev Clin Oncol. 2023;20(2):69–82.
29. Hilt DE, Seegrist DW, United States Forest Service, Northeastern Forest Experiment Station (Radnor, Pa). Ridge, a computer program for calculating ridge regression estimates. Vol. 236. Upper Darby, Pa: Dept. of Agriculture, Forest Service, Northeastern Forest Experiment Station; 1977.
30. Hu JY, Wang Y, Tong XM, Yang T. When to consider logistic LASSO regression in multivariate analysis? Eur J Surg Oncol. 2021;47(8):2206.
31. Salvi M, Branciforti F, Veronese F, Zavattaro E, Tarantino V, Savoia P, et al. DermoCC-GAN: a new approach for standardizing dermatological images using generative adversarial networks. Comput Methods Programs Biomed. 2022;225:107040.
32. Barata C, Celebi ME, Marques JS. Improving dermoscopy image classification using color constancy. IEEE J Biomed Health Inform. 2015;19(3):1146–52.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.