underscoring its precision in generating binary masks for localized building regions. The U-Net model's performance also demonstrates its ability to handle complex datasets with varying building shapes and sizes. Because its architecture integrates low-level and high-level feature maps, both fine details and overall context are captured. This adaptability allows U-Net to maintain high performance even in this challenging scenario, and its results highlight the model's potential for integration into the automated workflows envisioned for this initiative, enhancing the speed and accuracy of geospatial analyses in critical applications.
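The integration of low-level and high-level feature maps described above happens through U-Net's skip connections, where encoder features are concatenated with upsampled decoder features at each resolution. The report does not show the implementation or name the framework used; the following is a minimal sketch of a single decoder step in PyTorch, with channel sizes and names chosen purely for illustration.

import torch
import torch.nn as nn

class UNetDecoderBlock(nn.Module):
    """One U-Net decoder step: upsample, concatenate the encoder skip
    feature map, then refine with two 3x3 convolutions.
    Illustrative sketch only; channel sizes are assumptions."""

    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        # Transposed convolution doubles the spatial resolution.
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        # After concatenation the block sees out_ch + skip_ch channels.
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)                   # high-level, coarse context
        x = torch.cat([x, skip], dim=1)  # low-level, fine detail from the encoder
        return self.conv(x)

# Example: fuse a 256-channel bottleneck map with a 128-channel encoder map.
block = UNetDecoderBlock(in_ch=256, skip_ch=128, out_ch=128)
decoder_feat = torch.randn(1, 256, 32, 32)   # coarse decoder input
encoder_feat = torch.randn(1, 128, 64, 64)   # matching encoder skip feature
out = block(decoder_feat, encoder_feat)      # -> shape (1, 128, 64, 64)

The concatenation is what lets the final binary building mask respect fine boundaries while still drawing on the broader context captured deeper in the network.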
Although both models are effective for semantic segmentation, U-Net outperformed FCN in recall, IoU, and pixel accuracy (Table 2), making it better suited to tasks that require precise boundary delineation. FCN's efficiency and high precision, however, make it a valuable alternative when computational simplicity is prioritized. The evaluation underscores the importance of selecting metrics appropriate to the task, balancing accuracy, computational cost, and robustness to optimize performance in real-world applications.

5.3 Damage Classification

The performance of the CNN and MobileNet models was evaluated using several key metrics for the building damage classification task. Accuracy, precision, recall, F1-score, and IoU are the most suitable measures of a model's ability to classify and segment damage accurately, while robustness accuracy and robustness loss indicate whether the models not only perform well under ideal conditions but can also handle real-world variations. Together, these metrics provide a comprehensive evaluation of the models in alignment with the objectives of the building damage classification task.

The SimpleCNN model, with its hierarchical feature extraction through convolutional layers and max-pooling, was assessed using precision, recall, F1-score, IoU, pixel accuracy, and MSE. Although SimpleCNN performed reasonably well, its metrics showed lower accuracy and segmentation performance than MobileNet, reflecting its simpler architecture. MobileNet, optimized for efficiency in resource-constrained environments, outperformed SimpleCNN and ResNet50 in key metrics such as precision (0.7041), recall (0.7034), and F1-score (0.7029). These metrics highlight MobileNet's strong ability to accurately classify building damage. Furthermore, as seen in Table 3, MobileNet achieved the highest IoU (0.5461),
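The project's own evaluation code is not reproduced in this report. As an illustration of how the classification metrics above can be computed, the sketch below uses scikit-learn with macro averaging over hypothetical damage classes; both the library choice and the averaging scheme are assumptions, not the report's documented setup.

import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical ground-truth and predicted damage classes for eight building crops,
# encoded as 0=no-damage, 1=minor, 2=major, 3=destroyed (labels are illustrative).
y_true = np.array([0, 0, 1, 2, 3, 3, 1, 2])
y_pred = np.array([0, 1, 1, 2, 3, 2, 1, 2])

accuracy = accuracy_score(y_true, y_pred)
# Macro averaging weights each damage class equally, regardless of class frequency.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"accuracy={accuracy:.4f}  precision={precision:.4f}  "
      f"recall={recall:.4f}  f1={f1:.4f}")

IoU and pixel accuracy, by contrast, are mask-level quantities; a sketch deriving them from per-pixel confusion counts follows Table 2.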
Table 2
Localization Performance Metrics
Metric                 FCN          U-Net
Precision              0.9417       0.9122
Recall                 0.8862       0.9561
F1-Score               0.9131       0.9336
IoU                    0.8401       0.8755
Pixel Accuracy         0.9846       0.9876
MSE                    0.0102       0.0094
True Positives         1296976      1399334
False Positives        80336        134716
False Negatives        166557       64199
True Negatives         14446915     14392535
Robustness Loss        0.7864       0.733
Robustness Accuracy    0.8794       0.8753
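The headline localization metrics in Table 2 follow directly from the per-pixel confusion counts reported in the same table. The Python sketch below (not the project's own evaluation code) computes precision, recall, F1-score, IoU, and pixel accuracy from those counts; plugging in the FCN column reproduces the reported values to four decimal places.

def localization_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Derive segmentation metrics from per-pixel confusion counts."""
    precision = tp / (tp + fp)   # fraction of predicted building pixels that are correct
    recall = tp / (tp + fn)      # fraction of true building pixels that are recovered
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)    # intersection over union of predicted and true masks
    pixel_accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "iou": iou,
        "pixel_accuracy": pixel_accuracy,
    }

# FCN counts from Table 2.
fcn = localization_metrics(tp=1_296_976, fp=80_336, fn=166_557, tn=14_446_915)
print({k: round(v, 4) for k, v in fcn.items()})
# {'precision': 0.9417, 'recall': 0.8862, 'f1': 0.9131, 'iou': 0.8401, 'pixel_accuracy': 0.9846}

MSE and the robustness metrics in Table 2 depend on the raw predictions and the perturbation protocol, so they cannot be recovered from the confusion counts alone.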