AAI_2025_Capstone_Chronicles_Combined

Background Information

Automating pathology detection in chest X-rays is a well-established focus within medical AI, with both academic and commercial groups actively developing deep learning–based classifiers for this purpose. Many of these efforts have leveraged the NIH ChestX-ray14 dataset as a benchmark, applying convolutional neural networks (CNNs) such as ResNet, DenseNet, CheXNet, and more recently EfficientNet. These studies consistently demonstrate that CNN-based models, particularly when fine-tuned with domain-specific techniques, can match or exceed radiologist-level accuracy on the more common thoracic pathologies (Wang et al., 2017; Rajpurkar et al., 2018; Kufel et al., 2023).

CNNs are especially well-suited to this task because of their ability to hierarchically learn spatial features in medical images. Low-level layers extract general patterns like edges and textures, while deeper layers encode increasingly abstract features, such as the structure and density patterns seen in pathological lung tissue. When paired with transfer learning from models pretrained on ImageNet, CNNs can generalize effectively even when the training data has class imbalance or noisy labels, both of which are common in ChestX-ray14.

To explore the design space, we implemented two distinct model families: a custom hybrid CNN that accepts both image and tabular data, and a transfer learning–based model using EfficientNet. The hybrid CNN allowed us to incorporate demographic information (such as patient age or gender), which some studies have shown to improve classification in edge cases (Baltruschat et al., 2019). The EfficientNet model, on the other hand, builds on recent advances in CNN scaling. Its compound scaling strategy jointly optimizes depth, width, and resolution, allowing it to outperform deeper architectures such as ResNet while maintaining faster inference.
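A hybrid architecture of the kind described above can be sketched as a two-branch network: a convolutional trunk for the X-ray and a small MLP for the demographic features, with the two feature vectors concatenated before the classification head. The sketch below is a minimal, hypothetical PyTorch illustration of this pattern — the layer sizes, branch depths, and the choice of PyTorch itself are our assumptions, not the capstone's actual architecture.

```python
import torch
import torch.nn as nn

class HybridCNN(nn.Module):
    """Hypothetical sketch of an image + tabular two-branch classifier."""

    def __init__(self, num_tabular_features=2, num_classes=14):
        super().__init__()
        # Convolutional branch for the (grayscale) chest X-ray image
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool -> 32-dim vector
            nn.Flatten(),
        )
        # MLP branch for demographic features, e.g. age and gender
        self.tab = nn.Sequential(
            nn.Linear(num_tabular_features, 16), nn.ReLU(),
        )
        # Classification head over the concatenated feature vectors
        self.head = nn.Linear(32 + 16, num_classes)

    def forward(self, image, tabular):
        img_feat = self.cnn(image)          # (batch, 32)
        tab_feat = self.tab(tabular)        # (batch, 16)
        return self.head(torch.cat([img_feat, tab_feat], dim=1))

model = HybridCNN()
logits = model(torch.randn(4, 1, 224, 224), torch.randn(4, 2))
print(logits.shape)  # torch.Size([4, 14]) — one logit per ChestX-ray14 class
```

In a real pipeline the convolutional trunk would typically be replaced by a pretrained backbone (e.g. an EfficientNet feature extractor) fine-tuned on the X-ray data, with the tabular branch fused in the same way.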

