AAI_2025_Capstone_Chronicles_Combined

CNNs were selected over conventional techniques due to their ability to automatically

learn hierarchical features, enhance diagnostic accuracy, and offer versatility across medical

imaging modalities (Esteva et al., 2019). By automatically extracting relevant features from

images, CNNs reduce the need for manual feature engineering or domain-specific expertise,

such as radiologist input. Because base CNN models are relatively simple, researchers have explored more advanced architectures to further improve performance and accuracy.

In this project, CNN models process X-ray images through multiple layers to identify and

analyze patterns relevant to disease detection. The input layer resizes and normalizes raw X-rays

to match model specifications. Convolutional layers apply filters to detect spatial features,

starting with simple edges and progressing to complex structures such as lung anatomy and

pneumonia indicators. The ReLU activation function introduces non-linearity, enabling the

model to learn intricate patterns, while pooling layers reduce the feature map’s dimensions,

retaining critical information. Extracted features are then flattened and passed through fully

connected layers, refining the model’s understanding for classification. The output layer

produces probability scores for each class.
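The layer-by-layer flow described above (input normalization, convolution, ReLU, pooling, flattening, and a fully connected output with class probabilities) can be sketched in plain NumPy. The edge filter, 64x64 input size, and two-class dense layer below are illustrative assumptions for exposition, not the project's actual architecture:

```python
import numpy as np

def conv2d(image, kernel):
    # Convolutional layer: slide a filter over the image to detect spatial features.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # ReLU activation introduces non-linearity.
    return np.maximum(0, x)

def max_pool(x, size=2):
    # Pooling layer: shrink the feature map while retaining the strongest responses.
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    # Output layer: convert scores into probabilities for each class.
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
xray = rng.random((64, 64))                  # stand-in for a resized, normalized X-ray
edge_filter = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])      # simple vertical-edge detector (assumed)

features = max_pool(relu(conv2d(xray, edge_filter)))  # conv -> ReLU -> pooling
flat = features.flatten()                             # flatten for the dense layer
W = rng.normal(scale=0.01, size=(2, flat.size))       # fully connected: 2 classes
probs = softmax(W @ flat)                             # probability per class
print(features.shape, probs.shape)
```

In a trained network the filter weights and the dense-layer matrix are learned from labeled X-rays rather than fixed by hand; early filters tend toward edge detectors like the one above, with deeper layers composing them into larger anatomical patterns.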

Generative Adversarial Networks (GANs) for Dataset Augmentation

Generative Adversarial Networks (GANs) are a relatively new class of AI model that creates synthetic data by learning the properties and characteristics of real data. A GAN consists of two models, a generator and a discriminator, that are trained in tandem and pitted against each other in a zero-sum game. During training, the generator attempts to fool the

discriminator by producing increasingly realistic images, while the discriminator continuously

