AAI_2025_Capstone_Chronicles_Combined

models do not use adversaries to guide the generation process; instead, noise is

progressively added to training images, and the model learns to reverse this process,

removing noise step by step to produce the desired image.
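The forward "noising" half of this process has a well-known closed form: a training image can be jumped directly to any noise level t as x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps. The sketch below illustrates that step only; the linear beta schedule and image size are illustrative assumptions, not details from this study.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form: a progressively noisier copy of x0."""
    alpha_bar = np.cumprod(1.0 - betas)[t]        # cumulative fraction of signal kept
    eps = rng.standard_normal(x0.shape)           # Gaussian noise to mix in
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
x0 = rng.random((28, 28))                         # stand-in for a training image
betas = np.linspace(1e-4, 0.02, 1000)             # illustrative linear noise schedule
x_early = forward_diffuse(x0, 10, betas, rng)     # t small: still mostly image
x_late = forward_diffuse(x0, 999, betas, rng)     # t large: essentially pure noise
```

A denoising network is then trained to predict the added noise eps from x_t, which is the "remove noise" direction used at generation time.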

Diffusion models have gained popularity in image generation applications and are

used by platforms such as DALL-E and Bing Designer. In medical contexts, diffusion

models have shown promise in generating high-quality synthetic images that can pass as

real (Ali et al., 2023). However, they also tend to produce identifiable fakes with

unconvincing features, indicating that further refinement is needed. Additionally, diffusion

models generally require longer training times than GANs, as there is no adversary to

accelerate the learning process. 

Experimental Methods

This study applies multiple machine learning models for both image classification and

synthetic image generation. For classification, a custom Convolutional Neural Network (CNN)

was developed to classify chest X-ray images labeled as NORMAL or PNEUMONIA, alongside

experiments fine-tuning a pre-trained VGG model. For image generation, efforts included both a

custom GAN and a pre-built StyleGAN to produce synthetic images for data augmentation.
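To make the classification pipeline concrete, the sketch below traces the computation a small CNN performs on a grayscale image: convolution, ReLU, max pooling, then a dense layer with a sigmoid producing P(PNEUMONIA). This is a minimal single-filter illustration in NumPy, not the study's actual architecture; the image size and weights are placeholders.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D cross-correlation of one single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling, discarding any ragged border."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def cnn_forward(image, kernel, weights, bias):
    """One conv -> ReLU -> max-pool -> flatten -> dense -> sigmoid pass."""
    feat = np.maximum(conv2d(image, kernel), 0.0)   # convolution + ReLU
    flat = max_pool(feat).ravel()                   # 2x2 pooling, then flatten
    logit = flat @ weights + bias                   # fully connected layer
    return 1.0 / (1.0 + np.exp(-logit))             # sigmoid: P(PNEUMONIA)

rng = np.random.default_rng(0)
img = rng.random((8, 8))                 # stand-in for a grayscale chest X-ray
kernel = rng.standard_normal((3, 3))     # one learned filter
w = rng.standard_normal(3 * 3)           # (8-3+1)=6 -> pool by 2 -> 3x3 features
p = cnn_forward(img, kernel, w, 0.0)
```

A real classifier stacks many such filters and layers and learns the kernel and dense weights by backpropagation on the labeled NORMAL/PNEUMONIA images.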

Based on prior research, these models were well-suited to address the problem. CNN

models demonstrate strong performance in image classification tasks and were a natural fit.

Fine-tuning a pre-trained VGG model offered the potential to boost accuracy and other performance

metrics, since its prior training on large-scale image data reduces the burden of learning low-level features from scratch. The

