AAI_2025_Capstone_Chronicles_Combined

was built with TensorFlow’s VGG16, where all layers were frozen except the final four. Freezing preserved the base model’s learned patterns, allowing most of the learning to occur in the task-specific final layers.

The final layers consisted of a flatten layer, a dense layer with ReLU activation, a dropout layer, and a final dense layer. The flatten layer converted the VGG output into a format suitable for further processing. The dense layer introduced non-linearity, while the dropout layer reduced overfitting by preventing reliance on individual nodes. The final dense layer used a sigmoid activation function for binary output, appropriate for the classification task.
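The architecture described above can be sketched in Keras as follows. The input size, the optimizer, and the 256-unit width of the hidden dense layer are assumptions for illustration; the frozen-layer count, dropout rate, learning rate, and loss function are as stated in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16


def build_model(input_shape=(224, 224, 3), dropout_rate=0.5,
                learning_rate=1e-4, weights="imagenet"):
    """Sketch of the VGG16 transfer-learning classifier described above."""
    base = VGG16(weights=weights, include_top=False, input_shape=input_shape)

    # Freeze all layers except the final four, preserving learned patterns.
    for layer in base.layers[:-4]:
        layer.trainable = False

    model = models.Sequential([
        base,
        layers.Flatten(),                       # reshape VGG feature maps
        layers.Dense(256, activation="relu"),   # non-linearity (256 units assumed)
        layers.Dropout(dropout_rate),           # reduce reliance on individual nodes
        layers.Dense(1, activation="sigmoid"),  # binary output
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```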

Following the success of the custom CNN, binary cross-entropy was maintained as the loss function. Hyperparameter tuning tested variations in learning and dropout rates; ultimately, a dropout rate of 0.5 and a learning rate of 0.0001 were selected for their consistent accuracy during testing.
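The tuning procedure can be sketched as a small grid search. The candidate values other than the selected 0.5 and 0.0001 are illustrative assumptions, and `evaluate` stands in for a full train-and-validate run.

```python
from itertools import product

# Candidate grids; values besides the selected 0.5 and 1e-4 are assumed.
LEARNING_RATES = [1e-3, 1e-4, 1e-5]
DROPOUT_RATES = [0.3, 0.5]


def grid_search(evaluate):
    """evaluate(lr, dropout) -> validation accuracy.

    Returns the best (accuracy, learning_rate, dropout_rate) triple.
    """
    best = None
    for lr, dr in product(LEARNING_RATES, DROPOUT_RATES):
        acc = evaluate(lr, dr)
        if best is None or acc > best[0]:
            best = (acc, lr, dr)
    return best
```

In practice each `evaluate` call would train the model on the training split and report validation accuracy, with the highest-scoring pair kept for the final runs.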

The same experimental procedures were used for training this model to maintain consistency in final comparisons. The model was trained on real images, balanced real images, and synthetic datasets generated by ACGAN and StyleGAN, with final tests conducted using real images to simulate real-world deployment.

Generative Adversarial Network

GAN models were built from scratch and compared with transfer learning from StyleGAN, which was hypothesized to produce high-quality synthetic images with lower effort.

