AAI_2025_Capstone_Chronicles_Combined

StyleGAN is a unique GAN implementation with a heavily modified generator capable of creating images across multiple layers of detail. It uses a style-based generator architecture with a mapping network that transforms a latent vector (z) into an intermediate latent space (w) (Figure 5). The latent code is injected at each convolutional layer via Adaptive Instance Normalization (AdaIN), helping decouple high-level attributes from stochastic variation (Karras et al., 2021). StyleGAN also employs progressive growing, incrementally training higher-resolution image representations, which stabilizes training and enables the capture of complex features (Brownlee, 2021).

Figure 5. StyleGAN Generator Model Architecture. Taken from “A Style-Based Generator Architecture for Generative Adversarial Networks” by Karras et al., 2019, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

To fine-tune StyleGAN3, the StyleGAN repository was cloned into a Google Colab environment to customize the training process. A random sample of 1,000 images from both the NORMAL and PNEUMONIA training datasets was selected, resized to 128×128,


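The sampling-and-resizing step described above can be sketched as follows. The directory paths, file extension, and output naming here are assumptions for illustration, since the report does not show its exact dataset layout.

```python
import random
from pathlib import Path
from PIL import Image

# Hypothetical paths; the report's actual directory layout isn't shown.
SRC_DIRS = [Path("chest_xray/train/NORMAL"), Path("chest_xray/train/PNEUMONIA")]
OUT_DIR = Path("stylegan_data")
SAMPLE_SIZE = 1000

def prepare(src_dirs=SRC_DIRS, out_dir=OUT_DIR, n=SAMPLE_SIZE, seed=42):
    """Randomly sample up to n images per class and resize them to 128x128."""
    out_dir.mkdir(parents=True, exist_ok=True)
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    for src in src_dirs:
        files = sorted(src.glob("*.jpeg"))
        for f in rng.sample(files, min(n, len(files))):
            img = Image.open(f).convert("RGB").resize((128, 128), Image.LANCZOS)
            # Prefix with the class name so both classes share one folder.
            img.save(out_dir / f"{src.name}_{f.name}")
```

In practice the resized folder would then be packaged for training (the StyleGAN3 repository provides its own dataset-preparation tooling for this).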