
to obtain a more reliable estimate. For this experiment, loss and accuracy metrics are omitted to focus solely on training-time performance; later sections compare the frameworks on more common machine learning metrics.
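Such an estimate is typically obtained by discarding a few warm-up runs (which absorb one-time costs such as CUDA context creation) and averaging wall-clock time over several repeats, synchronizing the GPU before stopping the clock since kernels launch asynchronously. The harness below is a minimal sketch of that approach; the function name and repeat counts are illustrative assumptions, not the report's actual benchmarking code.

    import time
    import torch

    def time_training(run_fn, n_warmup=3, n_repeats=10):
        # run_fn is a hypothetical zero-argument callable that performs
        # one full training run on the GPU.
        for _ in range(n_warmup):
            run_fn()                      # warm-up: absorb one-time setup costs
        if torch.cuda.is_available():
            torch.cuda.synchronize()      # drain any queued GPU work first
        times = []
        for _ in range(n_repeats):
            start = time.perf_counter()
            run_fn()
            if torch.cuda.is_available():
                # GPU kernels run asynchronously; wait for completion
                # before stopping the clock.
                torch.cuda.synchronize()
            times.append(time.perf_counter() - start)
        return sum(times) / len(times)    # mean training time in seconds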

5.1.1.1) Small Network

Figure 5.1 shows the analysis for a network with a single hidden layer of 16 nodes. As TurbaNet is optimized for small-scale architectures, its performance in this case is exceptional: its training time remains essentially constant as the swarm size increases, whereas PyTorch shows the expected trend that doubling the number of networks doubles the training time.

Figure 5.1 GPU Performance Comparison for Small Networks
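One mechanism that can produce this flat scaling is vectorizing over the swarm dimension, so that all member networks are evaluated in a single batched call rather than in a Python loop over models. Whether TurbaNet works this way internally is an assumption here; the sketch below only illustrates the general technique using PyTorch's own torch.func utilities, with the network shape of Figure 5.1 and an arbitrary swarm size.

    import copy
    import torch
    import torch.nn as nn
    from torch.func import stack_module_state, functional_call, vmap

    # Small network matching Figure 5.1: one hidden layer of 16 nodes.
    # The input/output widths (8 and 1) are illustrative assumptions.
    def make_net():
        return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

    swarm_size = 32                                # illustrative swarm size
    models = [make_net() for _ in range(swarm_size)]
    params, buffers = stack_module_state(models)   # stack weights along a new swarm dim

    base = copy.deepcopy(models[0]).to("meta")     # stateless template module

    def fwd(p, b, x):
        return functional_call(base, (p, b), (x,))

    x = torch.randn(64, 8)                         # one shared minibatch for every member
    # A single vectorized call evaluates all swarm members at once; the number
    # of kernel launches per layer stays fixed as swarm_size grows.
    out = vmap(fwd, in_dims=(0, 0, None))(params, buffers, x)
    print(out.shape)                               # torch.Size([32, 64, 1])

Because the swarm is folded into the batch dimension, the GPU performs the same number of (larger) kernel launches regardless of swarm size, which is consistent with the flat curve in Figure 5.1 until the device saturates.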

5.1.1.2) Large Network

Figure 5.2 shows the same experiment performed on a larger network, with 256 hidden nodes and 2 hidden layers, which requires considerably more memory and compute. While PyTorch continues to scale
