AAI_2025_Capstone_Chronicles_Combined

For the small-scale test (results in Figure 5.3), TurbaNet maintains efficient parallelism up to approximately 40 networks, after which it reverts to near-linear scaling. It is again worth pointing out that although its runtime does begin to grow with swarm size in this regime, it grows roughly 17x more slowly than PyTorch's.
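
The efficient-parallelism regime comes from advancing every network in the swarm with a single batched operation instead of a Python loop over models. TurbaNet's actual API is not shown here, so the sketch below only illustrates the general batching idea with NumPy: the function names, sizes, and the einsum formulation are illustrative assumptions, not the library's implementation.

```python
import numpy as np

def forward_loop(weights, inputs):
    # One forward pass per network via a Python loop
    # (the per-model pattern a naive PyTorch harness would use).
    return np.stack([x @ w for w, x in zip(weights, inputs)])

def forward_batched(weights, inputs):
    # All S networks advanced in one batched matmul
    # (the vmap-style batching a swarm library can exploit).
    return np.einsum('sbi,sij->sbj', inputs, weights)

rng = np.random.default_rng(0)
S, B, N = 8, 4, 16  # swarm size, batch size, layer width (illustrative)
W = rng.standard_normal((S, N, N))
X = rng.standard_normal((S, B, N))

# Both formulations compute the same result; only the dispatch differs.
assert np.allclose(forward_loop(W, X), forward_batched(W, X))
```

For small networks the batched form amortizes dispatch overhead across the whole swarm, which is consistent with the flat region seen in Figure 5.3 before hardware saturation forces near-linear scaling.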

Figure 5.3 CPU Performance Comparison for Small Networks

5.1.2.2) Large Network

For large network sizes (results shown in Figure 5.4), performance degrades rapidly and TurbaNet is outclassed by PyTorch, which is unsurprising given that PyTorch was built with large networks in mind. Here, TurbaNet's runtime increases linearly with swarm size from the very start, and in fact more steeply than PyTorch's, the opposite of what was observed in the previous experiments. Clearly, the library is coming up against the bounds of the hardware and is not able to
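
One way to see why the hardware bound arrives immediately for large networks is simple memory arithmetic: a batched swarm keeps every replica's parameters resident at once, so the footprint scales with swarm size. The parameter count below is a hypothetical example, not a configuration from the report.

```python
def swarm_param_bytes(swarm_size, params_per_net, bytes_per_param=4):
    # Total parameter memory when all replicas are resident simultaneously,
    # as in a batched/vmapped swarm (float32 -> 4 bytes per parameter).
    return swarm_size * params_per_net * bytes_per_param

P = 2_000_000  # hypothetical "large network" parameter count
print(swarm_param_bytes(50, P) / 1e9)  # footprint in GB  → 0.4
```

A sequential per-model loop only needs one copy of the working set at a time, which is one plausible reason the batched approach loses its advantage, and even falls behind, once individual networks are large.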

