AAI_2025_Capstone_Chronicles_Combined
Figure 3.3: Sample Images from MNIST Handwritten Digits Dataset
In terms of data pre-processing, this dataset also requires few steps. The images are ingested from a CSV file, checked for missing values, and the pixel values are normalized to the range [0, 1]. Additional augmentation steps, such as skewing or slightly rotating the images, could greatly expand the effective training data; however, this project is not concerned with optimizing model performance, only with comparing performance relative to PyTorch. As with the earlier test problem, the data must be duplicated for each network being trained before it is passed to the training function. Different data could be supplied to each network to obtain distinct predictors after training, but for the purposes of this project, identical data is sent to every network, since the goal is to measure throughput rather than model performance.
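The pre-processing described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the CSV layout (label in the first column, 784 pixel columns) and the function names are assumptions.

```python
import numpy as np

def load_and_preprocess(csv_path):
    # Hypothetical MNIST CSV layout: first column is the label,
    # remaining 784 columns are pixel intensities in [0, 255].
    data = np.loadtxt(csv_path, delimiter=",", skiprows=1)
    labels = data[:, 0].astype(np.int64)
    pixels = data[:, 1:].astype(np.float32)
    # Check for missing values before normalizing.
    assert not np.isnan(pixels).any(), "missing values in pixel data"
    pixels /= 255.0  # normalize pixel values to [0, 1]
    return pixels, labels

def duplicate_for_networks(pixels, labels, n_networks):
    # Stack identical copies of the data, one per network, so that
    # measured differences reflect throughput rather than data variation.
    batched_x = np.stack([pixels] * n_networks)
    batched_y = np.stack([labels] * n_networks)
    return batched_x, batched_y
```

Because every network receives the same batch, any timing difference between the batched run and a sequential PyTorch baseline can be attributed to the training machinery itself.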
4.) Experimental Methods
To evaluate the performance of TurbaNet, a series of experiments was designed to measure both training efficiency and model accuracy in comparison to traditional PyTorch models. The experiments focused on two main objectives: recreating prior benchmarks to validate TurbaNet's functionality, and systematically comparing its computational performance under varying conditions.