AAI_2025_Capstone_Chronicles_Combined

Evaluating Deep Learning Model Convergence in Chess via Nash Equilibria

11 from 2024 has over 10 million games, so the model was not trained on a full pass through the dataset. A test set was created by sampling the bottom 64k positions from the dataset; these games were not used during training. Many efforts were made to improve the accuracy of the model. Initially, without engineered pseudo-legal move features, the network could not break 57% accuracy on the training data. The inclusion of material imbalance as a feature also did not seem to affect model performance; perhaps this is a feature that is easily inferred from the position tensor. Finally, transformer architectures were tried, but training was so slow that they had to be abandoned for the sake of time. Dropout in this study is also fairly low at 25%, but I found that the training data was already so difficult to fit that increasing this regularization further produced extremely subpar results.
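One way to picture the pseudo-legal move feature engineering described above is as an extra input plane stacked onto the usual one-hot piece planes. The sketch below is a hypothetical, minimal illustration (not the study's actual encoder): it assumes a 12-plane one-hot piece encoding plus a 13th plane marking pseudo-legal target squares, and for brevity generates those targets for knights only.

```python
import numpy as np

# Hypothetical plane order: white pieces then black pieces.
PIECES = "PNBRQKpnbrqk"

# The eight knight move offsets (rank delta, file delta).
KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def encode(board):
    """board: dict mapping (rank, file) -> piece character.

    Returns a (13, 8, 8) float tensor: 12 one-hot piece planes
    plus one pseudo-legal-move plane. Only knight targets are
    generated here, to keep the sketch short; a real encoder
    would cover every piece type.
    """
    planes = np.zeros((13, 8, 8), dtype=np.float32)
    for (r, f), piece in board.items():
        planes[PIECES.index(piece), r, f] = 1.0
        if piece.upper() == "N":
            for dr, df in KNIGHT_OFFSETS:
                tr, tf = r + dr, f + df
                if 0 <= tr < 8 and 0 <= tf < 8:
                    # Mark the square as a pseudo-legal target,
                    # ignoring whether it is occupied or leaves
                    # the king in check (hence "pseudo-legal").
                    planes[12, tr, tf] = 1.0
    return planes

# A lone white knight on b1 (rank 0, file 1) has three in-board
# pseudo-legal targets: a3, c3, and d2.
x = encode({(0, 1): "N"})
```

Feeding such a plane to the network spares it from having to rediscover the movement rules from raw piece positions, which is one plausible reason the feature lifted accuracy past the 57% plateau.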

