AAI_2025_Capstone_Chronicles_Combined

‭ResolveAI‬

The dense layers use ReLU in the intermediate layer to introduce non-linearity, mitigate the vanishing-gradient problem, and enhance feature learning. The final dense layer applies a sigmoid activation to transform the output into a probability score, which makes it well suited to binary classification: distinguishing low-priority tickets from medium/high-priority ones.

The hyperparameter tuning process for the binary classifier employs a Random Search strategy using a custom HyperModel class. Several key hyperparameters are optimized, including the embedding dimension (32 to 128), the number of LSTM units (16 to 128), the dropout rate (0.2 to 0.6), and the number of units in the dense layer (64 to 256). The tuner runs 10 trials, executing each configuration 3 times to account for variability in training performance, with the objective of maximizing validation accuracy by automatically exploring different combinations of these parameters. Once tuning is complete, the best-performing model is selected, summarized, and saved for further evaluation and deployment.

Model Training Procedure

The model training procedure follows a systematic approach, beginning with robust text preprocessing and feature engineering. The text is first tokenized by collecting all words from the "combined_request" column and counting their occurrences, which allows rare words (those appearing fewer than five times) to be filtered out. Based on these counts, the vocabulary size is determined and capped at a maximum of 20,000 words, ensuring that only frequently used words are included. The maximum sequence length is then set by analyzing the distribution of text lengths and choosing the 95th percentile, which avoids the influence of extreme outliers.
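The role of the final sigmoid layer can be illustrated with a minimal sketch: the raw output (logit) of the last dense layer is squashed into a probability, which is then thresholded into the two ticket classes. The threshold of 0.5 here is a common default, not something stated in the text:

```python
import math

def sigmoid(x):
    """Squash a raw logit into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def classify(logit, threshold=0.5):
    """Map the final dense layer's raw output to a probability and a
    binary label: 0 = low priority, 1 = medium/high priority.
    The 0.5 threshold is an illustrative default."""
    probability = sigmoid(logit)
    return probability, int(probability >= threshold)
```

A strongly negative logit yields a probability near 0 (low priority), while a strongly positive one yields a probability near 1 (medium/high priority).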
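The random-search logic described above can be sketched in plain Python. This is a stand-in for a Keras Tuner `RandomSearch` run, not the project's actual code: the ranges, trial count, and executions-per-trial match the text, but the objective here is any callable that scores a configuration, whereas the real pipeline would build and train the Embedding → LSTM → Dense(ReLU) → Dense(sigmoid) model and return its validation accuracy:

```python
import random

# Search space from the text: embedding dimension, LSTM units,
# dropout rate, and dense-layer units.
SEARCH_SPACE = {
    "embedding_dim": (32, 128),
    "lstm_units": (16, 128),
    "dropout": (0.2, 0.6),
    "dense_units": (64, 256),
}

def sample_config(rng):
    """Draw one hyperparameter configuration from the search space."""
    return {
        "embedding_dim": rng.randint(*SEARCH_SPACE["embedding_dim"]),
        "lstm_units": rng.randint(*SEARCH_SPACE["lstm_units"]),
        "dropout": rng.uniform(*SEARCH_SPACE["dropout"]),
        "dense_units": rng.randint(*SEARCH_SPACE["dense_units"]),
    }

def random_search(objective, trials=10, executions_per_trial=3, seed=0):
    """Evaluate `trials` random configurations, averaging
    `executions_per_trial` runs of each to smooth out training
    variance, and return the best (config, score) pair."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = sample_config(rng)
        score = sum(objective(config, rng)
                    for _ in range(executions_per_trial)) / executions_per_trial
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```

Averaging several executions per configuration is what makes the search robust to run-to-run variability in training, at the cost of roughly tripling the compute per trial.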
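The tokenization and sequence-length steps above can be sketched with the standard library. The thresholds follow the text; the whitespace tokenizer and the simple percentile-by-index helper are illustrative stand-ins for whatever tokenizer and percentile routine the pipeline actually uses:

```python
from collections import Counter

MIN_WORD_COUNT = 5       # words appearing fewer than 5 times are dropped
MAX_VOCAB_SIZE = 20_000  # hard cap on vocabulary size

def build_vocab(texts):
    """Count words across the combined_request texts, drop rare words,
    and keep at most MAX_VOCAB_SIZE of the most frequent ones."""
    counts = Counter(word for text in texts for word in text.split())
    frequent = [w for w, c in counts.most_common() if c >= MIN_WORD_COUNT]
    return frequent[:MAX_VOCAB_SIZE]

def max_sequence_length(texts, percentile=0.95):
    """Pick the maximum sequence length as the 95th percentile of text
    lengths, so extreme outliers do not inflate the padded length."""
    lengths = sorted(len(text.split()) for text in texts)
    index = min(int(percentile * len(lengths)), len(lengths) - 1)
    return lengths[index]
```

Capping the vocabulary and the sequence length together bounds both the embedding matrix size and the padded input width, which keeps the model's memory footprint predictable.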

