MENTAL HEALTH RISK DETECTION USING ML
The Soft Voting Ensemble required no additional training. After training the Logistic Regression and Tabular Neural Network models separately, we used scikit-learn’s VotingClassifier with the voting='soft' option to average their predicted class probabilities (Pedregosa et al., 2011). Averaging the probabilities leverages the complementary strengths of the two models and stabilizes predictions. Equal weights were applied to both classifiers. The full training configuration for each model is summarized in Table 4.
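As a minimal sketch of the soft-voting step, the following assumes two scikit-learn-compatible base estimators; here MLPClassifier stands in for the Keras tabular network (which would need a scikit-learn wrapper in practice), and the toy dataset is a stand-in for the study's features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Toy three-class data as a stand-in for the survey features
X, y = make_classification(n_samples=300, n_features=10, n_classes=3,
                           n_informative=6, random_state=42)

# Base models; MLPClassifier is a stand-in, since VotingClassifier
# requires estimators exposing predict_proba
lr = LogisticRegression(max_iter=1000)
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=42)

# voting='soft' averages predicted class probabilities; equal weights
ensemble = VotingClassifier(estimators=[("lr", lr), ("nn", nn)],
                            voting="soft", weights=[1, 1])
ensemble.fit(X, y)
proba = ensemble.predict_proba(X[:5])
print(proba.shape)  # one averaged probability row per sample, one column per class
```

Because both base models output a full probability distribution over the classes, the averaged rows still sum to one, so the ensemble's argmax prediction is well defined.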
Table 4
Training Settings Per Model

Model: Logistic Regression
Training Details: Batch Size: Mini-batch (via saga solver); Loss Function: Log-loss (cross-entropy); Metrics: Accuracy, F1-score, AUC; Iterations: max_iter = 1000 (iterative optimization); Hyperparameters: Regularization (elastic net, l1_ratio = 0.5, C = 1.0), Polynomial Features (degree = 2); Solver: saga

Model: Tabular Neural Network
Training Details: Batch Size: 256; Epochs: 20; Loss Function: Sparse Categorical Cross-Entropy; Metrics: Accuracy, F1-score, AUC; Validation: Explicit validation using (X_test, y_test); Early Stopping: Patience = 3 on validation loss; Framework: Keras/TensorFlow

Model: Soft Voting Ensemble
Training Details: No additional training beyond base models
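The Logistic Regression configuration in Table 4 can be sketched as a scikit-learn pipeline; the toy dataset and the StandardScaler step are assumptions (scaling is added only to help the saga solver converge), while the solver, penalty, l1_ratio, C, max_iter, and polynomial degree follow the table:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Toy three-class data as a stand-in for the study's features
X, y = make_classification(n_samples=300, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)

# Degree-2 polynomial features feeding an elastic-net logistic regression
# (saga solver, l1_ratio=0.5, C=1.0, max_iter=1000), per Table 4;
# the scaler is an added assumption for solver stability
model = make_pipeline(
    PolynomialFeatures(degree=2),
    StandardScaler(),
    LogisticRegression(solver="saga", penalty="elasticnet",
                       l1_ratio=0.5, C=1.0, max_iter=1000),
)
model.fit(X, y)
print(model.score(X, y))
```

Note that scikit-learn only supports the elasticnet penalty with the saga solver, which is why the two settings appear together in the table.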