ADS Capstone Chronicles Revised
Table 6 Classification Model Analysis for Menu and Individual Food Recommendations
5.3 Grid Search Optimization for Regression Models

Grid search optimization was conducted to refine the two strongest baseline regression models: Linear Regression and the XGBoost Regressor. The optimization process was evaluated with mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), and the R-squared (R²) score to assess predictive accuracy and how well each model captured variability in the data. For the menu recommendations dataset, the XGBoost Regressor slightly outperformed Linear Regression. As seen in Table 7, XGBoost achieved an MSE of 0.027433 compared to Linear Regression's MSE of 0.027434. While the improvement in MSE is marginal, XGBoost also demonstrated a slight advantage in R² score, indicating that it was better able to capture subtle patterns in the menu dataset, which has limited variability due to shared ingredients among menu items.

For the individual food recommendations dataset, the results also favored XGBoost. Table 7 shows that XGBoost achieved an MSE of 0.044025, marginally lower than Linear Regression's MSE of 0.044101. More notably, XGBoost produced a positive R² score (0.000067), while Linear Regression's remained negative (-0.001664). This suggests that XGBoost was better equipped to handle the greater variability in the individual food dataset, capturing the complexity of the nutritional content and patient scores more effectively. Overall, XGBoost's consistent edge on both datasets underscores its adaptability and robustness, even when a baseline model like Linear Regression is highly competitive.
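A minimal sketch of the grid search workflow described in this section is shown below. The paper's actual hyperparameter grid and datasets are not given, so this example uses synthetic data and scikit-learn's GradientBoostingRegressor as a stand-in for the XGBoost Regressor; the parameter values are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the recommendation datasets (assumption).
X, y = make_regression(n_samples=300, n_features=8, noise=0.5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Illustrative hyperparameter grid; the paper's actual grid is not specified.
param_grid = {
    "n_estimators": [100, 200],
    "max_depth": [2, 3],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(
    GradientBoostingRegressor(random_state=42),
    param_grid,
    scoring="neg_mean_squared_error",  # grid search minimizes MSE
    cv=3,
)
search.fit(X_train, y_train)

# Compare the tuned model against the Linear Regression baseline
# on the same held-out split, using the section's four metrics.
models = {
    "Linear Regression": LinearRegression().fit(X_train, y_train),
    "Tuned boosted trees": search.best_estimator_,
}
for name, model in models.items():
    pred = model.predict(X_test)
    mse = mean_squared_error(y_test, pred)
    print(
        f"{name}: MSE={mse:.6f} RMSE={np.sqrt(mse):.6f} "
        f"MAE={mean_absolute_error(y_test, pred):.6f} "
        f"R2={r2_score(y_test, pred):.6f}"
    )
```

Scoring with `neg_mean_squared_error` makes `GridSearchCV` select the parameter combination with the lowest cross-validated MSE, mirroring the selection criterion used for the tables above.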