values, rolling averages, temporal features such as time of day, and patient demographics. The architecture captures both short-term glucose fluctuations and long-term trends, addressing the temporal nature of CGM data. The model outputs continuous glucose value predictions, optimized for accuracy in both normal ranges and hypo-/hyperglycemic events.

For the meal analysis component, Vision Transformer (ViT) models and the CNN-based YOLOv8 model were used. The CNN-based models include convolutional layers for feature extraction followed by dense layers for regression tasks. The YOLOv8 architecture is pre-trained on the COCO8 dataset, with additional dense layers fine-tuned for carbohydrate prediction. The ViT divides food images into patches and applies self-attention mechanisms to extract global and local features; dense layers are added as a regression head to predict macronutrients and glycemic load.

For the multilabel classification approach, we used MobileNetV3Large, a modern, resource-efficient CNN that performs well on image recognition tasks (Howard, 2019). It is also available pre-trained on ImageNet, making it well suited for transfer learning (a minimal sketch of this setup appears at the end of this section). These models require input food images to be resized to 224 × 224 pixels. Data augmentation, such as rotations and brightness adjustments, was applied to increase model robustness. The model was then used for inference to predict carbohydrates, proteins, fats, and glycemic load.

Our other LSTM model, for HbA1c prediction, was a 50-unit bidirectional LSTM that processed glucose trends both forward and backward. This was followed by a dropout layer with a rate of 0.3, another LSTM layer with 30 units, and a dense layer with 25 nodes using ReLU activation. The final output layer predicts HbA1c as a continuous variable. Its input features include time-series glucose data, rolling statistics, wavelet coefficients, and time spent in different
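To make the HbA1c architecture concrete, the following is a minimal Keras sketch of the layer stack just described. The layer sizes and dropout rate follow the text; the sequence length, feature count, optimizer, and loss function are illustrative assumptions rather than the exact settings used.

```python
# Minimal sketch of the bidirectional LSTM for HbA1c regression described above.
# Layer sizes (50, 30, 25) and dropout (0.3) follow the text; SEQ_LEN, N_FEATURES,
# optimizer, and loss are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 288     # assumed: one day of 5-minute CGM readings
N_FEATURES = 8    # assumed: glucose values, rolling statistics, wavelet coefficients, etc.

def build_hba1c_lstm():
    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        # 50-unit bidirectional LSTM reads glucose trends forward and backward
        layers.Bidirectional(layers.LSTM(50, return_sequences=True)),
        layers.Dropout(0.3),
        # second, 30-unit LSTM collapses the sequence into a single representation
        layers.LSTM(30),
        layers.Dense(25, activation="relu"),
        # single linear output: predicted HbA1c as a continuous value
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])  # assumed settings
    return model

model = build_hba1c_lstm()
model.summary()
```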
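Similarly, the MobileNetV3Large transfer-learning setup described earlier can be sketched as follows. The ImageNet weights, 224 × 224 input size, and rotation/brightness augmentation come from the text; the augmentation ranges, number of food labels, output head, and training settings are assumptions for illustration.

```python
# Minimal sketch of the MobileNetV3Large transfer-learning setup described above:
# ImageNet weights, 224 x 224 inputs, and rotation/brightness augmentation.
# Head layout, label count, and training settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV3Large

N_LABELS = 50  # assumed number of food labels for the multilabel head

# Data augmentation applied during training to increase robustness
augment = tf.keras.Sequential([
    layers.RandomRotation(0.1),    # assumed rotation range
    layers.RandomBrightness(0.2),  # assumed brightness range
])

# Pre-trained backbone, frozen initially so only the new head is trained
base = MobileNetV3Large(weights="imagenet", include_top=False,
                        input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

inputs = layers.Input(shape=(224, 224, 3))
x = augment(inputs)
x = base(x, training=False)
# sigmoid head for multilabel food recognition
outputs = layers.Dense(N_LABELS, activation="sigmoid")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])  # assumed settings
```

Freezing the pre-trained backbone first and training only the new head is a common transfer-learning strategy; the upper backbone layers can later be unfrozen for fine-tuning at a lower learning rate.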