effectively extracting features from multiple time series, leading to improved classification results. The Transformer self-attention mechanism has demonstrated a unique ability to evaluate global dependencies over long sequences and to focus on the most significant elements of the signal while ignoring others. Other work explored how these architectures could be combined to realize the strengths of both (Zhao et al., 2024). While not directly related to sepsis detection, the task of classifying multiple time-series brain signals shares many characteristics with the challenge of classifying sepsis from multiple streams of vital sign and lab readings, including the need to handle multiple streaming features and to remain robust when readings for some features are missing (Zhao et al., 2024).

Experimental Methods

Inspired by this work, this project focused on developing a novel model architecture that combines the feature extraction capabilities of CNNs with the long-context understanding of the attention-based Transformer. Through experimentation with separate CNN and Transformer models, an optimal base architecture for each was developed and then combined into a stacked hybrid model, resulting in the Stacked Convolutional Transformer (SCT) architecture. The high-level framework of the SCT is illustrated in Figure 3 below.
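To make the stacked design concrete, the sketch below shows one way a CNN front end can extract local temporal features from multivariate vital-sign input before a Transformer encoder models long-range dependencies and a linear head produces the classification. This is a minimal, hypothetical PyTorch illustration of the general pattern, not the project's actual implementation: the `StackedConvTransformer` class name, layer sizes, kernel widths, and pooling choice are all illustrative assumptions.

```python
# Hypothetical sketch of a stacked CNN + Transformer classifier for
# multivariate vital-sign / lab time series. All hyperparameters are
# illustrative assumptions, not the capstone's reported configuration.
import torch
import torch.nn as nn


class StackedConvTransformer(nn.Module):
    def __init__(self, n_features: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2, n_classes: int = 2):
        super().__init__()
        # CNN front end: extracts local temporal features from each window
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, d_model, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Transformer encoder: models long-range dependencies across time steps
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Classification head on the pooled sequence representation
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features); Conv1d expects (batch, channels, time)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)
        z = self.encoder(z)              # (batch, time, d_model)
        return self.head(z.mean(dim=1))  # mean-pool over time, then classify


# Example: a batch of 8 patients, 48 hourly readings, 40 vital/lab features
logits = StackedConvTransformer(n_features=40)(torch.randn(8, 48, 40))
print(logits.shape)  # torch.Size([8, 2])
```

In this arrangement the convolutional layers act as learned feature extractors over short local windows, while the self-attention layers attend over the full sequence of extracted features, mirroring the division of labor the stacked hybrid model is designed to exploit.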