AAI_2025_Capstone_Chronicles_Combined
which can be time-consuming and subjective. This subjectivity stems from qualitative descriptors, such as labeling conditions and injuries as mild, moderate, or severe (Ruiz Santiago et al., 2022), as well as from variability in the skill and experience of medical professionals (Liu et al., 2022).

This project focuses on developing a computer vision model that automatically detects cervical spine fractures and determines whether the bone has fused properly after injury or surgical treatment. Models such as convolutional neural networks (CNNs) have achieved promising results in cervical spine fracture detection (Small et al., 2021). This project aims to address the problem of delayed and inconsistent diagnosis by providing an automated, high-accuracy model that distinguishes between fractured and normal cervical spine images. We explore CNN architectures such as a simple CNN and Faster R-CNN, as well as transformer-based models such as DETR. We implement and compare how these models classify medical images, detect fractures, and highlight visual features that indicate bone discontinuity, misalignment, or incomplete fusion.

Cervical spine fractures are among the most severe skeletal injuries, and early detection can prevent permanent disability (Waseem et al., 2025). An AI-assisted screening system could help radiologists increase diagnostic speed, reduce fatigue, and improve consistency (Khalifa & Albadawy, 2024). Our model could also be integrated into post-operative monitoring applications to support the evaluation of patient spinal health.

Primary end users of our AI model include radiologists and spine surgeons, hospital administrators and imaging centers, and developers of AI-integrated surgical platforms. The model can serve as a diagnostic support tool, improve workflow efficiency, and assist in real-time or remote image analysis. In the future, the model could also support medical training and education by providing explainable
visualizations that show how the AI identifies specific fracture or fusion patterns.

The data for this project come from the Radiological Society of North America (RSNA) 2022 Cervical Spine Fracture Detection featured code competition, an open-source dataset hosted on Kaggle (RSNA, 2022). The dataset contains thousands of labeled CT scan slices of the cervical spine, categorized as fractured or normal (Lin et al., 2023). In a live healthcare system, data would instead be sourced from hospital image storage systems. All patient medical images would be anonymized to ensure patient privacy and would undergo extensive preprocessing before entering the AI pipeline.

The ultimate goal of this project is to develop a robust deep learning model that accurately detects cervical spine fractures and evaluates bone fusion. The project includes a model exploration phase in which multiple models are implemented and compared. A high-performing model could then power an AI-assisted diagnostic tool that integrates into hospital imaging software or cloud-based medical platforms. Users would be able to upload a CT scan of the cervical spine (neck) and receive a “fracture” or “no fracture” diagnosis, along with a bounding box drawn around the region where the model believes a fracture is present. This approach promotes transparency, improves clinician confidence, and supports faster, more reliable diagnosis.

2 Data Summary

The dataset used is a labeled subset of the competition dataset and consists of 2D CT slice images of the spine (RSNA, 2022). Variables in the dataset include images, labels, bounding boxes, patient ID, slice number, and split. The subset consists of 28,868 images, which were used as input to our models. There are two labels for our binary classification task: a value of 0 indicates no fracture, and a value of 1 indicates a fracture is present in the image. The bounding box variable contains the target coordinates for