AAI_2025_Capstone_Chronicles_Combined


volume calculation, it is difficult to compare reported results across papers or isolate the factors contributing to segmentation or volume error. This fragmentation represents a significant gap in the literature.

Current Clinical Workflow and Limitations

In routine clinical practice, lung and other solid tumor sizes are often characterized using three linear dimensions (height, width, and depth) measured with on-screen calipers in PACS. When a volume estimate is needed, many clinicians still rely on simple geometric approximations, such as treating the tumor as an ellipsoid or a truncated cone, rather than reconstructing the full 3D shape (Yue et al., 2022). These formulas are easy to apply but assume idealized geometry and therefore perform poorly for irregular, spiculated, or infiltrative tumors. More accurate volume estimation requires slice-by-slice manual contouring of the gross tumor volume, which is widely regarded as the clinical reference standard but is also time-consuming and subject to substantial inter-observer variability (Das et al., 2021). Retrospective analyses comparing different imaging software packages have shown that even when tumor volume is derived from ostensibly similar workflows, differences in contour handling, interpolation, and measurement logic can lead to non-trivial discrepancies in reported gross tumor volume (Stuppner et al., 2020). This variability complicates longitudinal assessment and makes it difficult to compare volumetric results across institutions. Research in automated segmentation and volumetry is active, including work on deep learning-based segmentation and shape prediction in other domains, such as real-time polyp detection and 2D Gaussian shape
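The ellipsoid approximation mentioned above reduces to a single closed-form expression, V = (pi/6) x height x width x depth, applied to the three caliper measurements. A minimal sketch (the function name and example dimensions are illustrative, not from any clinical tool):

```python
import math

def ellipsoid_volume_mm3(height_mm: float, width_mm: float, depth_mm: float) -> float:
    """Approximate a tumor as an ellipsoid from three orthogonal caliper
    measurements: V = (pi / 6) * h * w * d.

    Inputs are linear dimensions in millimetres; the result is in mm^3.
    This idealized formula is exactly why irregular or spiculated tumors
    are poorly served by caliper-based estimates.
    """
    return math.pi / 6.0 * height_mm * width_mm * depth_mm

# Example: a lesion measured as 20 x 15 x 12 mm on-screen
print(round(ellipsoid_volume_mm3(20, 15, 12)))  # 1885 (mm^3)
```

The same three measurements plugged into a truncated-cone formula would give a different number, which illustrates how the choice of geometric model alone introduces variability before any segmentation error is considered.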

KEYWORDS
Lung CT; tumor volume estimation; nnU-Net; 3D CNN segmentation; DICOM RTSTRUCT; medical imaging reproducibility; volumetry framework

1 Introduction

Accurate quantification of lung tumor volume is clinically important for staging, radiotherapy planning, and response assessment, but remains difficult to achieve consistently in practice. Manual delineation of gross tumor volume (GTV) is known to vary substantially across physicians and institutions, introducing uncertainty into volumetric measurements that directly influence clinical decisions (Das et al., 2021; Van De Steene et al., 2002). Beyond annotation variability, CT acquisition parameters (such as reconstruction kernel, manufacturer, and slice thickness) affect image appearance and can alter radiomics features and segmentation accuracy (Choe et al., 2019; Wang et al., 2024). These sources of variability highlight the need for automated tumor segmentation systems that are both accurate and robust across imaging conditions. Recent work in lung tumor segmentation has focused primarily on model development, particularly deep learning approaches using 3D CNNs, U-Net variants, or transformer-based architectures (Hiraman et al., 2024; Kashyap et al., 2025). However, comparatively little attention has been paid to the reproducibility and standardization of the full volumetry pipeline. Studies often differ not only in model architecture, but also in how RTSTRUCTs are voxelized, how DICOM geometry is handled, how volumes are calculated, and how acquisition parameters are controlled. Without consistent preprocessing, ground-truth construction, or
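One of the pipeline steps named above, volume calculation from a voxelized segmentation, can itself be a source of discrepancy if voxel spacing is handled inconsistently. A minimal sketch of the standard voxel-counting approach (function name, array shapes, and spacing values are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def mask_volume_mm3(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Compute segmented volume from a binary 3D mask and voxel spacing.

    mask:       3D boolean array, e.g. a voxelized RTSTRUCT contour.
    spacing_mm: per-axis voxel spacing in mm, e.g.
                (slice_thickness, row_spacing, column_spacing).

    Volume = (number of foreground voxels) * (volume of one voxel).
    Using the wrong spacing (or ignoring anisotropic slices) scales the
    result directly, which is one way pipelines disagree on GTV.
    """
    voxel_volume = float(np.prod(spacing_mm))
    return float(np.count_nonzero(mask)) * voxel_volume

# Example: a 10 x 10 x 10 voxel block at 1.0 x 0.7 x 0.7 mm spacing
mask = np.zeros((32, 64, 64), dtype=bool)
mask[5:15, 10:20, 10:20] = True
print(round(mask_volume_mm3(mask, (1.0, 0.7, 0.7)), 2))  # 490.0 (mm^3)
```

Note that this simple count differs from methods that integrate the contour polygons directly or interpolate between slices, so even "volume from the same RTSTRUCT" is not a uniquely defined quantity across software packages.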

