**Abstract**

Brain tumours are characterised by the uncontrolled growth of abnormal cells within or around the brain. Early and accurate detection of these tumours is imperative for improving patient outcomes, and rapid, precise tumour segmentation in medical imaging plays a central role in achieving it. Manual segmentation is challenging, but recent advances in automatic brain tumour segmentation using deep learning (DL) have significantly improved accuracy. However, many of these models generalise poorly across datasets and may also raise security concerns about medical data exposure. Our study focuses on enhancing segmentation accuracy and stability using DL on datasets gathered from various magnetic resonance imaging (MRI) devices while maintaining data privacy. We propose 3D CATBraTS, a novel hybrid DL model for brain tumour semantic segmentation on MRI, based on a state-of-the-art vision transformer (ViT) combined with a modified convolutional neural network (CNN) encoder. Evaluated on the BraTS 2021 dataset, 3D CATBraTS surpassed current state-of-the-art approaches on quantitative measures. We further introduce the Enhanced Channel Attention Transformer (E-CATBraTS) for brain tumour semantic segmentation. This enhanced model integrates innovative channel shuffling and channel-wise attention mechanisms to segment brain tumours effectively in multi-modal MRI scans. E-CATBraTS demonstrated significant accuracy improvements, outperforming state-of-the-art models by 2.6% in mean Dice Similarity Coefficient (DSC) on several datasets while maintaining comparable accuracy on the others, showcasing its robust segmentation and strong generalisation.
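The channel shuffling and channel-wise attention operations mentioned above can be sketched in a few lines. This is a minimal NumPy illustration under our own assumptions (ShuffleNet-style group shuffling, squeeze-and-excitation-style gating on 3D feature volumes, an arbitrary bottleneck width), not the E-CATBraTS implementation:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups (ShuffleNet-style shuffle).
    x: (C, D, H, W) feature volume; C must be divisible by groups."""
    c, d, h, w = x.shape
    x = x.reshape(groups, c // groups, d, h, w)
    x = x.transpose(1, 0, 2, 3, 4)  # swap group and within-group axes
    return x.reshape(c, d, h, w)

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style channel-wise attention (illustrative).
    x: (C, D, H, W); w1: (C, C//r) and w2: (C//r, C) are learned weights."""
    squeeze = x.mean(axis=(1, 2, 3))             # global average pool -> (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)       # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # sigmoid gate -> (C,)
    return x * gate[:, None, None, None]         # rescale each channel
```

Shuffling mixes information between channel groups at negligible cost, while the attention gate lets the network reweight whole channels (e.g. individual MRI modalities' feature maps) by importance.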
Recognising that diverse data accessibility and privacy requirements hinder broader clinical deployment of these models, we propose Federated Learning (FL) as a decentralised approach that facilitates collaborative model training. Our study presents Federated E-CATBraTS, an advanced federated deep learning model derived from the original E-CATBraTS framework and specifically designed to segment brain tumours from multi-modal MRI while safeguarding data privacy. A key feature is DaQAvg, a novel aggregation method that combines model weights according to each client's data size and quality, making it resilient to corrupted medical images. Evaluated on two publicly available datasets, Federated E-CATBraTS achieved an overall improvement of 6% over traditional centralised approaches. Furthermore, DaQAvg exhibited superior robustness and accuracy, performing approximately 3% better under noisy conditions than existing state-of-the-art methods. These findings highlight the potential of Federated E-CATBraTS to enhance brain tumour segmentation while maintaining data privacy and addressing the challenges of data accessibility in medical imaging.
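As an illustration of size-and-quality-weighted federated aggregation, the sketch below averages client model parameters in proportion to data size times a quality score. The exact DaQAvg formula is not stated in this abstract, so the `size * quality` weighting and the dict-of-arrays parameter format are our own assumptions:

```python
import numpy as np

def daqavg_style_aggregate(client_params, sizes, quality):
    """Illustrative FedAvg-style aggregation weighted by data size *and* a
    per-client quality score in [0, 1]; a client with corrupted images gets
    a low quality score and thus a small influence on the global model.
    client_params: list of dicts {layer_name: ndarray of parameters}."""
    w = np.asarray(sizes, dtype=float) * np.asarray(quality, dtype=float)
    w /= w.sum()  # normalised aggregation weights, summing to 1
    agg = {}
    for name in client_params[0]:
        agg[name] = sum(wk * p[name] for wk, p in zip(w, client_params))
    return agg
```

With all quality scores equal to 1 this reduces to plain size-weighted FedAvg, which is what makes the scheme a drop-in variant of standard federated averaging.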