Objective: Cancer survival prediction plays a vital role in enhancing medical decision-making and optimizing patient management. Accurate survival estimation enables healthcare providers to develop personalized treatment plans, improve treatment outcomes, and identify high-risk patients for timely intervention. However, existing methods often rely on single-modality data or suffer from excessive computational complexity, which limits their practical applicability and prevents the full potential of multimodal integration from being realized.
Methods: To address these challenges, we propose a novel multimodal survival prediction framework that integrates Whole Slide Image (WSI) and genomic data. The framework employs attention mechanisms to model intra-modal and inter-modal correlations, effectively capturing complex dependencies within and between modalities. Additionally, locality-sensitive hashing is applied to optimize the self-attention mechanism, significantly reducing computational cost while maintaining predictive performance; this enables the model to handle large-scale, high-resolution WSI datasets efficiently.
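The abstract does not specify the implementation of the LSH-optimized self-attention. As a minimal, hypothetical sketch of the general idea (angular LSH assigns tokens to hash buckets, and softmax attention is computed only among tokens in the same bucket, in the spirit of Reformer-style LSH attention), assuming PyTorch; the function names `lsh_hash` and `bucketed_self_attention` are illustrative, not the authors' code:

```python
import torch
import torch.nn.functional as F

def lsh_hash(x, n_buckets, seed=0):
    """Angular LSH: project tokens onto random hyperplanes; the argmax over
    the projections and their negations gives each token a bucket id."""
    gen = torch.Generator().manual_seed(seed)
    # x: (n_tokens, d_model); random projection: (d_model, n_buckets // 2)
    r = torch.randn(x.shape[-1], n_buckets // 2, generator=gen)
    proj = x @ r                                            # (n_tokens, n_buckets // 2)
    return torch.cat([proj, -proj], dim=-1).argmax(dim=-1)  # (n_tokens,)

def bucketed_self_attention(x, n_buckets=8):
    """Softmax attention restricted to tokens that share an LSH bucket, so
    cost scales with bucket size rather than with sequence length squared."""
    buckets = lsh_hash(x, n_buckets)
    out = torch.zeros_like(x)
    d = x.shape[-1]
    for b in buckets.unique():
        idx = (buckets == b).nonzero(as_tuple=True)[0]
        q = k = v = x[idx]          # shared query/key space, as in Reformer
        attn = F.softmax(q @ k.T / d ** 0.5, dim=-1)
        out[idx] = attn @ v
    return out

# Toy usage: 1000 WSI patch embeddings of dimension 64.
patches = torch.randn(1000, 64)
fused = bucketed_self_attention(patches)
print(fused.shape)  # torch.Size([1000, 64])
```

Because similar embeddings hash to the same bucket with high probability, each token still attends to its most relevant neighbors while the quadratic cost of full self-attention is avoided; the trade-off is that cross-bucket interactions are dropped, which multiple hash rounds can mitigate.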
Results: Extensive experiments on the TCGA-BLCA dataset validate the effectiveness of the proposed approach. The results demonstrate that integrating WSI and genomic data improves survival prediction accuracy compared to unimodal methods. The optimized self-attention mechanism further enhances model efficiency, allowing practical deployment on large datasets.
Conclusion: The proposed framework provides a robust and efficient solution for cancer survival prediction by leveraging multimodal data integration and optimized attention mechanisms. This study highlights the importance of multimodal learning in medical applications and offers a promising direction for future advancements in AI-driven clinical decision support systems.
Keywords: Attention mechanisms; Cancer survival prediction; Genomic data; Multimodal prediction; WSI.