Explainable AI for Critical Decision-Making: Attention Mechanisms in Medical Imaging and Graph Neural Networks for Microbiome Analysis
Yellu, Sribhuvan Reddy
Abstract
Explainable artificial intelligence is essential for deploying deep learning in high-stakes biomedical applications, where interpretability directly affects clinical trust. This thesis advances explainable AI through two projects that demonstrate tailored attention mechanisms and domain knowledge integration. The first project develops the Dynamic Contextual Attention Network (DCAN) for colorectal polyp detection. DCAN introduces a three-stage attention mechanism, combining spatial attention, gating, and adaptive refinement, that focuses on polyp regions while filtering illumination artifacts; GradCAM++ visualizations confirm that the network attends to polyp regions rather than lighting distractions. The second project develops a knowledge-guided graph neural network framework for predicting methanogenic activity in biological wastewater treatment systems for environmental engineering. The approach integrates domain expertise through feature anchoring, which protects critical functional groups during Recursive Feature Elimination, and GNNExplainer generates biological interpretations revealing pathway-specific topologies aligned with established microbial ecology. Together, the two projects demonstrate that integrating contextual awareness with domain knowledge yields strong predictive performance while maintaining interpretability, establishing practical frameworks for trustworthy AI in healthcare and biotechnology.
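The feature-anchoring idea summarized above can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: the function name, the taxon names, and the importance scores are hypothetical, and a real Recursive Feature Elimination loop would re-estimate feature importances with a fitted model at every iteration rather than using fixed scores.

```python
# Hypothetical sketch of knowledge-guided feature anchoring during
# Recursive Feature Elimination (RFE): features in the anchored set are
# exempt from elimination regardless of their importance rank.

def anchored_rfe(features, importance, anchored, n_keep):
    """Iteratively drop the least-important non-anchored feature
    until only n_keep features remain."""
    selected = list(features)
    scores = dict(zip(features, importance))
    while len(selected) > n_keep:
        # Candidates for removal exclude domain-anchored features.
        removable = [f for f in selected if f not in anchored]
        if not removable:  # nothing left that may be dropped
            break
        worst = min(removable, key=scores.get)
        selected.remove(worst)
    return selected

# Illustrative example: two methanogen taxa anchored by domain knowledge
# survive elimination even though their importance scores are low.
feats = ["Methanosaeta", "Methanosarcina", "Taxon_A", "Taxon_B", "Taxon_C"]
imps = [0.05, 0.04, 0.30, 0.20, 0.10]
kept = anchored_rfe(
    feats, imps, anchored={"Methanosaeta", "Methanosarcina"}, n_keep=3
)
# kept retains both anchored taxa plus the strongest remaining feature.
```

Without anchoring, plain RFE would discard the two low-scoring methanogen taxa first; protecting them preserves the functional groups that domain expertise identifies as biologically essential.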
