Title: Causal Explanations in Deep Learning Systems
Author: Rathore, Dhruv Vansraj
Date accessioned: 2025-07-15
Date available: 2025-07-15
Date issued: 2025-06
Extent: 37 p.
URI: http://hdl.handle.net/10263/7558
Description: Dissertation under the supervision of Prof. Utpal Garain
Abstract: Deep learning models often deliver high predictive accuracy; however, their lack of interpretability can hinder their adoption in critical fields such as healthcare and finance. This thesis explores the concept of Intrinsic Causal Contribution (ICC), a novel method for explaining neural network predictions by quantifying each input feature's intrinsic causal influence on the output, independent of correlated effects. ICC models the network as a Structural Causal Model and employs Causal Normalizing Flows to handle complex dependencies, with efficient estimation via the Jansen Estimator. Analysis on both synthetic and real datasets provides evidence that ICC produces faithful, interpretable attributions, often outperforming traditional approaches like SHAP and LIME. By revealing truly influential features, ICC supports transparent and responsible AI, especially in sensitive settings such as medical diagnosis.
Language: en
Keywords: Intrinsic Causal Contribution (ICC); SHAP; LIME; Deep Learning Systems
Type: Other
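
The abstract mentions efficient estimation via the Jansen Estimator, a variance-based sensitivity estimator. As a minimal sketch of that idea (not the thesis's actual ICC pipeline), the toy model `f`, the sample size, and the independent-uniform inputs below are illustrative assumptions; the Jansen (1999) formula itself estimates each feature's total contribution to output variance:

```python
import numpy as np

# Hypothetical toy "network": an additive model whose per-feature
# variance contributions are known in closed form, so the estimates
# can be sanity-checked. Not the thesis's actual model.
def f(x):
    return 2.0 * x[:, 0] + 1.0 * x[:, 1] + 0.5 * x[:, 2]

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(size=(n, d))   # base sample
B = rng.uniform(size=(n, d))   # independent resampling sample
fA = f(A)
var = fA.var()

total = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]        # perturb only feature i
    # Jansen estimator of the total-order sensitivity index:
    # S_Ti = E[(f(A) - f(AB_i))^2] / (2 * Var[f])
    total.append(0.5 * np.mean((fA - f(ABi)) ** 2) / var)

print([round(t, 2) for t in total])
```

For this additive model with independent inputs, the indices should land near 0.76, 0.19, and 0.05, mirroring the squared coefficients (4 : 1 : 0.25) as a share of total variance, which is the kind of "intrinsic contribution" ranking the abstract describes.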