Hierarchical Variational Autoencoder for Visual Counterfactuals
Nicolas Vercheval, Aleksandra Pizurica
Abstract
Conditional Variational Autoencoders (CVAEs) are gaining significant attention as an Explainable Artificial Intelligence (XAI) tool. The codes in the latent space provide a theoretically sound way to produce counterfactuals, i.e., alterations resulting from an intervention on a targeted semantic feature. To be applied to real images, more complex models are needed, such as a Hierarchical CVAE. This comes with a challenge, as naive conditioning is no longer effective. In this paper, we show how relaxing the effect of the posterior leads to successful counterfactuals, and we introduce VAEX, a Hierarchical VAE designed for this approach that can visually audit a classifier in applications.
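The core idea of latent-space counterfactuals can be illustrated with a minimal sketch (not the authors' VAEX model): encode an input under its factual label, then decode the same latent code under a target label, so the only change flows through the conditioning. All weights below are untrained random stand-ins, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "conditional autoencoder": the decoder receives the
# concatenation of a latent code z and a one-hot label y.
# The weights are random stand-ins; a real CVAE would be trained.
D_X, D_Z, N_CLASSES = 8, 3, 2
W_enc = rng.normal(size=(D_Z, D_X))              # encoder: x -> z
W_dec = rng.normal(size=(D_X, D_Z + N_CLASSES))  # decoder: [z; y] -> x

def one_hot(label):
    y = np.zeros(N_CLASSES)
    y[label] = 1.0
    return y

def encode(x):
    return W_enc @ x

def decode(z, label):
    return W_dec @ np.concatenate([z, one_hot(label)])

x = rng.normal(size=D_X)
z = encode(x)

reconstruction = decode(z, label=0)  # decode under the factual label
counterfactual = decode(z, label=1)  # intervene: same z, target label

# In this linear toy, the counterfactual differs from the reconstruction
# only through the label columns of the decoder.
delta = counterfactual - reconstruction
```

In a hierarchical model the latent code is a stack of codes at several resolutions, which is where, per the abstract, naive conditioning breaks down and the posterior's effect must be relaxed.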