Abstract
We explore the role of torsion in hybrid deep learning models that incorporate topological data analysis, focusing on autoencoders. While most TDA tools use field coefficients, this choice conceals torsional features present in integer homology. We show that torsion can be lost during encoding, altered in the latent space, and, in many cases, not reconstructed by standard decoders. Using both synthetic and high-dimensional data, we evaluate the sensitivity of torsion to perturbations and assess its recoverability across several autoencoder architectures. Our findings reveal key limitations of field-based approaches and underscore the need for architectures or loss terms that preserve torsional information for robust data representation.
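To see concretely how field coefficients conceal torsion, consider a minimal sketch (not taken from the paper) using the standard CW structure of the Klein bottle: one vertex, two loop edges a and b, and one 2-cell attached along a b a b⁻¹, so the boundary map sends the 2-cell to 2a. Over the integers, the Smith normal form of the boundary matrix exposes the invariant factor 2, i.e. the Z/2 torsion in H₁; over a field, that factor either vanishes (Q) or silently inflates the Betti number (F₂):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Boundary map d2: C2 -> C1 for the Klein bottle CW structure.
# Rows index the edges a, b; the single column is the 2-cell, with
# boundary a + b + a - b = 2a.
d2 = Matrix([[2], [0]])

# Integer homology via Smith normal form: diagonal entries > 1 are torsion.
snf = smith_normal_form(d2, domain=ZZ)
invariant_factors = [snf[i, i] for i in range(min(snf.shape)) if snf[i, i] != 0]
torsion = [d for d in invariant_factors if abs(d) > 1]

# Both edges are loops, so d1 = 0 and ker(d1) = C1 has rank 2.
free_rank = d2.rows - len(invariant_factors)
print(f"H1(K; Z) = Z^{free_rank} + Z/{torsion[0]}")  # Z^1 + Z/2

# Field coefficients: the torsion summand is invisible over Q and
# shows up only as an extra Betti dimension over F2.
betti_Q = d2.rows - d2.rank()                                 # 1
betti_F2 = d2.rows - d2.applyfunc(lambda x: x % 2).rank()     # 2
print(f"betti_1 over Q = {betti_Q}, over F2 = {betti_F2}")
```

The gap between the two Betti numbers (1 over Q versus 2 over F₂) is exactly the torsion that field-based persistence pipelines cannot report directly, which is the information the abstract argues autoencoders should preserve.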