Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Doubly Stochastic Subspace Clustering

Derek Lim, René Vidal, Benjamin D. Haeffele

2020-11-30 · Image Clustering · Clustering
Paper · PDF · Code (official)

Abstract

Many state-of-the-art subspace clustering methods follow a two-step process by first constructing an affinity matrix between data points and then applying spectral clustering to this affinity. Most of the research into these methods focuses on the first step of generating the affinity, which often exploits the self-expressive property of linear subspaces, with little consideration typically given to the spectral clustering step that produces the final clustering. Moreover, existing methods often obtain the final affinity that is used in the spectral clustering step by applying ad-hoc or arbitrarily chosen postprocessing steps to the affinity generated by a self-expressive clustering formulation, which can have a significant impact on the overall clustering performance. In this work, we unify these two steps by learning both a self-expressive representation of the data and an affinity matrix that is well-normalized for spectral clustering. In our proposed models, we constrain the affinity matrix to be doubly stochastic, which results in a principled method for affinity matrix normalization while also exploiting known benefits of doubly stochastic normalization in spectral clustering. We develop a general framework and derive two models: one that jointly learns the self-expressive representation along with the doubly stochastic affinity, and one that sequentially solves for one then the other. Furthermore, we leverage sparsity in the problem to develop a fast active-set method for the sequential solver that enables efficient computation on large datasets. Experiments show that our method achieves state-of-the-art subspace clustering performance on many common datasets in computer vision.
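The key normalization idea can be illustrated in isolation: a nonnegative affinity matrix can be projected toward the set of doubly stochastic matrices (all row and column sums equal to 1) by alternating row and column normalization, i.e. Sinkhorn-Knopp iterations. The sketch below is a generic illustration of that normalization on a toy Gaussian affinity, not the paper's DSSC optimization, which learns the doubly stochastic affinity jointly with (or sequentially after) the self-expressive representation.

```python
import numpy as np

def sinkhorn(A, n_iters=200, eps=1e-9):
    """Alternate row and column normalization (Sinkhorn-Knopp) so that
    a positive affinity matrix converges to a doubly stochastic one.
    Illustrative only: DSSC derives its affinity from a self-expressive
    model rather than a fixed Gaussian kernel."""
    A = np.asarray(A, dtype=float)
    for _ in range(n_iters):
        A = A / (A.sum(axis=1, keepdims=True) + eps)  # rows sum to 1
        A = A / (A.sum(axis=0, keepdims=True) + eps)  # columns sum to 1
    return A

# Toy example: Gaussian affinity between a few random 2-D points.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
S = sinkhorn(np.exp(-D2))
# S now has (approximately) unit row and column sums, so it can be fed
# to spectral clustering without ad-hoc postprocessing.
```

Because a symmetric doubly stochastic matrix has its largest eigenvalue equal to 1 with known spectral structure, this normalization removes the arbitrary affinity postprocessing step that the abstract criticizes.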

Results

Task              Dataset          Metric    Value  Model
Image Clustering  UMist            NMI       0.939  J-DSSC (Scattered)
Image Clustering  UMist            NMI       0.935  A-DSSC (Scattered)
Image Clustering  coil-40          Accuracy  1.000  A-DSSC (Scattered)
Image Clustering  coil-40          NMI       1.000  A-DSSC (Scattered)
Image Clustering  coil-40          Accuracy  1.000  J-DSSC (Scattered)
Image Clustering  coil-40          NMI       1.000  J-DSSC (Scattered)
Image Clustering  Extended Yale-B  Accuracy  0.924  J-DSSC
Image Clustering  Extended Yale-B  NMI       0.952  J-DSSC
Image Clustering  Extended Yale-B  Accuracy  0.917  A-DSSC
Image Clustering  Extended Yale-B  NMI       0.947  A-DSSC
Image Clustering  coil-100         Accuracy  0.984  A-DSSC (Scattered)
Image Clustering  coil-100         NMI       0.997  A-DSSC (Scattered)
Image Clustering  coil-100         Accuracy  0.961  J-DSSC (Scattered)
Image Clustering  coil-100         NMI       0.992  J-DSSC (Scattered)
Image Clustering  coil-100         Accuracy  0.824  A-DSSC
Image Clustering  coil-100         NMI       0.946  A-DSSC
Image Clustering  coil-100         Accuracy  0.796  J-DSSC
Image Clustering  coil-100         NMI       0.943  J-DSSC

Related Papers

Tri-Learn Graph Fusion Network for Attributed Graph Clustering (2025-07-18)
Ranking Vectors Clustering: Theory and Applications (2025-07-16)
Car Object Counting and Position Estimation via Extension of the CLIP-EBC Framework (2025-07-11)
GNN-ViTCap: GNN-Enhanced Multiple Instance Learning with Vision Transformers for Whole Slide Image Classification and Captioning (2025-07-09)
Consistency and Inconsistency in $K$-Means Clustering (2025-07-08)
MC-INR: Efficient Encoding of Multivariate Scientific Simulation Data using Meta-Learning and Clustered Implicit Neural Representations (2025-07-03)
Supercm: Revisiting Clustering for Semi-Supervised Learning (2025-06-30)
Temporal Rate Reduction Clustering for Human Motion Segmentation (2025-06-26)