Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders

Zahra Atashgahi, Ghada Sokar, Tim Van der Lee, Elena Mocanu, Decebal Constantin Mocanu, Raymond Veldhuis, Mykola Pechenizkiy

Published: 2020-12-01
Tasks: Denoising · Dimensionality Reduction · Feature Selection · Clustering · Feature Importance

Abstract

Major complications arise from the recent increase in the amount of high-dimensional data, including high computational costs and memory requirements. Feature selection, which identifies the most relevant and informative attributes of a dataset, has been introduced as a solution to this problem. Most existing feature selection methods are computationally inefficient; inefficient algorithms lead to high energy consumption, which is undesirable for devices with limited computational and energy resources. In this paper, a novel and flexible method for unsupervised feature selection is proposed. This method, named QuickSelection, introduces the strength of the neuron in sparse neural networks as a criterion to measure feature importance. This criterion, blended with sparsely connected denoising autoencoders trained with the sparse evolutionary training procedure, derives the importance of all input features simultaneously. We implement QuickSelection in a purely sparse manner, as opposed to the typical approach of using a binary mask over connections to simulate sparsity. This results in a considerable speed increase and memory reduction. When tested on several benchmark datasets, including five low-dimensional and three high-dimensional datasets, the proposed method achieves the best trade-off among classification and clustering accuracy, running time, and maximum memory usage, compared with widely used feature selection approaches. Moreover, our proposed method requires the least amount of energy among state-of-the-art autoencoder-based feature selection methods.
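The core idea above can be sketched in a few lines: after training a sparse autoencoder, each input neuron's "strength" is scored from the sparse weights connecting it to the first hidden layer, and the strongest inputs are kept as the selected features. This is a minimal illustration, not the authors' implementation; the function names, the use of the sum of absolute outgoing weights as the strength score, and the toy random weight matrix are all assumptions for the sketch.

```python
import numpy as np
from scipy import sparse

def neuron_strength(W):
    """Strength of each input neuron: sum of absolute weights of its
    outgoing sparse connections to the first hidden layer (an assumed
    reading of the paper's criterion). W is (n_features x n_hidden)."""
    return np.asarray(abs(W).sum(axis=1)).ravel()

def quick_selection(W, k):
    """Return the indices of the k strongest input features, ranked by
    neuron strength (hypothetical helper, not the official API)."""
    strength = neuron_strength(W)
    return np.argsort(strength)[::-1][:k]

# Toy stand-in for the trained sparse weight matrix: 100 input features,
# 50 hidden neurons, ~5% of connections present (Erdos-Renyi-like sparsity).
rng = np.random.default_rng(0)
W = sparse.random(100, 50, density=0.05, random_state=0,
                  data_rvs=rng.standard_normal).tocsr()

top = quick_selection(W, k=10)  # indices of the 10 selected features
```

Because the weights are stored in a genuinely sparse format rather than as a dense matrix with a binary mask, both the training updates and this scoring pass touch only the existing connections, which is where the paper's speed and memory savings come from.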

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Dimensionality Reduction | EMNIST | Classification Accuracy | 68 | QS |

Related Papers

Tri-Learn Graph Fusion Network for Attributed Graph Clustering (2025-07-18)
fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models (2025-07-17)
mNARX+: A surrogate model for complex dynamical systems using manifold-NARX and automatic feature selection (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Ranking Vectors Clustering: Theory and Applications (2025-07-16)
Neural Network-Guided Symbolic Regression for Interpretable Descriptor Discovery in Perovskite Catalysts (2025-07-16)