Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ProgressiveSpinalNet architecture for FC layers

Praveen Chopra

2021-03-21 · Decision Making · Fine-Grained Image Classification
Paper · PDF · Code (official)

Abstract

In deep learning models, the FC (fully connected) layer plays a central role in classifying the input from the features learned by earlier layers. FC layers also hold the largest share of parameters, and fine-tuning these many parameters consumes most of the computational resources, so this paper aims to reduce the parameter count significantly while improving performance. The motivation is inspired by SpinalNet and other biologically inspired architectures. The proposed architecture has a gradient highway from the input to the output layer, which mitigates the vanishing-gradient problem in deep networks. Every layer receives the outputs of all previous layers as well as the CNN layer output, so all layers contribute to the decision made at the last layer. This approach improves classification performance over the SpinalNet architecture and achieves SOTA performance on many datasets, such as Caltech101, KMNIST, QMNIST, and EMNIST. The source code is available at https://github.com/praveenchopra/ProgressiveSpinalNet.
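The parameter savings described above can be illustrated with a small counting sketch. This is a minimal approximation of the connectivity the abstract describes, not the paper's exact implementation: it assumes each sub-layer takes the CNN feature vector concatenated with all earlier sub-layer outputs, and the classifier takes the CNN features plus every sub-layer output (the gradient highway). The layer sizes below (`feat=512`, `width=20`, etc.) are illustrative choices, not values from the paper.

```python
def progressive_spinal_params(feat, width, layers, classes):
    """Parameter count for a ProgressiveSpinalNet-style FC head (sketch).

    Assumed connectivity: sub-layer i receives the CNN features plus the
    outputs of sub-layers 0..i-1; the classifier receives the CNN features
    plus all sub-layer outputs (the 'gradient highway').
    """
    total = 0
    for i in range(layers):
        in_dim = feat + i * width          # CNN features + earlier outputs
        total += in_dim * width + width    # weights + biases of sub-layer i
    # Output layer: CNN features + all sub-layer outputs -> classes
    total += (feat + layers * width) * classes + classes
    return total


def plain_fc_params(feat, hidden, classes):
    """Conventional two-layer FC head, for comparison."""
    return feat * hidden + hidden + hidden * classes + classes


# Narrow progressive sub-layers vs. one wide hidden FC layer:
print(progressive_spinal_params(512, 20, 4, 10))  # 49370
print(plain_fc_params(512, 512, 10))              # 267786
```

Even with the extra concatenated inputs, the narrow progressive sub-layers use a small fraction of the parameters of a single wide hidden layer, which is the trade-off the abstract points to.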

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Image Classification | EMNIST-Letters | Accuracy | 95.86 | VGG-5 |
| Image Classification | Caltech-101 | Accuracy | 97.76 | Pre-trained Wide-ResNet-101 |
| Image Classification | Fruits-360 | Accuracy | 99.97 | Pre-trained Wide-ResNet-101 |
| Image Classification | Kuzushiji-MNIST | Accuracy | 98.98 | VGG-5 |
| Image Classification | EMNIST-Digits | Accuracy | 99.82 | VGG-5 |
| Image Classification | QMNIST | Accuracy | 99.6867 | VGG-5 |
| Image Classification | STL-10 | Accuracy | 98.18 | Pre-trained Wide-ResNet-101 |
| Image Classification | MNIST | Accuracy | 98.19 | Vanilla FC layer only |
| Image Classification | Bird-225 | Accuracy | 99.55 | Pre-trained Wide-ResNet-101 |
| Fine-Grained Image Classification | EMNIST-Letters | Accuracy | 95.86 | VGG-5 |
| Fine-Grained Image Classification | Caltech-101 | Accuracy | 97.76 | Pre-trained Wide-ResNet-101 |
| Fine-Grained Image Classification | Fruits-360 | Accuracy | 99.97 | Pre-trained Wide-ResNet-101 |
| Fine-Grained Image Classification | Kuzushiji-MNIST | Accuracy | 98.98 | VGG-5 |
| Fine-Grained Image Classification | EMNIST-Digits | Accuracy | 99.82 | VGG-5 |
| Fine-Grained Image Classification | QMNIST | Accuracy | 99.6867 | VGG-5 |
| Fine-Grained Image Classification | STL-10 | Accuracy | 98.18 | Pre-trained Wide-ResNet-101 |
| Fine-Grained Image Classification | MNIST | Accuracy | 98.19 | Vanilla FC layer only |
| Fine-Grained Image Classification | Bird-225 | Accuracy | 99.55 | Pre-trained Wide-ResNet-101 |

Related Papers

- Graph-Structured Data Analysis of Component Failure in Autonomous Cargo Ships Based on Feature Fusion (2025-07-18)
- Higher-Order Pattern Unification Modulo Similarity Relations (2025-07-17)
- Exploiting Constraint Reasoning to Build Graphical Explanations for Mixed-Integer Linear Programming (2025-07-17)
- Acting and Planning with Hierarchical Operational Models on a Mobile Robot: A Study with RAE+UPOM (2025-07-15)
- CogDDN: A Cognitive Demand-Driven Navigation with Decision Optimization and Dual-Process Thinking (2025-07-15)
- Detection and Quantification of Fluvial Erosion with Computer Vision (2025-07-15)
- Guiding LLM Decision-Making with Fairness Reward Models (2025-07-15)
- Turning Sand to Gold: Recycling Data to Bridge On-Policy and Off-Policy Learning via Causal Bound (2025-07-15)