Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN

Shuai Li, Wanqing Li, Chris Cook, Ce Zhu, Yanbo Gao

2018-03-13 · CVPR 2018 · Tasks: Skeleton Based Action Recognition, Sequential Image Classification, Language Modelling

Paper · PDF · Code (official)

Abstract

Recurrent neural networks (RNNs) have been widely used for processing sequential data. However, RNNs are commonly difficult to train due to the well-known gradient vanishing and exploding problems, and they struggle to learn long-term patterns. Long short-term memory (LSTM) and gated recurrent unit (GRU) networks were developed to address these problems, but the use of the hyperbolic tangent and sigmoid activation functions results in gradient decay over layers. Consequently, constructing an efficiently trainable deep network is challenging. In addition, all the neurons in an RNN layer are entangled together and their behaviour is hard to interpret. To address these problems, a new type of RNN, referred to as the independently recurrent neural network (IndRNN), is proposed in this paper, where neurons in the same layer are independent of each other and are connected across layers. We have shown that an IndRNN can be easily regulated to prevent the gradient exploding and vanishing problems while allowing the network to learn long-term dependencies. Moreover, an IndRNN can work with non-saturated activation functions such as ReLU (rectified linear unit) and still be trained robustly. Multiple IndRNNs can be stacked to construct a network that is deeper than existing RNNs. Experimental results have shown that the proposed IndRNN is able to process very long sequences (over 5000 time steps), can be used to construct very deep networks (21 layers in the experiments) and can still be trained robustly. Better performance has been achieved on various tasks by using IndRNNs compared with the traditional RNN and LSTM. The code is available at https://github.com/Sunnydreamrain/IndRNN_Theano_Lasagne.
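The key change the abstract describes is that each neuron's recurrent weight is a scalar applied only to its own previous state (an element-wise product) rather than a full hidden-to-hidden matrix, which is what makes the neurons within a layer independent. A minimal NumPy sketch of one such recurrence step, written from that description rather than taken from the authors' Theano/Lasagne code (variable names are illustrative):

```python
import numpy as np

def ind_rnn_step(x_t, h_prev, W, u, b):
    """One IndRNN step: h_t = relu(W @ x_t + u * h_prev + b).

    u is a vector, not a matrix: each neuron's recurrence sees only its
    own previous state, so neurons in the layer are independent of each
    other. Cross-neuron mixing happens only through W across layers.
    """
    return np.maximum(0.0, W @ x_t + u * h_prev + b)

# Toy run: 3-dimensional input, 4 hidden units, a short sequence.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 3))
u = rng.uniform(0.0, 1.0, size=4)  # |u| <= 1 keeps the recurrent gradient bounded
b = np.zeros(4)
h = np.zeros(4)
for t in range(10):
    h = ind_rnn_step(rng.normal(size=3), h, W, u, b)
```

Because the recurrence for neuron i is just `u[i] * h_prev[i]`, its long-term gradient behaviour is governed by the single value `u[i]`, which is why the paper can regulate exploding/vanishing gradients by constraining the recurrent weights, even under ReLU.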

Results

Task | Dataset | Metric | Value | Model
Video | NTU RGB+D | Accuracy (CS) | 81.8 | Ind-RNN
Video | NTU RGB+D | Accuracy (CV) | 88 | Ind-RNN
Temporal Action Localization | NTU RGB+D | Accuracy (CS) | 81.8 | Ind-RNN
Temporal Action Localization | NTU RGB+D | Accuracy (CV) | 88 | Ind-RNN
Zero-Shot Learning | NTU RGB+D | Accuracy (CS) | 81.8 | Ind-RNN
Zero-Shot Learning | NTU RGB+D | Accuracy (CV) | 88 | Ind-RNN
Activity Recognition | NTU RGB+D | Accuracy (CS) | 81.8 | Ind-RNN
Activity Recognition | NTU RGB+D | Accuracy (CV) | 88 | Ind-RNN
Language Modelling | Penn Treebank (Character Level) | Bit per Character (BPC) | 1.19 | IndRNN
Action Localization | NTU RGB+D | Accuracy (CS) | 81.8 | Ind-RNN
Action Localization | NTU RGB+D | Accuracy (CV) | 88 | Ind-RNN
Action Detection | NTU RGB+D | Accuracy (CS) | 81.8 | Ind-RNN
Action Detection | NTU RGB+D | Accuracy (CV) | 88 | Ind-RNN
3D Action Recognition | NTU RGB+D | Accuracy (CS) | 81.8 | Ind-RNN
3D Action Recognition | NTU RGB+D | Accuracy (CV) | 88 | Ind-RNN
Action Recognition | NTU RGB+D | Accuracy (CS) | 81.8 | Ind-RNN
Action Recognition | NTU RGB+D | Accuracy (CV) | 88 | Ind-RNN

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)