Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Methods

66 machine learning methods and techniques


intgaussder

integrated Gaussian derivative kernel

Integrated Gaussian derivative kernels are obtained by integrating the continuous Gaussian derivative kernels over each pixel support region. In this way, some of the severe artefacts of sampling the Gaussian derivative kernels at too fine scales can be reduced.
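For the one-dimensional case, the integration over each pixel support $[n - 1/2, n + 1/2]$ reduces to evaluating the antiderivative at the pixel edges: the error function for the zeroth-order (smoothing) kernel, and the Gaussian itself for the first-order derivative kernel. A minimal sketch under that assumption (function names are illustrative):

```python
import math

def integrated_gaussian_kernel(sigma, radius):
    """Zeroth-order integrated Gaussian kernel: each tap is the integral of
    the continuous Gaussian over the pixel support [n - 1/2, n + 1/2],
    computed exactly via the error function (the Gaussian's antiderivative)."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))
    return [cdf(n + 0.5) - cdf(n - 0.5) for n in range(-radius, radius + 1)]

def integrated_gaussian_derivative_kernel(sigma, radius):
    """First-order case: integrating the Gaussian derivative g'(x) over
    [n - 1/2, n + 1/2] gives g(n + 1/2) - g(n - 1/2) exactly."""
    def g(x):
        return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))
    return [g(n + 0.5) - g(n - 0.5) for n in range(-radius, radius + 1)]

# Even at a very fine scale (sigma = 0.5), the integrated kernels keep the
# properties that plain sampling loses: the smoothing kernel sums to ~1 and
# the derivative kernel sums to ~0.
smooth = integrated_gaussian_kernel(0.5, 4)
deriv = integrated_gaussian_derivative_kernel(0.5, 4)
```

By contrast, directly sampling the continuous derivative kernel at such fine scales can grossly misestimate these sums, which is the artefact the integrated kernels reduce.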

Sequential · Introduced 2000 · 1 paper

Associative LSTM

An Associative LSTM combines an LSTM with ideas from Holographic Reduced Representations (HRRs) to enable key-value storage of data. HRRs use a “binding” operator to implement key-value binding between two vectors (the key and its associated content). They natively implement associative arrays; as a byproduct, they can also easily implement stacks, queues, or lists.
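The HRR binding operator is circular convolution, and unbinding is (approximate) circular correlation; both can be computed via the FFT. A minimal sketch of the key-value mechanism, independent of the LSTM (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024

def bind(key, value):
    # HRR binding: circular convolution, computed in the Fourier domain.
    return np.fft.ifft(np.fft.fft(key) * np.fft.fft(value)).real

def unbind(trace, key):
    # Approximate inverse: circular correlation (conjugate in Fourier domain).
    return np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(key))).real

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Random vectors with i.i.d. N(0, 1/n) components, so norms are ~1.
k1, k2 = rng.normal(0, 1 / np.sqrt(n), (2, n))
v1, v2 = rng.normal(0, 1 / np.sqrt(n), (2, n))

# Superpose two bound pairs into a single memory trace, then query key 1:
memory = bind(k1, v1) + bind(k2, v2)
retrieved = unbind(memory, k1)   # noisy reconstruction of v1
```

The retrieved vector is strongly correlated with `v1` and nearly uncorrelated with `v2`; the residual noise shrinks as the dimensionality grows, which is what makes superposed key-value storage workable.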

Sequential · Introduced 2000 · 1 paper

CNN-TS

A Deep Convolutional Neural Network for Time Series Classification with Intermediate Targets

CNN-TS leverages deep convolutional neural networks to classify time series data with intermediate targets, and is designed to improve accuracy in time series analysis.

Sequential · Introduced 2000 · 1 paper

TaLK Convolution

Time-aware Large Kernel Convolution

A Time-aware Large Kernel (TaLK) convolution is a type of temporal convolution that learns the kernel size of a summation kernel for each time-step instead of learning the kernel weights as in a typical convolution operation. For each time-step, a function is responsible for predicting the appropriate size of neighbor representations to use in the form of left and right offsets relative to the time-step.
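The per-time-step windowing can be illustrated with a small sketch (not the paper's implementation: there the offsets are predicted by a learned function and the summation is made differentiable; here they are given as integers):

```python
import numpy as np

def talk_convolution(x, left, right):
    """Sketch of a TaLK-style convolution: instead of learned kernel
    weights, each time-step t has left/right offsets, and the output is
    the normalized sum of the input vectors inside the adaptive window
    [t - left[t], t + right[t]], clipped to the sequence boundaries."""
    T, _ = x.shape
    out = np.zeros_like(x)
    for t in range(T):
        lo = max(0, t - left[t])
        hi = min(T - 1, t + right[t])
        out[t] = x[lo:hi + 1].sum(axis=0) / (hi - lo + 1)
    return out

x = np.arange(12, dtype=float).reshape(6, 2)   # toy sequence, 6 steps, dim 2
left = np.array([0, 1, 1, 2, 1, 0])            # illustrative offsets
right = np.array([1, 1, 2, 1, 1, 0])
y = talk_convolution(x, left, right)
```

Each output row depends on a different number of neighbors, which is the key difference from a fixed-size kernel.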

Sequential · Introduced 2000 · 1 paper

NearestAdvocate

Nearest Advocate

This package focuses on time delay estimation between two event-based time series that are relatively shifted by an unknown time offset. An event-based time series is given by a set of timestamps of certain events. If you want to guarantee synchronous measurements in advance, or to estimate the time delay of continuous measurements sampled at a constant rate, you might want to use other methods. However, in some use cases, performing event detection and then estimating the relative time delay has advantages. The Nearest Advocate method provides a precise time delay estimate for event-based time series that is robust against imprecise timestamps, a high fraction of missing events, and clock drift. Time delay estimation is also known as the correction of time offsets or time lags, and as time synchronization.
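The core idea can be sketched with a brute-force search (illustrative only, not the package's API): for each candidate offset, shift the query timestamps and score the alignment by the mean distance from each shifted event to its nearest reference event.

```python
import numpy as np

def nearest_advocate(ref_events, query_events, offsets):
    """For each candidate offset dt, shift the query timestamps by dt and
    score the match by the mean distance to the nearest reference event;
    return the offset with the lowest score."""
    ref = np.sort(ref_events)
    best_offset, best_cost = None, np.inf
    for dt in offsets:
        shifted = query_events + dt
        idx = np.searchsorted(ref, shifted)
        idx_lo = np.clip(idx - 1, 0, len(ref) - 1)
        idx_hi = np.clip(idx, 0, len(ref) - 1)
        dist = np.minimum(np.abs(shifted - ref[idx_lo]),
                          np.abs(shifted - ref[idx_hi]))
        cost = dist.mean()
        if cost < best_cost:
            best_offset, best_cost = dt, cost
    return best_offset

rng = np.random.default_rng(1)
events = np.sort(rng.uniform(0, 100, 200))
# A delayed (+3 s) and jittered copy of the same event stream:
measured = events + 3.0 + rng.normal(0, 0.05, events.size)
est = nearest_advocate(events, measured, np.arange(-10, 10, 0.1))
```

Because the score only involves nearest timestamps, the estimate degrades gracefully when events are jittered or missing, which mirrors the robustness claims above.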

Sequential · Introduced 2000 · 1 paper

Temporal Distribution Characterization

Temporal Distribution Characterization, or TDC, is a module used in the AdaRNN architecture to characterize the distributional information in a time series. Based on the principle of maximum entropy, maximizing the utilization of the shared knowledge underlying a time series under temporal covariate shift can be done by finding the periods that are most dissimilar to each other, which is also the worst case of temporal covariate shift, since the cross-period distributions are then the most diverse. TDC splits the time series by solving an optimization problem whose objective can be formulated as

$$\max_{0 < K \leq K_0} \; \max_{n_1, \ldots, n_K} \; \frac{1}{K} \sum_{1 \leq i \neq j \leq K} d(\mathcal{D}_i, \mathcal{D}_j) \quad \text{s.t.} \quad \forall i,\; \Delta_1 < |\mathcal{D}_i| < \Delta_2; \; \textstyle\sum_j |\mathcal{D}_j| = n \quad (1)$$

where $d$ is a distance metric, $\Delta_1$ and $\Delta_2$ are predefined parameters to avoid trivial solutions (very small or very large periods may fail to capture the distribution information), and $K_0$ is a hyperparameter to avoid over-splitting. The metric $d$ can be any distance function, e.g., Euclidean or edit distance, or a distribution-based distance or divergence such as MMD [14] or KL-divergence. The learning goal of the optimization problem (1) is to maximize the averaged period-wise distribution distance by searching for $K$ and the corresponding periods, so that the distributions of the periods are as diverse as possible and the learned prediction model generalizes better.
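For a fixed number of periods, the search can be sketched by brute force (illustrative only, not the AdaRNN implementation; the Euclidean distance between period means stands in for MMD/KL):

```python
import itertools
import numpy as np

def tdc_split(series, k, min_len, max_len):
    """Enumerate all ways to cut the series into k consecutive periods whose
    lengths respect [min_len, max_len], and keep the split maximizing the
    average pairwise distance between period distributions."""
    n = len(series)
    best_bounds, best_score = None, -np.inf
    for cuts in itertools.combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        lengths = np.diff(bounds)
        if lengths.min() < min_len or lengths.max() > max_len:
            continue  # the Delta_1 / Delta_2 constraint
        periods = [series[a:b] for a, b in zip(bounds[:-1], bounds[1:])]
        means = [p.mean() for p in periods]
        score = np.mean([abs(a - b)
                         for a, b in itertools.combinations(means, 2)])
        if score > best_score:
            best_bounds, best_score = bounds, score
    return best_bounds

# A series with an obvious distribution shift halfway through:
series = np.concatenate([np.zeros(10), np.ones(10)])
cuts = tdc_split(series, k=2, min_len=5, max_len=15)
```

On this toy series the search recovers the true change point, cutting exactly at the shift; the real module replaces the enumeration with a tractable search and the mean gap with a proper distribution distance.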

Sequential · Introduced 2000 · 1 paper

TSRUc

TSRUc, or Transformation-based Spatial Recurrent Unit c, is a modification of a ConvGRU used in the TriVD-GAN architecture for video generation. Instead of computing the reset gate $r$ and resetting the previous hidden state $h_{t-1}$, the TSRUc computes the parameters $\theta$ of a transformation $\mathcal{T}_\theta$, which is used to warp $h_{t-1}$. The rest of the model is unchanged, with the warped state $\mathcal{T}_\theta(h_{t-1})$ playing the role of the reset state $r \odot h_{t-1}$ in the candidate's update equation from ConvGRU. In the module's update equations, $\sigma$ and $\rho$ denote the elementwise sigmoid and ReLU functions respectively, $*$ represents a convolution, and brackets represent a feature concatenation.

Sequential · Introduced 2000 · 1 paper

mRNN

Multiplicative RNN

A Multiplicative RNN (mRNN) is a type of recurrent neural network with multiplicative connections. In a standard RNN, the current input is first transformed via the visible-to-hidden weight matrix and then contributes additively to the input for the current hidden state. An mRNN allows the current input (a character in the original example) to affect the hidden-state dynamics by determining the entire hidden-to-hidden matrix (which defines the non-linear dynamics), in addition to providing an additive bias. To achieve this, the authors modify the RNN so that its hidden-to-hidden weight matrix is a (learned) function of the current input $x_t$:

$$h_t = \tanh\left(W_{hx} x_t + W_{hh}^{(x_t)} h_{t-1} + b_h\right)$$

This is the same as the equation for a standard RNN, except that $W_{hh}$ is replaced with $W_{hh}^{(x_t)}$, allowing each input (character) to specify a different hidden-to-hidden weight matrix.
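Since storing a full hidden-to-hidden matrix per input symbol is impractical, the paper factorizes it through a layer of multiplicative factor units, $W_{hh}^{(x_t)} = W_{hf}\,\mathrm{diag}(W_{fx} x_t)\,W_{fh}$. A minimal sketch of one step (sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
V, H, F = 5, 8, 6   # vocab size, hidden size, number of factors

W_hx = rng.normal(0, 0.1, (H, V))   # visible-to-hidden
W_hf = rng.normal(0, 0.1, (H, F))   # factors-to-hidden
W_fh = rng.normal(0, 0.1, (F, H))   # hidden-to-factors
W_fx = rng.normal(0, 0.1, (F, V))   # input gating of the factors
b = np.zeros(H)

def mrnn_step(h_prev, x_onehot):
    """One mRNN step: the current input reweights the factor layer, so
    each input symbol induces its own effective hidden-to-hidden matrix
    W_hf @ diag(W_fx @ x) @ W_fh without storing V separate matrices."""
    f = (W_fx @ x_onehot) * (W_fh @ h_prev)
    return np.tanh(W_hx @ x_onehot + W_hf @ f + b)

h = np.zeros(H)
for char in [0, 3, 1]:          # a toy character sequence
    h = mrnn_step(h, np.eye(V)[char])
```

The elementwise product in `f` is the multiplicative connection: the input does not merely add to the pre-activation, it rescales how the previous hidden state is propagated.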

Sequential · Introduced 2011 · 1 paper

TSRUs

TSRUs, or Transformation-based Spatial Recurrent Unit s, is a modification of a ConvGRU used in the TriVD-GAN architecture for video generation. It largely follows TSRUc, but computes each intermediate output in a fully sequential manner: as in TSRUc, the candidate output is given access to the warped previous state, and additionally the update gate is given access to both the warped state and the candidate output, so as to make an informed decision prior to mixing. In the module's update equations, $\sigma$ and $\rho$ denote the elementwise sigmoid and ReLU functions respectively, $*$ represents a convolution, and brackets represent a feature concatenation.

Sequential · Introduced 2000 · 1 paper

mBARTHez

BARThez is a self-supervised transfer learning model for the French language based on BART. Compared to existing BERT-based French language models such as CamemBERT and FlauBERT, BARThez is well-suited for generative tasks, since not only its encoder but also its decoder is pretrained.

Sequential · Introduced 2000 · 1 paper

Cyclic Transformer

Sequential · Introduced 2000 · 1 paper

srBTAW (BTW)

Self-regularizing Boundary Time and Amplitude Warping

Sequential · Introduced 2000 · 1 paper

Pointer Sentinel-LSTM

The Pointer Sentinel-LSTM mixture model is a type of recurrent neural network that combines the advantages of standard softmax classifiers with those of a pointer component for effective and efficient language modeling. Rather than relying on the RNN hidden state to decide when to use the pointer, the model allows the pointer component itself to decide when to use the softmax vocabulary through a sentinel.
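The mixture can be sketched as follows (illustrative shapes and names, not the paper's code): the pointer's attention scores and the sentinel score are normalized together, so the sentinel's probability mass $g$ is exactly the weight handed to the softmax vocabulary, and the pointer keeps the rest.

```python
import numpy as np

def pointer_sentinel_mix(vocab_logits, ptr_logits, sentinel_logit,
                         context_token_ids):
    """Mix a vocabulary softmax with a pointer over context tokens; the
    sentinel, normalized jointly with the pointer scores, decides how much
    mass goes to the vocabulary component."""
    # Joint softmax over [pointer scores; sentinel]:
    z = np.concatenate([ptr_logits, [sentinel_logit]])
    z = np.exp(z - z.max())
    z /= z.sum()
    ptr_probs, g = z[:-1], z[-1]

    # Standard softmax over the vocabulary:
    v = np.exp(vocab_logits - vocab_logits.max())
    p_vocab = v / v.sum()

    # Scatter the pointer mass onto the tokens that appear in the context:
    p = g * p_vocab
    for prob, tok in zip(ptr_probs, context_token_ids):
        p[tok] += prob
    return p

p = pointer_sentinel_mix(
    vocab_logits=np.zeros(10),            # uniform vocabulary for the demo
    ptr_logits=np.array([2.0, -1.0, 0.5]),  # one score per context position
    sentinel_logit=0.0,
    context_token_ids=[7, 2, 7],          # token ids at those positions
)
```

Because the sentinel competes directly with the pointer scores, the pointer component itself decides when to defer to the vocabulary, rather than the RNN hidden state deciding separately.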

Sequential · Introduced 2000 · 1 paper

WaveTTS

WaveTTS is a Tacotron-based text-to-speech architecture that has two loss functions: 1) a time-domain loss, denoted the waveform loss, that measures the distortion between the natural and generated waveforms; and 2) a frequency-domain loss that measures the Mel-scale acoustic feature loss between the natural and generated acoustic features. The motivation arises from Tacotron 2, whose feature prediction network is trained independently of the WaveNet vocoder; at run-time, the two are artificially joined together. As a result, the framework suffers from a mismatch between frequency-domain acoustic features and the time-domain waveform. To overcome this mismatch, WaveTTS uses a joint time-frequency domain loss that effectively improves the quality of the synthesized voice.
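The shape of such a joint objective can be sketched as follows (illustrative only, not the paper's exact losses: an L1 waveform term plus an L1 term on log magnitude spectra of framed FFTs, standing in for the Mel-scale feature loss):

```python
import numpy as np

def joint_tf_loss(pred_wave, target_wave, frame=256, hop=128, alpha=0.5):
    """Weighted sum of a time-domain waveform loss and a frequency-domain
    spectral loss, so gradients see both kinds of mismatch at once."""
    time_loss = np.abs(pred_wave - target_wave).mean()

    def log_mag(w):
        # Windowed frames -> magnitude spectra (a crude STFT).
        frames = [w[i:i + frame] * np.hanning(frame)
                  for i in range(0, len(w) - frame + 1, hop)]
        return np.log(np.abs(np.fft.rfft(frames, axis=-1)) + 1e-6)

    freq_loss = np.abs(log_mag(pred_wave) - log_mag(target_wave)).mean()
    return alpha * time_loss + (1 - alpha) * freq_loss

t = np.linspace(0.0, 1.0, 4096)
target = np.sin(2 * np.pi * 220 * t)          # a 220 Hz tone
loss_same = joint_tf_loss(target, target)      # identical signals
loss_diff = joint_tf_loss(np.sin(2 * np.pi * 440 * t), target)
```

A prediction that matches the target in both domains drives both terms to zero, whereas a waveform that merely resembles the target's envelope is still penalized through the spectral term.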

Sequential · Introduced 2000 · 1 paper

timecausgabor

time-causal and time-recursive analogue of the Gabor transform

The time-causal and time-recursive analogue of the Gabor transform provides a way to define a Gabor-like time-frequency analysis for real-time signals, for which the future cannot be accessed. This is achieved by choosing the temporal window function in a windowed Fourier transform as the time-causal limit kernel, which is a temporal kernel that is (i) time-causal, (ii) time-recursive and (iii) obeys temporal scale covariance.

Sequential · Introduced 2000 · 1 paper

AdaRNN

AdaRNN is an adaptive RNN that learns an adaptive model through two modules: a Temporal Distribution Characterization (TDC) and a Temporal Distribution Matching (TDM) algorithm. First, to better characterize the distribution information in the time series, TDC splits the training data into the most diverse periods, i.e., periods with a large distribution gap, inspired by the principle of maximum entropy. After that, the temporal distribution matching (TDM) algorithm is used to dynamically reduce the distribution divergence using an RNN-based model.

Sequential · Introduced 2000 · 1 paper
Page 2 of 2