Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Methods

8,725 machine learning methods and techniques

Categories: All · Audio · Computer Vision · General · Graphs · Natural Language Processing · Reinforcement Learning · Sequential

SLR

Surrogate Lagrangian Relaxation

Surrogate Lagrangian Relaxation (SLR) is a decomposition-based optimization technique that relaxes coupling constraints using Lagrange multipliers and updates the multipliers with surrogate subgradient directions, so that subproblems only need to be solved approximately at each iteration.

General · Introduced 2000 · 92 papers

AMP

Adversarial Model Perturbation

Adversarial Model Perturbation (AMP) is based on the observation that flat local minima of the empirical risk cause the model to generalize better. AMP improves generalization by minimizing the AMP loss, which is obtained from the empirical risk by applying a worst-case norm-bounded perturbation at each point in the parameter space.

General · Introduced 2000 · 92 papers

1D CNN

1-Dimensional Convolutional Neural Networks

1D Convolutional Neural Networks are similar to the better-known and more established 2D Convolutional Neural Networks, and are mainly used on text and other 1D signals.
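
As a minimal sketch of the idea (plain NumPy, not any particular framework's API), a 1D convolution slides a small kernel along a sequence and produces one output per position:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation) of a signal with a kernel."""
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

# A moving-average kernel smooths the 1D signal.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
out = conv1d(x, np.array([0.5, 0.5]))  # pairwise averages
```

In a trained 1D CNN the kernel weights are learned rather than fixed; stacking several such layers with nonlinearities gives the full network.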

Computer Vision · Introduced 2000 · 92 papers

Highway Network

A Highway Network is an architecture designed to ease gradient-based training of very deep networks. They allow unimpeded information flow across several layers on "information highways". The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions.

General · Introduced 2000 · 91 papers

DeBERTa

DeBERTa is a Transformer-based neural language model that aims to improve on the BERT and RoBERTa models with two techniques: a disentangled attention mechanism and an enhanced mask decoder. In the disentangled attention mechanism, each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. The enhanced mask decoder replaces the output softmax layer to predict the masked tokens for model pre-training. In addition, a new virtual adversarial training method is used for fine-tuning to improve the model's generalization on downstream tasks.

Natural Language Processing · Introduced 2000 · 90 papers

Hard Swish

Hard Swish is a type of activation function based on Swish, but replaces the computationally expensive sigmoid with a piecewise linear analogue: h-swish(x) = x · ReLU6(x + 3) / 6.
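
The piecewise linear form is cheap to evaluate; a small NumPy sketch:

```python
import numpy as np

def hard_swish(x):
    """Hard Swish: x * ReLU6(x + 3) / 6, a piecewise-linear analogue of Swish."""
    return x * np.clip(x + 3.0, 0.0, 6.0) / 6.0

# Zero for x <= -3, identity for x >= 3, smooth-ish ramp in between.
vals = hard_swish(np.array([-4.0, -3.0, 0.0, 3.0]))
```

ReLU6 here is just a clip to [0, 6], so the whole function needs only a clip, a multiply, and a divide.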

General · Introduced 2000 · 90 papers

FEM

Features Explanation Method

General · Introduced 2000 · 90 papers

L1 Regularization

L1 Regularization is a regularization technique applied to the weights of a neural network. We minimize a loss function comprising both the primary loss function and a penalty on the L1 norm of the weights: L'(w) = L(w) + λ‖w‖₁, where λ is a value determining the strength of the penalty. In contrast to weight decay, L1 regularization promotes sparsity; i.e. some parameters have an optimal value of zero.
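
The penalized objective is a one-liner; a hedged NumPy sketch (names are illustrative):

```python
import numpy as np

def l1_penalized_loss(primary_loss, weights, lam):
    """Total loss = primary loss + lambda * L1 norm of the weights."""
    return primary_loss + lam * np.sum(np.abs(weights))

w = np.array([0.5, -1.5, 0.0])
total = l1_penalized_loss(2.0, w, lam=0.1)  # 2.0 + 0.1 * (0.5 + 1.5 + 0.0)
```

Because the penalty's subgradient has constant magnitude λ near zero, optimization tends to push small weights exactly to zero, which is the source of the sparsity.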

General · Introduced 1986 · 90 papers

Weight Normalization

Weight Normalization is a normalization method for training neural networks. It is inspired by batch normalization, but it is a deterministic method that does not share batch normalization's property of adding noise to the gradients. It reparameterizes each k-dimensional weight vector w in terms of a parameter vector v and a scalar parameter g, and performs stochastic gradient descent with respect to those parameters instead. Weight vectors are expressed in terms of the new parameters as w = (g / ‖v‖) v, where v is a k-dimensional vector, g is a scalar, and ‖v‖ denotes the Euclidean norm of v. This reparameterization has the effect of fixing the Euclidean norm of the weight vector: we now have ‖w‖ = g, independent of the parameters v.
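
The reparameterization itself is a single expression; a minimal NumPy sketch:

```python
import numpy as np

def weight_norm(v, g):
    """Reparameterize a weight vector as w = (g / ||v||) * v."""
    return g / np.linalg.norm(v) * v

v = np.array([3.0, 4.0])     # direction parameters (||v|| = 5 here)
w = weight_norm(v, g=2.0)    # the norm of w is exactly g, whatever v is
```

Gradients are then taken with respect to v and g separately, which decouples the direction of the weight vector from its length.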

General · Introduced 2000 · 88 papers

VGG-19

Visual Geometry Group 19 Layer CNN

Computer Vision · Introduced 2000 · 87 papers

Longformer

Longformer is a modified Transformer architecture. Traditional Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this, Longformer uses an attention pattern that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. The attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. The attention patterns utilised include: sliding window attention, dilated sliding window attention and global + sliding window. These can be viewed in the components section of this page.

Natural Language Processing · Introduced 2000 · 87 papers

FixMatch

FixMatch is an algorithm that first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Source: FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence.
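
The confidence-thresholding step can be sketched in a few lines of NumPy (the threshold value is illustrative):

```python
import numpy as np

def pseudo_labels(probs, threshold=0.95):
    """Keep a pseudo-label only where the weak-augmentation prediction is confident."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    mask = confidence >= threshold        # low-confidence samples are ignored
    return labels, mask

# Predicted class probabilities on weakly-augmented unlabeled images.
p = np.array([[0.97, 0.02, 0.01],   # confident -> pseudo-label kept
              [0.40, 0.35, 0.25]])  # uncertain -> dropped from the loss
labels, mask = pseudo_labels(p)
```

The retained labels are then used as targets for the strongly-augmented versions of the same images in the unsupervised loss term.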

General · Introduced 2000 · 85 papers

DropConnect

DropConnect generalizes Dropout by randomly dropping the weights rather than the activations, each kept with probability p. DropConnect is similar to Dropout in that it introduces dynamic sparsity within the model, but differs in that the sparsity is on the weights W rather than the output vectors of a layer. In other words, the fully connected layer with DropConnect becomes a sparsely connected layer in which the connections are chosen at random during the training stage. Note that this is not equivalent to setting W to be a fixed sparse matrix during training. For a DropConnect layer, the output is given as: r = a((M ⊙ W) v). Here r is the output of a layer, v is the input to a layer, W are the weight parameters, and M is a binary matrix encoding the connection information, where M_ij ∼ Bernoulli(p). Each element of the mask M is drawn independently for each example during training, essentially instantiating a different connectivity for each example seen. Additionally, the biases are also masked out during training.
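
A hedged NumPy sketch of the forward pass (ReLU stands in for the generic activation a(·)):

```python
import numpy as np

def dropconnect_forward(W, v, p, rng):
    """Forward pass r = a((M * W) v) with a fresh Bernoulli(p) mask M per example."""
    M = rng.random(W.shape) < p           # keep each weight with probability p
    return np.maximum((M * W) @ v, 0.0)   # elementwise mask, then ReLU

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))               # layer weights
v = np.ones(3)                            # layer input
r = dropconnect_forward(W, v, p=0.5, rng=rng)
```

Sampling a new mask per example is what distinguishes this from training with one fixed sparse weight matrix.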

General · Introduced 2000 · 84 papers

ASPP

Atrous Spatial Pyramid Pooling

Atrous Spatial Pyramid Pooling (ASPP) is a semantic segmentation module for resampling a given feature layer at multiple rates prior to convolution. This amounts to probing the original image with multiple filters that have complementary effective fields of view, thus capturing objects as well as useful image context at multiple scales. Rather than actually resampling features, the mapping is implemented using multiple parallel atrous convolutional layers with different sampling rates.

Computer Vision · Introduced 2000 · 83 papers

A2C

A2C, or Advantage Actor Critic, is a synchronous version of the A3C policy gradient method. As an alternative to the asynchronous implementation of A3C, A2C is a synchronous, deterministic implementation that waits for each actor to finish its segment of experience before updating, averaging over all of the actors. This more effectively uses GPUs due to larger batch sizes.
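
The core of the synchronous update is the advantage-weighted policy loss plus a value-regression loss, computed over the batch gathered from all actors at once. A minimal NumPy sketch with made-up numbers:

```python
import numpy as np

# One synchronous batch collected from all actors.
returns = np.array([1.0, 0.5, 2.0])       # empirical returns R_t
values = np.array([0.8, 0.7, 1.5])        # critic's value estimates V(s_t)
log_probs = np.array([-0.1, -0.3, -0.2])  # log pi(a_t | s_t)

advantages = returns - values                   # A(s,a) = R - V(s)
policy_loss = -(log_probs * advantages).mean()  # actor: favor better-than-expected actions
value_loss = (advantages ** 2).mean()           # critic: regress V toward the returns
```

In a real implementation both losses (plus an entropy bonus) are differentiated through the network; here they are just evaluated to show the structure.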

Reinforcement Learning · Introduced 2000 · 82 papers

Procrustes

Procrustes

General · Introduced 2000 · 81 papers

SAGA

SAGA is a method in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem.
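
A hedged sketch of the SAGA update on a toy least-squares problem (the step size rule and problem are illustrative, not from the original paper's experiments): each step uses a fresh gradient for one sample, the stored gradient for that sample, and the running mean of all stored gradients.

```python
import numpy as np

# Noiseless least squares: f_i(w) = 0.5 * (x_i . w - y_i)^2.
rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(d)
table = np.zeros((n, d))                       # stored gradient per sample
avg = table.mean(axis=0)                       # running mean of the table
lr = 1.0 / (3.0 * (X ** 2).sum(axis=1).max())  # conservative step from max smoothness
for _ in range(5000):
    i = rng.integers(n)
    g = (X[i] @ w - y[i]) * X[i]    # fresh gradient at the current iterate
    w -= lr * (g - table[i] + avg)  # variance-reduced step
    avg += (g - table[i]) / n       # keep the running mean consistent
    table[i] = g                    # store the new gradient for sample i
```

The `g - table[i] + avg` correction is what gives SAGA its low-variance gradient estimate and linear convergence on strongly convex problems.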

General · Introduced 2000 · 81 papers

SegNet

SegNet is a semantic segmentation model. This core trainable segmentation architecture consists of an encoder network and a corresponding decoder network, followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network. The role of the decoder network is to map the low-resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature maps. Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling.

Computer Vision · Introduced 2000 · 81 papers

TRPO

Trust Region Policy Optimization

Trust Region Policy Optimization, or TRPO, is a policy gradient method in reinforcement learning that avoids parameter updates that change the policy too much, via a KL divergence constraint on the size of the policy update at each iteration. Take the case of off-policy reinforcement learning, where the policy β for collecting trajectories on rollout workers is different from the policy π_θ to optimize for. The objective function in an off-policy model measures the total advantage over the state visitation distribution and actions, while the mismatch between the training data distribution and the true policy state distribution is compensated for with an importance sampling estimator: J(θ) = E_{s∼ρ^β, a∼β} [ (π_θ(a|s) / β(a|s)) Â(s, a) ]. When training on policy, theoretically the policy for collecting data is the same as the policy that we want to optimize. However, when rollout workers and optimizers run in parallel asynchronously, the behavior policy can get stale. TRPO considers this subtle difference: it labels the behavior policy as π_{θ_old}(a|s), and thus the objective function becomes: J(θ) = E [ (π_θ(a|s) / π_{θ_old}(a|s)) Â(s, a) ]. TRPO aims to maximize this objective subject to a trust region constraint which enforces that the distance between the old and new policies, measured by KL divergence, is small enough, within a parameter δ: E [ D_KL(π_{θ_old}(·|s) ‖ π_θ(·|s)) ] ≤ δ.

Reinforcement Learning · Introduced 2000 · 81 papers

ALS

Adaptive Label Smoothing

General · Introduced 2000 · 81 papers

FCOS

FCOS is an anchor-box free, proposal free, single-stage object detection model. By eliminating the predefined set of anchor boxes, FCOS avoids computation related to anchor boxes such as calculating overlapping during training. It also avoids all hyper-parameters related to anchor boxes, which are often very sensitive to the final detection performance.

Computer Vision · Introduced 2000 · 80 papers

Invertible 1x1 Convolution

The Invertible 1x1 Convolution is a type of convolution used in flow-based generative models that generalizes a permutation of the channel ordering. The weight matrix is initialized as a random rotation matrix. The log-determinant of an invertible 1 × 1 convolution of an h × w × c tensor h with a c × c weight matrix W is straightforward to compute: log |det(d conv2d(h; W) / d h)| = h · w · log |det W|.
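
A small NumPy sketch of the initialization and the log-determinant term (dimensions are illustrative): a random rotation can be obtained from the QR decomposition of a Gaussian matrix, and at that point |det W| = 1, so the log-determinant contribution starts at zero.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 8                                          # number of channels
W, _ = np.linalg.qr(rng.normal(size=(c, c)))   # orthogonal init: |det W| = 1

h, w = 16, 16                                  # spatial size of the feature map
log_det = h * w * np.log(np.abs(np.linalg.det(W)))  # log-likelihood contribution
```

During training W drifts away from orthogonality, and this h·w·log|det W| term is what keeps the change of variables tractable.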

Computer Vision · Introduced 2000 · 79 papers

ROCKET

Random Convolutional Kernel Transform

Linear classifier using random convolutional kernels applied to time series.
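
A hedged sketch of the transform stage (simplified: real ROCKET also randomizes kernel length, dilation, padding, and bias): each random, untrained kernel is convolved with the series and pooled into two features, the maximum and the proportion of positive values (PPV).

```python
import numpy as np

def rocket_features(series, kernels):
    """Map a time series to (max, PPV) features, one pair per random kernel."""
    feats = []
    for k in kernels:
        conv = np.convolve(series, k, mode="valid")
        feats += [conv.max(), (conv > 0).mean()]  # max pooling and PPV pooling
    return np.array(feats)

rng = np.random.default_rng(0)
kernels = [rng.normal(size=9) for _ in range(100)]  # random, untrained kernels
x = np.sin(np.linspace(0, 10, 200))
features = rocket_features(x, kernels)  # then fed to a linear (e.g. ridge) classifier
```

Because the kernels are never trained, the transform is fast, and all learning happens in the cheap linear classifier on top.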

Sequential · Introduced 2000 · 79 papers

Visual Analytics

General · Introduced 2000 · 78 papers

Random Horizontal Flip

RandomHorizontalFlip is a type of image data augmentation which horizontally flips a given image with a given probability.
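
A minimal NumPy sketch (an H × W array stands in for the image; real libraries operate on their own image types):

```python
import numpy as np

def random_horizontal_flip(img, p=0.5, rng=None):
    """Flip an H x W (x C) image left-right with probability p."""
    rng = rng or np.random.default_rng()
    return img[:, ::-1] if rng.random() < p else img

img = np.arange(6).reshape(2, 3)
flipped = random_horizontal_flip(img, p=1.0)  # p=1 forces the flip
```

With p=0.5 (the usual default) roughly half of the training images are mirrored each epoch.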

Computer Vision · Introduced 2000 · 78 papers

CoT Prompting

Chain-of-thought prompting

Chain-of-thought prompts contain a series of intermediate reasoning steps, and they are shown to significantly improve the ability of large language models to perform certain tasks that involve complex reasoning (e.g., arithmetic, commonsense reasoning, symbolic reasoning, etc.)
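
A concrete sketch of what such a prompt looks like (the second question is invented for illustration; the exemplar is the well-known tennis-ball example):

```python
# A minimal chain-of-thought prompt: the exemplar shows intermediate reasoning
# steps, encouraging the model to reason step by step on the new question.
exemplar = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)
question = "Q: A baker makes 4 trays of 6 rolls. How many rolls in total?\nA:"
prompt = exemplar + "\n" + question  # this string is sent to the language model
```

Without the worked exemplar, the same model is more likely to answer directly and make arithmetic mistakes; the intermediate steps are the whole point of the technique.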

General · Introduced 2000 · 78 papers

Huber loss

The Huber loss function describes the penalty incurred by an estimation procedure f. Huber (1964) defines the loss piecewise:

L_δ(a) = (1/2) a²  for |a| ≤ δ,
L_δ(a) = δ (|a| − (1/2) δ)  otherwise.

This function is quadratic for small values of a and linear for large values, with equal values and slopes of the two sections at the points where |a| = δ. The variable a often refers to the residual, that is, the difference between the observed and predicted values, a = y − f(x), so the former can be expanded to:

L_δ(y, f(x)) = (1/2) (y − f(x))²  for |y − f(x)| ≤ δ,
L_δ(y, f(x)) = δ (|y − f(x)| − (1/2) δ)  otherwise.

The Huber loss is the convolution of the absolute value function with the rectangular function, scaled and translated. It thus "smooths out" the former's corner at the origin.
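
The piecewise definition translates directly into code; a minimal NumPy sketch:

```python
import numpy as np

def huber(residual, delta=1.0):
    """Quadratic for |a| <= delta, linear beyond; values and slopes match at |a| = delta."""
    a = np.abs(residual)
    return np.where(a <= delta, 0.5 * residual ** 2, delta * (a - 0.5 * delta))

losses = huber(np.array([0.5, 2.0]))  # quadratic branch, then linear branch
```

For delta = 1, a residual of 0.5 gives 0.5 · 0.25 = 0.125, while a residual of 2 gives 1 · (2 − 0.5) = 1.5, showing the gentler linear growth on outliers.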

General · Introduced 2000 · 77 papers

LARS

Layer-wise Adaptive Rate Scaling, or LARS, is a large batch optimization technique. There are two notable differences between LARS and other adaptive algorithms such as Adam or RMSProp: first, LARS uses a separate learning rate for each layer and not for each weight. And second, the magnitude of the update is controlled with respect to the weight norm for better control of training speed.
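
The per-layer "trust ratio" can be sketched in a few lines (a simplified form; LARS as published also folds in weight decay and momentum):

```python
import numpy as np

def lars_local_lr(weights, grad, eta=0.001, eps=1e-9):
    """Layer-wise trust ratio: scale this layer's step by ||w|| / ||g||."""
    return eta * np.linalg.norm(weights) / (np.linalg.norm(grad) + eps)

w = np.full(10, 2.0)    # layer with large weight norm ...
g = np.full(10, 0.01)   # ... and a small gradient norm
local_lr = lars_local_lr(w, g)
update = local_lr * g   # combined with the global schedule in practice
```

Tying the step size to the weight-to-gradient norm ratio keeps the relative change of each layer's weights bounded, which is what stabilizes very large-batch training.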

General · Introduced 2000 · 77 papers

Griffin-Lim Algorithm

The Griffin-Lim Algorithm (GLA) is a phase reconstruction method based on the redundancy of the short-time Fourier transform (STFT). It promotes the consistency of a spectrogram by iterating two projections, where a spectrogram is said to be consistent when its inter-bin dependency owing to the redundancy of STFT is retained. GLA is based only on this consistency and does not take any prior knowledge about the target signal into account. The algorithm recovers a complex-valued spectrogram, which is consistent and maintains the given amplitude A, by the following alternating projection procedure: X^[m+1] = P_C(P_A(X^[m])), where X^[m] is a complex-valued spectrogram updated through the iteration, P_S is the metric projection onto a set S, and m is the iteration index. Here, C is the set of consistent spectrograms, and A is the set of spectrograms whose amplitude is the same as the given one. The metric projections onto these sets are given by: P_C(X) = G G† X and P_A(X) = A ⊙ X ⊘ |X|, where G represents the STFT, G† is the pseudo-inverse of the STFT (iSTFT), ⊙ and ⊘ denote element-wise multiplication and division, respectively, and division by zero is replaced by zero. GLA can be seen as an algorithm for the following optimization problem: min_X ‖X − P_C(X)‖²_F subject to |X| = A, where ‖·‖_F is the Frobenius norm. This minimizes the energy of the inconsistent components under the constraint that the amplitude must equal the given one. Although GLA has been widely utilized because of its simplicity, it often requires many iterations before it converges to a certain spectrogram, and results in low reconstruction quality. This is because the cost function only requires consistency, and the characteristics of the target signal are not taken into account.

Audio · Introduced 1984 · 75 papers

Pyramid Pooling Module

A Pyramid Pooling Module is a module for semantic segmentation which acts as an effective global contextual prior. The motivation is that a problem with using a convolutional network like a ResNet is that, while the theoretical receptive field is already larger than the input image, the empirical receptive field is much smaller, especially on high-level layers. As a result, many networks do not sufficiently incorporate the important global scene prior. The PPM is an effective global prior representation that addresses this problem. It contains information at different scales, varying among different sub-regions. Using a 4-level pyramid, the pooling kernels cover the whole, half of, and small portions of the image. These are fused as the global prior, which is then concatenated with the original feature map in the final part of the network.

Computer Vision · Introduced 2000 · 75 papers

HRNet

HRNet, or High-Resolution Net, is a general-purpose convolutional neural network for tasks like semantic segmentation, object detection and image classification. It is able to maintain high-resolution representations through the whole process. The network starts from a high-resolution convolution stream, gradually adds high-to-low-resolution convolution streams one by one, and connects the multi-resolution streams in parallel. The resulting network consists of several stages (four in the paper), and the n-th stage contains n streams corresponding to n resolutions. The authors conduct repeated multi-resolution fusions by exchanging information across the parallel streams over and over.

Computer Vision · Introduced 2000 · 75 papers

EWC

Elastic Weight Consolidation

A method to overcome catastrophic forgetting in neural networks during continual learning, by slowing down learning on weights that are important for previously seen tasks.

General · Introduced 2000 · 74 papers

MPNN

Message Passing Neural Network

There are at least eight notable examples of models from the literature that can be described using the Message Passing Neural Network (MPNN) framework. For simplicity, we describe MPNNs which operate on undirected graphs G with node features x_v and edge features e_vw. It is trivial to extend the formalism to directed multigraphs. The forward pass has two phases, a message passing phase and a readout phase. The message passing phase runs for T time steps and is defined in terms of message functions M_t and vertex update functions U_t. During the message passing phase, hidden states h_v^t at each node in the graph are updated based on messages m_v^{t+1} according to: m_v^{t+1} = Σ_{w ∈ N(v)} M_t(h_v^t, h_w^t, e_vw) and h_v^{t+1} = U_t(h_v^t, m_v^{t+1}), where N(v) denotes the neighbors of v in graph G. The readout phase computes a feature vector for the whole graph using some readout function R according to ŷ = R({h_v^T | v ∈ G}). The message functions M_t, vertex update functions U_t, and readout function R are all learned differentiable functions. R operates on the set of node states and must be invariant to permutations of the node states in order for the MPNN to be invariant to graph isomorphism.

Graphs · Introduced 2000 · 74 papers

MobileNetV1

MobileNet is a type of convolutional neural network designed for mobile and embedded vision applications. It is based on a streamlined architecture that uses depthwise separable convolutions to build lightweight deep neural networks with low latency for mobile and embedded devices.

Computer Vision · Introduced 2000 · 74 papers

TransE

TransE is an energy-based model that produces knowledge base embeddings. It models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Relationships are represented as translations in the embedding space: if (h, ℓ, t) holds, then the embedding of the tail entity t should be close to the embedding of the head entity h plus some vector that depends on the relationship ℓ.
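
The translation idea reduces to a distance computation; a minimal NumPy sketch with toy 2-D embeddings (real models learn these vectors and use a margin-based ranking loss):

```python
import numpy as np

def transe_energy(h, r, t):
    """Energy of a triple: distance between h + r and t (L2 norm here)."""
    return np.linalg.norm(h + r - t)

h = np.array([1.0, 0.0])  # head entity embedding
r = np.array([0.0, 1.0])  # relation embedding (a translation)
t = np.array([1.0, 1.0])  # tail entity embedding
energy = transe_energy(h, r, t)  # 0 for a perfectly-modeled triple
```

Training pushes the energy of observed triples below that of corrupted triples (with the head or tail replaced by a random entity).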

Graphs · Introduced 2000 · 73 papers

Adaptive Softmax

Adaptive Softmax is a speedup technique for the computation of probability distributions over words. The adaptive softmax is inspired by the class-based hierarchical softmax, where the word classes are built to minimize the computation time. Adaptive softmax achieves efficiency by explicitly taking into account the computation time of matrix-multiplication on parallel systems and combining it with a few important observations, namely keeping a shortlist of frequent words in the root node and reducing the capacity of rare words.

General · Introduced 2000 · 72 papers
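The shortlist-plus-clusters structure can be sketched with a minimal two-cluster version: the root softmax scores the frequent words plus a single "tail cluster" logit, and rare words are scored in a lower-dimensional projection (the reduced capacity the text mentions). Sizes here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, shortlist, tail = 16, 50, 450     # hidden size, frequent words, rare words

# Root cluster scores the shortlist plus one extra "tail cluster" logit.
W_root = rng.normal(size=(d, shortlist + 1)) * 0.1
# Rare words get reduced capacity: project the hidden state down to d // 4.
P_tail = rng.normal(size=(d, d // 4)) * 0.1
W_tail = rng.normal(size=(d // 4, tail)) * 0.1

def log_prob(hidden, word):
    root = hidden @ W_root
    root_logp = root - np.logaddexp.reduce(root)   # log-softmax over the root
    if word < shortlist:
        return root_logp[word]
    # log P(word) = log P(tail cluster) + log P(word | tail cluster)
    t = (hidden @ P_tail) @ W_tail
    return root_logp[-1] + (t - np.logaddexp.reduce(t))[word - shortlist]

hidden = rng.normal(size=d)
total = sum(np.exp(log_prob(hidden, w)) for w in range(shortlist + tail))
print(total)  # close to 1.0: a valid distribution over the full vocabulary
```

Because most tokens in natural text fall in the shortlist, the expensive tail matrices are touched rarely, which is where the speedup comes from.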

Deep Belief Network

A Deep Belief Network (DBN) is a multi-layer generative graphical model. DBNs have bi-directional connections (RBM-type connections) on the top layer while the bottom layers only have top-down connections. They are trained using layerwise pre-training. Pre-training occurs by training the network component by component, bottom up: treating the first two layers as an RBM and training it, then treating the second and third layers as another RBM and training those parameters, and so on. Source: Origins of Deep Learning. Image source: Wikipedia.

Computer Vision · Introduced 2009 · 71 papers

Barlow Twins

Barlow Twins is a self-supervised learning method that applies redundancy reduction — a principle first proposed in neuroscience — to self-supervised learning. The objective function measures the cross-correlation matrix between the embeddings of two identical networks fed with distorted versions of a batch of samples, and tries to make this matrix close to the identity. This causes the embedding vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors. Barlow Twins requires neither large batches nor asymmetry between the network twins such as a predictor network, gradient stopping, or a moving average on the weight updates. Intriguingly, it benefits from very high-dimensional output vectors.

General · Introduced 2000 · 71 papers
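The objective described above is compact enough to sketch directly: standardize each embedding dimension over the batch, form the cross-correlation matrix between the two views, push its diagonal toward 1 and its off-diagonal toward 0. A minimal numpy version (the trade-off weight `lam` is illustrative):

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    # Standardize each embedding dimension over the batch.
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    n, d = z1.shape
    c = (z1.T @ z2) / n                           # cross-correlation matrix, d x d
    on_diag = ((np.diag(c) - 1) ** 2).sum()       # invariance: diagonal toward 1
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy reduction
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(256, 8))
# Identical embeddings from the two "views" give a near-minimal loss;
# adding noise to one view decorrelates them and raises it.
ident = barlow_twins_loss(z, z)
noisy = barlow_twins_loss(z, z + rng.normal(size=z.shape))
assert ident < noisy
```

Note that the loss compares batches, not sample pairs, which is why the method does not need the large batches or twin asymmetries used by contrastive alternatives.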

Local SGD

Local SGD is a distributed training technique that runs SGD independently in parallel on different workers and averages the workers' parameter sequences only once in a while, rather than synchronizing after every step.

General · Introduced 2000 · 69 papers
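A minimal simulation shows the communication pattern: each worker takes several local SGD steps on its own noisy gradients, and only then are the parameters averaged and broadcast back. The quadratic objective and step counts here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([3.0, -2.0])     # minimizer of the toy quadratic loss

def noisy_grad(w):
    # Gradient of ||w - target||^2 plus simulated stochastic noise.
    return 2 * (w - target) + rng.normal(scale=0.1, size=2)

workers = [rng.normal(size=2) for _ in range(4)]
lr, local_steps = 0.05, 8

for _ in range(25):
    # Each worker runs SGD independently for `local_steps` steps...
    for i in range(len(workers)):
        for _ in range(local_steps):
            workers[i] = workers[i] - lr * noisy_grad(workers[i])
    # ...then parameters are averaged only once per communication round.
    avg = np.mean(workers, axis=0)
    workers = [avg.copy() for _ in workers]

print(np.linalg.norm(workers[0] - target))  # small: converged near the target
```

Compared with synchronous SGD, this exchanges parameters once every `local_steps` updates instead of every update, cutting communication by that factor.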

ADA

Adaptive Discriminator Augmentation (ADA) is a technique for training generative adversarial networks with limited data. It applies a broad pipeline of non-leaking augmentations to all images shown to the discriminator and adaptively tunes the overall augmentation strength using a heuristic that measures how much the discriminator is overfitting.

Computer Vision · Introduced 2000 · 68 papers
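The adaptive part of ADA is a feedback controller on the augmentation probability p: an overfitting heuristic r_t is measured during training, and p is nudged up when r_t exceeds a target and down otherwise. A minimal sketch of such a controller; the target, step size, and r_t values below are illustrative, not the paper's:

```python
# Feedback controller for the augmentation probability p. The heuristic r_t
# is assumed to rise when the discriminator overfits (e.g. the fraction of
# real images it scores positively).
def update_p(p, r_t, target=0.6, step=0.01):
    p += step if r_t > target else -step
    return min(max(p, 0.0), 1.0)   # keep p a valid probability

p = 0.0
for r_t in [0.7, 0.8, 0.75, 0.5, 0.9]:   # simulated overfitting measurements
    p = update_p(p, r_t)
print(round(p, 2))  # 0.03
```

Because p starts at zero and only grows when overfitting is detected, the augmentations add no cost when the dataset is large enough not to need them.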

DeepWalk

DeepWalk learns embeddings (social representations) of a graph's vertices by modeling a stream of short random walks. Social representations are latent features of the vertices that capture neighborhood similarity and community membership. These latent representations encode social relations in a continuous vector space with a relatively small number of dimensions. It generalizes neural language models to process a special language composed of a set of randomly generated walks. The goal is to learn a latent representation, not only a probability distribution of node co-occurrences, and so it introduces a mapping function $\Phi \colon v \in V \mapsto \mathbb{R}^{|V| \times d}$. This mapping $\Phi$ represents the latent social representation associated with each vertex $v$ in the graph. In practice, $\Phi$ is represented by a $|V| \times d$ matrix of free parameters.

Graphs · Introduced 2000 · 67 papers
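The "special language" of walks is easy to generate: each walk starts at a vertex and repeatedly hops to a uniformly random neighbor. A minimal sketch of the corpus-building step (the walk length and count are illustrative); the resulting sequences are then fed to a skip-gram language model with vertices as the vocabulary:

```python
import random

random.seed(0)
# Toy graph as adjacency lists.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}

def random_walk(g, start, length):
    walk = [start]
    while len(walk) < length:
        walk.append(random.choice(g[walk[-1]]))  # hop to a uniform neighbor
    return walk

# A "corpus" of short walks, several starting from each vertex. DeepWalk
# trains a skip-gram model (word2vec-style) on these sequences so that
# vertices co-occurring in walks get nearby embeddings.
corpus = [random_walk(graph, v, 6) for v in graph for _ in range(10)]
print(len(corpus), corpus[0])
```

Vertices that share many walk contexts — i.e., neighborhoods and communities — end up with similar embeddings, which is exactly the neighborhood similarity the description refers to.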

Adaptive Input Representations

Adaptive Input Embeddings extend the adaptive softmax to input word representations. The factorization assigns more capacity to frequent words and reduces the capacity for less frequent words with the benefit of reducing overfitting to rare words.

Natural Language Processing · Introduced 2000 · 66 papers
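The factorization can be sketched directly: the frequency-ordered vocabulary is split into bands, rarer bands get smaller embedding tables, and each band's embeddings are projected back up to the model dimension. The band boundaries and dimensions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32   # model dimension
# Frequency-ordered vocabulary split into bands: (start, end, embedding dim).
# Rarer bands get smaller embeddings, reducing parameters for rare words.
bands = [(0, 100, 32), (100, 1000, 16), (1000, 5000, 8)]
tables = [(rng.normal(size=(end - start, k)) * 0.01,   # band embedding table
           rng.normal(size=(k, d)) * 0.01)             # projection up to d
          for start, end, k in bands]

def embed(word):
    for (start, end, k), (E, P) in zip(bands, tables):
        if start <= word < end:
            return E[word - start] @ P   # look up in band, project to d
    raise ValueError("out of vocabulary")

print(embed(5).shape, embed(2500).shape)  # both (32,) after projection
```

Here the rarest band stores 8 numbers per word instead of 32, which is the reduced capacity (and reduced overfitting) for rare words that the description mentions.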

AO

Artemisinin Optimization based on Malaria Therapy: Algorithm and Applications to Medical Image Segmentation

This study proposes an efficient metaheuristic algorithm called the Artemisinin Optimization (AO) algorithm. The algorithm draws inspiration from artemisinin therapy for malaria, which comprehensively eradicates malarial parasites within the human body. AO comprises three optimization stages: a comprehensive elimination phase simulating global exploration, a local clearance phase for local exploitation, and a post-consolidation phase to enhance the algorithm's ability to escape local optima. The paper first conducts a qualitative analysis of AO, explaining its characteristics in searching for the optimal solution. AO is then tested on the classical IEEE CEC 2014 and the latest IEEE CEC 2022 benchmark function sets to assess its adaptability, with comparative analyses against eight well-established algorithms and eight high-performance improved algorithms. Statistical analyses of convergence curves and qualitative metrics reveal AO's robust competitiveness. Lastly, AO is applied to breast cancer pathology image segmentation. Using 15 authentic medical images at six threshold levels, AO's segmentation performance is compared against eight distinguished algorithms. Experimental results demonstrate AO's superiority over the contrast algorithms in terms of image segmentation accuracy, Feature Similarity Index (FSIM), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM). These results emphasize AO's efficiency and its potential in real-world optimization applications.

General · Introduced 2000 · 66 papers

Residual GRU

A Residual GRU is a gated recurrent unit (GRU) that incorporates the idea of residual connections from ResNets.

Sequential · Introduced 2000 · 66 papers
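The combination is a standard GRU cell whose output has the cell's input added back, as in a ResNet block. A minimal single-step sketch with hypothetical weights (input and hidden sizes kept equal so the residual addition is shape-compatible):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)
d = 8
# Hypothetical GRU cell weights; six random d x d matrices for illustration.
Wz, Uz, Wr, Ur, Wh, Uh = (rng.normal(size=(d, d)) * 0.1 for _ in range(6))

def residual_gru_step(x, h):
    z = sigmoid(x @ Wz + h @ Uz)           # update gate
    r = sigmoid(x @ Wr + h @ Ur)           # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    h_new = (1 - z) * h + z * h_tilde      # standard GRU state update
    return h_new + x                       # residual connection to the input

x, h = rng.normal(size=d), np.zeros(d)
out = residual_gru_step(x, h)
print(out.shape)  # (8,)
```

As in ResNets, the identity path lets gradients flow to earlier layers even when the gated path saturates.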

Early exiting

Early exiting using confidence measures

A model equipped with early exiting returns a prediction from an intermediate (hidden) layer whenever a confidence measure at that layer is high enough, skipping the computation of the remaining layers.

General · Introduced 2000 · 66 papers
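A common confidence measure is the maximum softmax probability of a per-layer classifier head. A minimal sketch, assuming a hypothetical 4-layer network where each layer has its own head and the exit threshold is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical 4-layer network; each layer has its own classifier head.
layers = [rng.normal(size=(16, 16)) * 0.3 for _ in range(4)]
heads  = [rng.normal(size=(16, 5)) * 0.3 for _ in range(4)]

def predict_with_early_exit(x, threshold=0.6):
    for i, (W, H) in enumerate(zip(layers, heads)):
        x = np.tanh(x @ W)
        p = softmax(x @ H)
        if p.max() >= threshold:       # confident enough: exit here
            return p.argmax(), i
    return p.argmax(), len(layers) - 1  # fall through to the final layer

pred, exit_layer = predict_with_early_exit(rng.normal(size=16))
print(pred, exit_layer)
```

Easy inputs exit after one or two layers while hard inputs use the full depth, so average inference cost drops without changing the worst case.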
Page 6 of 175