Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Few-shot Relational Reasoning via Connection Subgraph Pretraining

Qian Huang, Hongyu Ren, Jure Leskovec

2022-10-13 · Meta-Learning · Few-Shot Image Classification · Relational Reasoning
Paper · PDF · Code (official)

Abstract

The few-shot knowledge graph (KG) completion task aims to perform inductive reasoning over the KG: given only a few support triplets of a new relation $\bowtie$ (e.g., (chop, $\bowtie$, kitchen), (read, $\bowtie$, library)), the goal is to predict the query triplets of the same unseen relation $\bowtie$, e.g., (sleep, $\bowtie$, ?). Current approaches cast the problem in a meta-learning framework, where the model must first be jointly trained over many training few-shot tasks, each defined by its own relation, so that learning/prediction on the target few-shot task can be effective. However, in real-world KGs, curating many training tasks is a challenging, ad hoc process. Here we propose the Connection Subgraph Reasoner (CSR), which can make predictions for the target few-shot task directly, without pre-training on a human-curated set of training tasks. The key to CSR is that we explicitly model a shared connection subgraph between support and query triplets, inspired by the principle of eliminative induction. To adapt to a specific KG, we design a corresponding self-supervised pretraining scheme whose objective is to reconstruct automatically sampled connection subgraphs. The pretrained model can then be applied directly to target few-shot tasks, without the need for training few-shot tasks. Extensive experiments on real KGs, including NELL, FB15K-237, and ConceptNet, demonstrate the effectiveness of our framework: even a learning-free implementation of CSR performs competitively with existing methods on target few-shot tasks, and with pretraining, CSR achieves significant gains of up to 52% on the more challenging inductive few-shot tasks, where the entities are also unseen during (pre)training.
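To make the task setup concrete, below is a minimal Python sketch of a few-shot task for an unseen relation and a toy connection-subgraph extraction. The toy KG, the relation names, and the `connection_subgraph` helper are illustrative assumptions, not the authors' implementation: CSR learns to match and reconstruct such subgraphs with a pretrained model, whereas this sketch only enumerates the short paths that connect each support pair.

```python
# Hypothetical sketch of the few-shot KG completion setup from the abstract.
# The KG, entity/relation names, and helper below are illustrative only.
from collections import defaultdict

# A KG as a set of (head, relation, tail) triplets.
kg = {
    ("chop", "done_with", "knife"), ("knife", "found_in", "kitchen"),
    ("read", "done_with", "book"), ("book", "found_in", "library"),
    ("sleep", "done_with", "bed"), ("bed", "found_in", "bedroom"),
}

# Few-shot task for an unseen relation ⋈: two support pairs and one query.
support = [("chop", "kitchen"), ("read", "library")]  # known (h, ⋈, t)
query = ("sleep", None)                               # predict (sleep, ⋈, ?)

def connection_subgraph(kg, head, tail, max_hops=2):
    """Collect triplets on paths of length <= max_hops from head to tail.

    This approximates a 'connection subgraph' for one support pair:
    the KG evidence that connects the two entities."""
    out = defaultdict(list)
    for h, r, t in kg:
        out[h].append((r, t))
    subgraph, frontier = set(), [(head, [])]
    for _ in range(max_hops):
        next_frontier = []
        for node, path in frontier:
            for r, t in out[node]:
                new_path = path + [(node, r, t)]
                if t == tail:
                    subgraph.update(new_path)  # path reached the tail
                else:
                    next_frontier.append((t, new_path))
        frontier = next_frontier
    return subgraph

# Both support pairs share the pattern h -done_with-> x -found_in-> t;
# applying that shared pattern to the query entity 'sleep' yields 'bedroom'.
for h, t in support:
    print(connection_subgraph(kg, h, t))
```

In this toy example, the subgraphs extracted for the two support pairs share a common structure, which is the kind of shared connection subgraph the abstract describes; eliminative induction amounts to discarding candidate tails whose connection subgraph does not match it.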

Results

Task                          | Dataset                      | Metric             | Value | Model
Image Classification          | ImageNet-FS (5-shot, all)    | Top-5 Accuracy (%) | 77.6  | KTCH (ResNet-50)
Image Classification          | ImageNet-FS (1-shot, novel)  | Top-5 Accuracy (%) | 58.1  | KTCH (ResNet-50)
Image Classification          | ImageNet-FS (2-shot, novel)  | Top-5 Accuracy (%) | 67.3  | KTCH (ResNet-50)
Few-Shot Image Classification | ImageNet-FS (5-shot, all)    | Top-5 Accuracy (%) | 77.6  | KTCH (ResNet-50)
Few-Shot Image Classification | ImageNet-FS (1-shot, novel)  | Top-5 Accuracy (%) | 58.1  | KTCH (ResNet-50)
Few-Shot Image Classification | ImageNet-FS (2-shot, novel)  | Top-5 Accuracy (%) | 67.3  | KTCH (ResNet-50)

Related Papers

Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Imbalanced Regression Pipeline Recommendation (2025-07-16)
CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
Mixture of Experts in Large Language Models (2025-07-15)
Iceberg: Enhancing HLS Modeling with Synthetic Data (2025-07-14)
Meta-Reinforcement Learning for Fast and Data-Efficient Spectrum Allocation in Dynamic Wireless Networks (2025-07-13)
ViT-ProtoNet for Few-Shot Image Classification: A Multi-Benchmark Evaluation (2025-07-12)
Geo-ORBIT: A Federated Digital Twin Framework for Scene-Adaptive Lane Geometry Detection (2025-07-11)