Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


TagRec: Automated Tagging of Questions with Hierarchical Learning Taxonomy

Venktesh V, Mukesh Mohania, Vikram Goyal

2021-07-03 · Multi-class Classification · Retrieval · Question-Answer categorization

Paper · PDF · Code

Abstract

Online educational platforms organize academic questions based on a hierarchical learning taxonomy (subject-chapter-topic). Automatically tagging new questions with the existing taxonomy helps organize them into the different classes of the hierarchical taxonomy so that they can be searched by facets like chapter. This task can be formulated as a flat multi-class classification problem. However, flat classification-based methods ignore the semantic relatedness between the terms in the hierarchical taxonomy and the questions. Some traditional methods also suffer from class imbalance because they consider only the leaf nodes, ignoring the hierarchy. Hence, we formulate the problem as a similarity-based retrieval task in which we optimize the semantic relatedness between the taxonomy and the questions. We demonstrate that our method handles unseen labels and can therefore be used for taxonomy tagging in the wild. In this method, we augment the question with its corresponding answer to capture more semantic information and then align the question-answer pair's contextualized embedding with the corresponding label (taxonomy) vector representation. The representations are aligned by fine-tuning a transformer-based model with a loss function that combines cosine similarity and a hinge rank loss. The loss function maximizes the similarity between the question-answer pair and the correct label representation and minimizes the similarity to unrelated labels. Finally, we perform experiments on two real-world datasets. We show that the proposed learning method outperforms representations learned with the multi-class classification method and other state-of-the-art methods by 6%, as measured by Recall@k. We also demonstrate the performance of the proposed method on unseen but related learning content, such as learning objectives, without re-training the network.
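The abstract describes a loss built from cosine similarities with a hinge rank component: the correct label's similarity to the question-answer embedding should exceed every wrong label's similarity by some margin. The paper's exact formulation and hyperparameters are not shown on this page; the following is a minimal plain-Python sketch of the hinge-rank-over-cosine idea only, where the `margin` value and all function names are illustrative assumptions, not the authors' code.

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def hinge_rank_loss(qa_vec, label_vecs, correct_idx, margin=0.1):
    """Sum over wrong labels of max(0, margin - cos(qa, correct) + cos(qa, wrong)).

    The loss is zero only when the correct label's cosine similarity
    beats every wrong label's by at least `margin` (an assumed value).
    """
    pos_sim = cosine(qa_vec, label_vecs[correct_idx])
    loss = 0.0
    for j, label_vec in enumerate(label_vecs):
        if j != correct_idx:
            loss += max(0.0, margin - pos_sim + cosine(qa_vec, label_vec))
    return loss
```

In training, such a term would be minimized over batches of question-answer embeddings and label (taxonomy) vectors, pushing correct pairs together and unrelated pairs apart.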

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Question-Answer categorization | QC-Science | R@5 | 0.86 | TagRec (BERT+USE) |
| Question-Answer categorization | QC-Science | R@10 | 0.92 | TagRec (BERT+USE) |
| Question-Answer categorization | QC-Science | R@15 | 0.95 | TagRec (BERT+USE) |
| Question-Answer categorization | QC-Science | R@20 | 0.96 | TagRec (BERT+USE) |
| Question-Answer categorization | QC-Science | R@5 | 0.85 | TagRec (BERT+Sent BERT) |
| Question-Answer categorization | QC-Science | R@10 | 0.93 | TagRec (BERT+Sent BERT) |
| Question-Answer categorization | QC-Science | R@15 | 0.95 | TagRec (BERT+Sent BERT) |
| Question-Answer categorization | QC-Science | R@20 | 0.97 | TagRec (BERT+Sent BERT) |
| Question-Answer categorization | QC-Science | R@5 | 0.79 | BERT+sent2vec |
| Question-Answer categorization | QC-Science | R@10 | 0.89 | BERT+sent2vec |
| Question-Answer categorization | QC-Science | R@15 | 0.93 | BERT+sent2vec |
| Question-Answer categorization | QC-Science | R@20 | 0.95 | BERT+sent2vec |
| Question-Answer categorization | QC-Science | R@5 | 0.76 | BERT+GloVe |
| Question-Answer categorization | QC-Science | R@10 | 0.87 | BERT+GloVe |
| Question-Answer categorization | QC-Science | R@15 | 0.92 | BERT+GloVe |
| Question-Answer categorization | QC-Science | R@20 | 0.94 | BERT+GloVe |
| Question-Answer categorization | QC-Science | R@5 | 0.72 | Twin BERT |
| Question-Answer categorization | QC-Science | R@10 | 0.86 | Twin BERT |
| Question-Answer categorization | QC-Science | R@15 | 0.91 | Twin BERT |
| Question-Answer categorization | QC-Science | R@20 | 0.94 | Twin BERT |
| Question-Answer categorization | QC-Science | R@5 | 0.30 | Pretrained Sent BERT |
| Question-Answer categorization | QC-Science | R@10 | 0.40 | Pretrained Sent BERT |
| Question-Answer categorization | QC-Science | R@15 | 0.47 | Pretrained Sent BERT |
| Question-Answer categorization | QC-Science | R@20 | 0.52 | Pretrained Sent BERT |
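The results above are reported as R@k (Recall@k): the fraction of test questions whose correct taxonomy label appears among the k labels retrieved with the highest similarity. As a reading aid, here is a minimal sketch of that metric under a retrieval formulation; the similarity-matrix representation and names are illustrative assumptions, not the paper's evaluation code.

```python
def recall_at_k(sim_rows, correct_labels, k):
    """Recall@k for a retrieval task.

    sim_rows[i][j] holds the similarity of query i to label j;
    correct_labels[i] is the index of query i's true label.
    Returns the fraction of queries whose true label ranks in the top k.
    """
    hits = 0
    for row, correct in zip(sim_rows, correct_labels):
        # Label indices sorted by descending similarity, truncated to k.
        top_k = sorted(range(len(row)), key=lambda j: -row[j])[:k]
        if correct in top_k:
            hits += 1
    return hits / len(correct_labels)
```

Under this reading, the best TagRec variant retrieves the correct taxonomy label within its top 5 candidates for roughly 86% of QC-Science test questions.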

Related Papers

- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
- A Survey of Context Engineering for Large Language Models (2025-07-17)
- MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
- Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker (2025-07-16)
- Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
- Context-Aware Search and Retrieval Over Erasure Channels (2025-07-16)
- Seq vs Seq: An Open Suite of Paired Encoders and Decoders (2025-07-15)