Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Identification of the Relevance of Comments in Codes Using Bag of Words and Transformer Based Models

Sruthi S, Tanmay Basu

2023-08-11 · Text Classification · Feature Engineering · Information Retrieval · text-classification · Binary text classification

Paper · PDF · Code (official)

Abstract

The Forum for Information Retrieval Evaluation (FIRE) started a shared task this year for the classification of comments on different code segments. This is a binary text classification task whose objective is to identify whether the comments given for certain code segments are relevant or not. The BioNLP-IISERB group at the Indian Institute of Science Education and Research Bhopal (IISERB) participated in this task and submitted five runs for five different models. This paper presents an overview of the models and other significant findings on the training corpus. The methods involve different feature engineering schemes and text classification techniques. The performance of the classical bag-of-words model and of transformer-based models was explored to identify significant features in the given training corpus. We explored different classifiers, viz. random forest, support vector machine, and logistic regression, using the bag-of-words model. Furthermore, pre-trained transformer-based models such as BERT, RoBERTa, and ALBERT were also fine-tuned on the given training corpus. The performance of these models on the training corpus is reported, and the best five models were run on the given test corpus. The empirical results show that the bag-of-words model outperforms the transformer-based models; however, our runs do not perform reasonably well on either the training or the test corpus. The paper also discusses the limitations of the models and the scope for further improvement.
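The classical approach the abstract describes, a bag-of-words representation feeding a standard classifier such as logistic regression, can be sketched as follows. This is a minimal illustration using scikit-learn, not the authors' implementation; the toy comments and labels below are made up for demonstration and are not the FIRE shared-task data.

```python
# Bag-of-words text classification sketch: CountVectorizer turns each comment
# into a word-count vector, and a logistic regression classifier is trained
# on those vectors. Labels: 1 = relevant comment, 0 = not relevant.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative toy data only (not from the shared task).
comments = [
    "increments the counter by one",
    "returns the parsed configuration",
    "todo remove this later",
    "fix me",
]
labels = [1, 1, 0, 0]

# Chain vectorizer and classifier so raw strings go in, labels come out.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(comments, labels)

predictions = model.predict(comments)
print(predictions)
```

The same pipeline shape accommodates the other classifiers mentioned in the abstract by swapping `LogisticRegression` for `RandomForestClassifier` or `LinearSVC`; the transformer-based runs instead fine-tune a pre-trained encoder with a classification head.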

Related Papers

- Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
- Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
- From Chaos to Automation: Enabling the Use of Unstructured Data for Robotic Process Automation (2025-07-15)
- GNN-CNN: An Efficient Hybrid Model of Convolutional and Graph Neural Networks for Text Representation (2025-07-10)
- Temporal Information Retrieval via Time-Specifier Model Merging (2025-07-09)
- Efficiency-Effectiveness Reranking FLOPs for LLM-based Rerankers (2025-07-08)
- An analysis of vision-language models for fabric retrieval (2025-07-07)
- Graph Collaborative Attention Network for Link Prediction in Knowledge Graphs (2025-07-05)