Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


GraphCodeBERT: Pre-training Code Representations with Data Flow

Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, Ming Zhou

2020-09-17 · ICLR 2021

Tasks: Code Translation, Masked Language Modeling, Code Completion, Clone Detection, Type prediction, Code Summarization, Code Search, Source Code Summarization, Language Modelling

Paper · PDF · Code (official)

Abstract

Pre-trained models for programming languages have achieved dramatic empirical improvements on a variety of code-related tasks such as code search, code completion, and code summarization. However, existing pre-trained models regard a code snippet as a sequence of tokens, ignoring the inherent structure of code, which provides crucial code semantics and would enhance code understanding. We present GraphCodeBERT, a pre-trained model for programming languages that considers the inherent structure of code. Instead of a syntactic-level structure such as the abstract syntax tree (AST), we use data flow in the pre-training stage: a semantic-level structure that encodes the "where-the-value-comes-from" relation between variables. Such a semantic-level structure is compact and avoids the unnecessarily deep hierarchy of an AST, which makes the model more efficient. We develop GraphCodeBERT based on the Transformer. In addition to the masked language modeling task, we introduce two structure-aware pre-training tasks: one predicts code-structure edges, and the other aligns representations between source code and code structure. We implement the model efficiently with a graph-guided masked attention function that incorporates the code structure. We evaluate our model on four tasks: code search, clone detection, code translation, and code refinement. Results show that code structure and the newly introduced pre-training tasks improve GraphCodeBERT, which achieves state-of-the-art performance on the four downstream tasks. We further show that the model prefers structure-level attention over token-level attention in the code search task.
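The "where-the-value-comes-from" relation between variables can be illustrated with a toy extractor. This is a hedged sketch using Python's standard `ast` module, covering only simple assignments in Python code; the paper's actual data-flow extraction handles multiple programming languages and more statement forms.

```python
import ast

def data_flow_edges(source):
    """Collect simple "where-the-value-comes-from" edges: for each
    assignment, link the assigned variable to every variable read
    on the right-hand side."""
    tree = ast.parse(source)
    edges = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            # All variable names read in the right-hand-side expression.
            sources = [n.id for n in ast.walk(node.value)
                       if isinstance(n, ast.Name)]
            for target in node.targets:
                if isinstance(target, ast.Name):
                    for src in sources:
                        edges.append((target.id, src))
    return edges

print(data_flow_edges("a = b + c\nd = a * 2"))
# → [('a', 'b'), ('a', 'c'), ('d', 'a')]
```

The value of `a` comes from `b` and `c`, and the value of `d` in turn comes from `a`, which is exactly the kind of semantic edge the model consumes instead of the much deeper AST.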

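The graph-guided masked attention described in the abstract can be sketched as a boolean mask over a combined sequence of code tokens and variable (data-flow) nodes. The sketch below is a simplified illustration under the following assumptions, not the paper's exact implementation: code tokens attend to each other freely, a variable node and a code token attend to each other only when the variable is identified from that token, and two variable nodes attend to each other only along a data-flow edge. The function name and argument layout are hypothetical.

```python
import numpy as np

def graph_guided_mask(n_code, var_to_token, flow_edges):
    """Boolean attention mask over [code tokens | variable nodes].

    n_code:       number of code tokens
    var_to_token: dict mapping variable-node index -> index of the
                  code token the variable is identified from
    flow_edges:   list of (src_var, dst_var) data-flow pairs
    """
    n_var = len(var_to_token)
    n = n_code + n_var
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_code, :n_code] = True                        # code <-> code
    for v, t in var_to_token.items():
        mask[n_code + v, t] = mask[t, n_code + v] = True  # var <-> its token
    for u, v in flow_edges:
        mask[n_code + u, n_code + v] = True               # var <-> var along
        mask[n_code + v, n_code + u] = True               # a data-flow edge
    np.fill_diagonal(mask, True)                          # self-attention
    return mask

# Four code tokens, two variable nodes (var 0 from token 1, var 1 from
# token 3), one data-flow edge between the two variables.
mask = graph_guided_mask(4, {0: 1, 1: 3}, [(1, 0)])
```

Positions where the mask is `False` would receive an attention score of negative infinity before the softmax, so structure-irrelevant pairs contribute nothing, which is how the code structure is incorporated without changing the Transformer itself.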
Results

Task | Dataset | Metric | Value | Model
Program Synthesis | ManyTypes4TypeScript | Average Accuracy | 62.51 | GraphCodeBERT
Program Synthesis | ManyTypes4TypeScript | Average F1 | 60.57 | GraphCodeBERT
Program Synthesis | ManyTypes4TypeScript | Average Precision | 60.06 | GraphCodeBERT
Program Synthesis | ManyTypes4TypeScript | Average Recall | 61.08 | GraphCodeBERT
Code Search | CodeSearchNet | Go | 84.1 | GraphCodeBERT
Code Search | CodeSearchNet | JS | 71.1 | GraphCodeBERT
Code Search | CodeSearchNet | Java | 75.7 | GraphCodeBERT
Code Search | CodeSearchNet | Overall | 77.4 | GraphCodeBERT
Code Search | CodeSearchNet | PHP | 72.5 | GraphCodeBERT
Code Search | CodeSearchNet | Python | 87.9 | GraphCodeBERT
Code Search | CodeSearchNet | Ruby | 73.2 | GraphCodeBERT
Type prediction | ManyTypes4TypeScript | Average Accuracy | 62.51 | GraphCodeBERT
Type prediction | ManyTypes4TypeScript | Average F1 | 60.57 | GraphCodeBERT
Type prediction | ManyTypes4TypeScript | Average Precision | 60.06 | GraphCodeBERT
Type prediction | ManyTypes4TypeScript | Average Recall | 61.08 | GraphCodeBERT
