Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Optimizing Large Language Models for OpenAPI Code Completion

Bohdan Petryshyn, Mantas Lukoševičius

Published: 2024-05-24
Tasks: OpenAPI code completion, Code Completion, Prompt Engineering, Code Generation
Links: Paper, PDF, Code (official)

Abstract

Recent advancements in Large Language Models (LLMs) and their utilization in code generation tasks have significantly reshaped the field of software development. Despite the remarkable efficacy of code completion solutions in mainstream programming languages, their performance lags when applied to less ubiquitous formats such as OpenAPI definitions. This study evaluates the OpenAPI completion performance of GitHub Copilot, a prevalent commercial code completion tool, and proposes a set of task-specific optimizations leveraging Meta's open-source model Code Llama. A semantics-aware OpenAPI completion benchmark proposed in this research is used to perform a series of experiments through which the impact of various prompt-engineering and fine-tuning techniques on the Code Llama model's performance is analyzed. The fine-tuned Code Llama model reaches a peak correctness improvement of 55.2% over GitHub Copilot despite utilizing 25 times fewer parameters than the commercial solution's underlying Codex model. Additionally, this research proposes an enhancement to a widely used code infilling training technique, addressing the issue of underperformance when the model is prompted with context sizes smaller than those used during training. The dataset, the benchmark, and the model fine-tuning code are made publicly available.
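The infilling training technique the abstract refers to can be illustrated with a minimal sketch of Code Llama's fill-in-the-middle prompt format, where the model generates the span between a given prefix and suffix. The sentinel strings `<PRE>`, `<SUF>`, and `<MID>` follow Code Llama's published infilling convention (in practice the tokenizer maps them to dedicated special tokens); the helper name and the OpenAPI snippet are illustrative, not the paper's code.

```python
def build_infill_prompt(prefix: str, suffix: str) -> str:
    """Assemble a prefix-suffix-middle (PSM) infilling prompt: the model is
    asked to generate the missing span between prefix and suffix."""
    # Sentinels per Code Llama's infilling format; a real tokenizer would
    # encode these as special tokens rather than plain text.
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Example: mask the middle of an OpenAPI path item so the model completes it.
prefix = "paths:\n  /pets:\n    get:\n      summary: List all pets\n"
suffix = "      responses:\n        '200':\n          description: OK\n"
prompt = build_infill_prompt(prefix, suffix)
```

The paper's proposed enhancement targets exactly this setup: models trained with a fixed infilling context underperform when prompted with shorter contexts, which the document-splitting fine-tuning variant in the results below addresses.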

Results

All results are on the "OpenAPI completion refined" dataset; each row is reported under both the Code Completion and the OpenAPI code completion tasks.

| Model | Correctness, avg., % | Correctness, max., % | Validness, avg., % | Validness, max., % |
|---|---|---|---|---|
| Code Llama 7B, fine-tuned with document splitting | 34 | 42 | 69.1 | 76 |
| Code Llama 7B, fine-tuned at 4096 tokens | 32 | 45 | 63.1 | 84 |
| Code Llama 7B | 31.1 | 36 | 60.7 | 64 |
| GitHub Copilot | 29 | 29 | 68 | 68 |
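One way to read the Validness metric is that a completion counts only if the resulting document is still a structurally valid OpenAPI definition. The sketch below, assuming a JSON-format definition, checks only OpenAPI 3.0's required top-level fields; the paper's benchmark is semantics-aware and stricter, and `is_valid_openapi` is an illustrative helper, not the paper's implementation.

```python
import json

def is_valid_openapi(doc_text: str) -> bool:
    """Structural check: the text parses as JSON and carries OpenAPI 3.0's
    required top-level fields (openapi version string, info object, paths)."""
    try:
        doc = json.loads(doc_text)
    except json.JSONDecodeError:
        return False
    return (isinstance(doc, dict)
            and isinstance(doc.get("openapi"), str)
            and isinstance(doc.get("info"), dict)
            and isinstance(doc.get("paths"), dict))

# A minimal document that passes, and a truncated completion that fails.
good = '{"openapi": "3.0.0", "info": {"title": "Pets", "version": "1.0"}, "paths": {}}'
bad = '{"openapi": "3.0.0", "info": {"title": "Pets",'
```

A check like this explains why Validness can exceed Correctness in the table above: a completion may yield a parseable definition without matching the expected content.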

Related Papers

CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
Leveraging Language Prior for Infrared Small Target Detection (2025-07-17)
Emotional Support with LLM-based Empathetic Dialogue Generation (2025-07-17)
Towards Formal Verification of LLM-Generated Code from Natural Language Prompts (2025-07-17)
MERA Code: A Unified Framework for Evaluating Code Generation Across Tasks (2025-07-16)
Scaling Up RL: Unlocking Diverse Reasoning in LLMs via Prolonged Training (2025-07-16)
The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs (2025-07-15)
Kodezi Chronos: A Debugging-First Language Model for Repository-Scale, Memory-Driven Code Understanding (2025-07-14)