
MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks

Jingyao Li, Pengguang Chen, Bin Xia, Hong Xu, Jiaya Jia

2023-12-26 · Code Generation
Paper · PDF · Code (official)

Abstract

Large Language Models (LLMs) have showcased impressive capabilities in handling straightforward programming tasks. However, their performance tends to falter when confronted with more challenging programming problems. We observe that conventional models often generate solutions as monolithic code blocks, restricting their effectiveness in tackling intricate questions. To overcome this limitation, we present Module-of-Thought Coder (MoTCoder). We introduce a framework for MoT instruction tuning, designed to promote the decomposition of tasks into logical sub-tasks and sub-modules. Our investigations reveal that, through the cultivation and utilization of sub-modules, MoTCoder significantly improves both the modularity and correctness of the generated solutions, leading to substantial pass@1 improvements of 5.9% on APPS and 5.8% on CodeContests. MoTCoder also achieved significant improvements in self-correction capabilities, surpassing the current SOTA by 3.3%. Additionally, we provide an analysis of the relationship between problem complexity and optimal module decomposition and evaluate the maintainability index, confirming that the code generated by MoTCoder is easier to understand and modify, which can be beneficial for long-term code maintenance and evolution. Our code is available at https://github.com/dvlab-research/MoTCoder.
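As a purely hypothetical illustration of the modular style the abstract describes (the problem and function names below are invented for this sketch, not taken from the paper), a module-of-thought solution breaks a task into named sub-modules with clear contracts, rather than emitting one monolithic block:

```python
# Invented example problem: length of the longest run of equal
# adjacent characters in a string, solved in a modular style.

def run_lengths(s: str) -> list[int]:
    """Sub-module: collapse the string into a list of run lengths."""
    runs: list[int] = []
    prev, count = None, 0
    for ch in s:
        if ch == prev:
            count += 1
        else:
            if prev is not None:
                runs.append(count)
            prev, count = ch, 1
    if prev is not None:
        runs.append(count)
    return runs

def longest_run(s: str) -> int:
    """Top-level module: combine sub-module outputs."""
    return max(run_lengths(s), default=0)

print(longest_run("aabbbcc"))  # 3
```

Each sub-module can be reasoned about, tested, and reused independently, which is the property MoT instruction tuning is designed to encourage.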

Results

Task            | Dataset      | Metric              | Value | Model
Code Generation | APPS         | Competition Pass@1  | 27.84 | MoTCoder-32B-V1.5
Code Generation | APPS         | Interview Pass@1    | 44.49 | MoTCoder-32B-V1.5
Code Generation | APPS         | Introductory Pass@1 | 68.44 | MoTCoder-32B-V1.5
Code Generation | APPS         | Competition Pass@1  | 21.18 | MoTCoder-7B-V1.5
Code Generation | APPS         | Interview Pass@1    | 32.63 | MoTCoder-7B-V1.5
Code Generation | APPS         | Introductory Pass@1 | 54.26 | MoTCoder-7B-V1.5
Code Generation | CodeContests | Test Set pass@1     | 26.34 | MoTCoder-15B
Code Generation | CodeContests | Val Set pass@1      | 20.35 | MoTCoder-15B
Code Generation | CodeContests | Test Set pass@1     | 20.77 | MoTCoder-7B-v1.5
Code Generation | CodeContests | Val Set pass@1      | 16.72 | MoTCoder-7B-v1.5
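All values above are pass@1: the probability that a single sampled solution passes all tests for a problem. A common unbiased estimator from n samples per problem, of which c pass, is 1 − C(n−c, k)/C(n, k); the sketch below (the function name is my own) shows it for general k:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    given n total samples of which c are correct."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

# For k = 1 this reduces to the plain pass rate c / n.
print(pass_at_k(10, 3, 1))  # approximately 0.3
```

Per-problem estimates are then averaged over the benchmark's problems to give a single percentage like those in the table.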

Related Papers

- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
- Towards Formal Verification of LLM-Generated Code from Natural Language Prompts (2025-07-17)
- MERA Code: A Unified Framework for Evaluating Code Generation Across Tasks (2025-07-16)
- Scaling Up RL: Unlocking Diverse Reasoning in LLMs via Prolonged Training (2025-07-16)
- The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs (2025-07-15)
- Kodezi Chronos: A Debugging-First Language Model for Repository-Scale, Memory-Driven Code Understanding (2025-07-14)
- CodeJudgeBench: Benchmarking LLM-as-a-Judge for Coding Tasks (2025-07-14)
- CodeAssistBench (CAB): Dataset & Benchmarking for Multi-turn Chat-Based Code Assistance (2025-07-14)