Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Spider 2.0: Evaluating Language Models on Real-World Enterprise Text-to-SQL Workflows

Fangyu Lei, Jixuan Chen, Yuxiao Ye, Ruisheng Cao, Dongchan Shin, Hongjin Su, Zhaoqing Suo, Hongcheng Gao, Wenjing Hu, Pengcheng Yin, Victor Zhong, Caiming Xiong, Ruoxi Sun, Qian Liu, Sida Wang, Tao Yu

2024-11-12 · Text-To-SQL · Code Generation
Paper · PDF

Abstract

Real-world enterprise text-to-SQL workflows often involve complex cloud or local data across various database systems, multiple SQL queries in various dialects, and diverse operations from data transformation to analytics. We introduce Spider 2.0, an evaluation framework comprising 632 real-world text-to-SQL workflow problems derived from enterprise-level database use cases. The databases in Spider 2.0 are sourced from real data applications, often containing over 1,000 columns and stored in local or cloud database systems such as BigQuery and Snowflake. We show that solving problems in Spider 2.0 frequently requires understanding and searching through database metadata, dialect documentation, and even project-level codebases. This challenge calls for models to interact with complex SQL workflow environments, process extremely long contexts, perform intricate reasoning, and generate multiple SQL queries with diverse operations, often exceeding 100 lines, which goes far beyond traditional text-to-SQL challenges. Our evaluations indicate that our code agent framework, based on o1-preview, successfully solves only 21.3% of the tasks, compared with 91.2% on Spider 1.0 and 73.0% on BIRD. Our results on Spider 2.0 show that while language models have demonstrated remarkable performance in code generation, especially in prior text-to-SQL benchmarks, they require significant improvement to achieve adequate performance for real-world enterprise usage. Progress on Spider 2.0 represents a crucial step towards developing intelligent, autonomous code agents for real-world enterprise settings. Our code, baseline models, and data are available at https://spider2-sql.github.io
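The abstract describes agents that must generate SQL whose results match the expected answer. A minimal sketch of execution-based checking, the common way text-to-SQL predictions are scored (the actual Spider 2.0 harness, its dialect handling, and its matching rules are defined in the paper; this illustration uses SQLite and a hypothetical `orders` table):

```python
import sqlite3

def execution_match(db, predicted_sql, gold_sql):
    """Return True if two queries yield the same result set on the database.
    Simplified, order-insensitive comparison; the real benchmark runs on
    systems such as BigQuery and Snowflake with its own matching rules."""
    cur = db.cursor()
    pred = sorted(cur.execute(predicted_sql).fetchall())
    gold = sorted(cur.execute(gold_sql).fetchall())
    return pred == gold

# Toy in-memory database standing in for an enterprise schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 25.5)])

print(execution_match(db,
                      "SELECT SUM(amount) FROM orders",
                      "SELECT 35.5"))  # True: identical single-row results
```

Real enterprise checks are harder than this sketch suggests: the paper notes queries can exceed 100 lines, span multiple dialects, and depend on project-level context.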

Results

Task             | Dataset    | Metric       | Value | Model
Semantic Parsing | Spider 2.0 | Success Rate | 17.03 | Spider-Agent + o1-preview
Semantic Parsing | Spider 2.0 | Success Rate | 10.13 | Spider-Agent + GPT-4o
Semantic Parsing | Spider 2.0 | Success Rate |  9.02 | Spider-Agent + Claude-3.5-Sonnet
Semantic Parsing | Spider 2.0 | Success Rate |  8.86 | Spider-Agent + GPT-4
Semantic Parsing | Spider 2.0 | Success Rate |  6.17 | Spider-Agent + Qwen2.5-72B
Semantic Parsing | Spider 2.0 | Success Rate |  5.22 | Spider-Agent + DeepSeek-V2.5
Semantic Parsing | Spider 2.0 | Success Rate |  2.53 | Spider-Agent + Gemini-Pro-1.5
Semantic Parsing | Spider 2.0 | Success Rate |  2.21 | Spider-Agent + Llama-3.1-405B
Text-To-SQL      | Spider 2.0 | Success Rate | 17.03 | Spider-Agent + o1-preview
Text-To-SQL      | Spider 2.0 | Success Rate | 10.13 | Spider-Agent + GPT-4o
Text-To-SQL      | Spider 2.0 | Success Rate |  9.02 | Spider-Agent + Claude-3.5-Sonnet
Text-To-SQL      | Spider 2.0 | Success Rate |  8.86 | Spider-Agent + GPT-4
Text-To-SQL      | Spider 2.0 | Success Rate |  6.17 | Spider-Agent + Qwen2.5-72B
Text-To-SQL      | Spider 2.0 | Success Rate |  5.22 | Spider-Agent + DeepSeek-V2.5
Text-To-SQL      | Spider 2.0 | Success Rate |  2.53 | Spider-Agent + Gemini-Pro-1.5
Text-To-SQL      | Spider 2.0 | Success Rate |  2.21 | Spider-Agent + Llama-3.1-405B
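Success Rate in the table above is, presumably, the percentage of the 632 workflow problems an agent solves end-to-end (the paper defines the exact protocol). Under that assumption, the metric is just:

```python
def success_rate(solved: int, total: int) -> float:
    """Percentage of benchmark tasks solved (assumed definition)."""
    return 100.0 * solved / total

# Illustrative only: an agent solving 108 of 632 tasks scores ~17.09%.
print(round(success_rate(108, 632), 2))
```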

Related Papers

CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
Towards Formal Verification of LLM-Generated Code from Natural Language Prompts (2025-07-17)
MERA Code: A Unified Framework for Evaluating Code Generation Across Tasks (2025-07-16)
Scaling Up RL: Unlocking Diverse Reasoning in LLMs via Prolonged Training (2025-07-16)
The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs (2025-07-15)
Kodezi Chronos: A Debugging-First Language Model for Repository-Scale, Memory-Driven Code Understanding (2025-07-14)
CodeJudgeBench: Benchmarking LLM-as-a-Judge for Coding Tasks (2025-07-14)
CodeAssistBench (CAB): Dataset & Benchmarking for Multi-turn Chat-Based Code Assistance (2025-07-14)