Satya Krishna Gorti, Ilan Gofman, Zhaoyan Liu, Jiapeng Wu, Noël Vouitsis, Guangwei Yu, Jesse C. Cresswell, Rasa Hosseinzadeh
Text-to-SQL generation enables non-experts to interact with databases via natural language. Recent advances rely on large closed-source models like GPT-4, which present challenges in accessibility, privacy, and latency. To address these issues, we focus on developing small, efficient, and open-source text-to-SQL models. We demonstrate the benefits of sampling multiple candidate SQL generations and propose MSc-SQL, our method for critiquing them using associated metadata. Our sample-critiquing model evaluates multiple outputs simultaneously, achieving state-of-the-art performance among open-source models while remaining competitive with larger models at a much lower cost. Full code can be found at github.com/layer6ai-labs/msc-sql.
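The sample-then-critique idea from the abstract can be sketched in a few lines: sample several candidate SQL queries, attach metadata to each (e.g., an execution result or error), and show all candidates to a critic model in a single prompt so it can compare them directly. This is a minimal illustration, not the paper's implementation; the names `Candidate`, `select_with_critic`, and the toy critic are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Candidate:
    sql: str       # one sampled SQL generation
    metadata: str  # associated metadata, e.g. execution result or error message

def select_with_critic(candidates: List[Candidate],
                       critic: Callable[[str], int]) -> str:
    """Present all candidates (with their metadata) to the critic at once,
    and return the SQL of the candidate the critic selects."""
    prompt = "\n\n".join(
        f"Candidate {i}:\n{c.sql}\nMetadata: {c.metadata}"
        for i, c in enumerate(candidates)
    )
    return candidates[critic(prompt)].sql

# Toy critic standing in for a trained critique model: it would read the
# prompt and return the index of the best candidate. Here it always picks 1.
def toy_critic(prompt: str) -> int:
    return 1

cands = [
    Candidate("SELECT nam FROM users", "error: no such column: nam"),
    Candidate("SELECT name FROM users", "ok: 3 rows"),
]
print(select_with_critic(cands, toy_critic))  # -> SELECT name FROM users
```

A real critic would be a fine-tuned language model scoring the joint prompt; the key design point shown here is that all candidates are compared in one pass rather than scored independently.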
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Text-to-SQL (Semantic Parsing) | Spider | Execution Accuracy % (Test) | 84.7 | MSc-SQL |
| Text-to-SQL (Semantic Parsing) | BIRD (BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation) | Execution Accuracy % (Dev) | 65.6 | MSc-SQL |