Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Logical Reasoning on BIG-bench (Penguins In A Table)

Metric: Accuracy (higher is better)
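The leaderboard metric is plain accuracy over the task's questions. As a minimal sketch (assuming exact-match scoring of the model's answer strings, which is how most BIG-bench multiple-choice tasks are graded):

```python
def accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference answers."""
    assert len(predictions) == len(references), "one prediction per question"
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# e.g. accuracy(["Bernard", "5"], ["Bernard", "4"]) -> 0.5
```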


Results

| # | Model | Accuracy | Augmentations | Paper | Date | Code |
|---|-------|----------|---------------|-------|------|------|
| 1 | PaLM 2 (few-shot, k=3, CoT) | 84.9 | No | PaLM 2 Technical Report | 2023-05-17 | Code |
| 2 | PaLM 2 (few-shot, k=3, Direct) | 65.8 | No | PaLM 2 Technical Report | 2023-05-17 | Code |
| 3 | Chinchilla-70B (few-shot, k=5) | 48.7 | No | Training Compute-Optimal Large Language Models | 2022-03-29 | Code |
| 4 | PaLM 540B (few-shot, k=3) | 44.5 | No | BloombergGPT: A Large Language Model for Finance | 2023-03-30 | Code |
| 5 | Gopher-280B (few-shot, k=5) | 40.6 | No | Scaling Language Models: Methods, Analysis & Ins... | 2021-12-08 | Code |
| 6 | BLOOM 176B (few-shot, k=3) | 40.41 | No | BloombergGPT: A Large Language Model for Finance | 2023-03-30 | Code |
| 7 | Bloomberg GPT (few-shot, k=3) | 37.67 | No | BloombergGPT: A Large Language Model for Finance | 2023-03-30 | Code |
| 8 | GPT-NeoX (few-shot, k=3) | 33.56 | No | BloombergGPT: A Large Language Model for Finance | 2023-03-30 | Code |
| 9 | OPT 66B (few-shot, k=3) | 28.08 | No | BloombergGPT: A Large Language Model for Finance | 2023-03-30 | Code |
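The "Direct" and "CoT" labels in the model names refer to two few-shot prompting styles: Direct prompts show only question/answer pairs, while chain-of-thought (CoT) prompts also show the reasoning behind each exemplar answer. A minimal sketch of how such prompts are assembled (the exemplars and `build_prompt` helper below are invented for illustration, not the actual BIG-bench data or evaluation code; a real run draws k exemplars, here k=3, from the task itself):

```python
# Sketch of "Direct" vs chain-of-thought ("CoT") few-shot prompting, the two
# styles distinguished in the PaLM 2 rows above. Exemplars are invented for
# illustration only.
EXEMPLARS = [
    {"question": "Which penguin is the tallest?",
     "reasoning": "Comparing the height column, Bernard is 80 cm, more than any other penguin.",
     "answer": "Bernard"},
    {"question": "How many penguins are less than 8 years old?",
     "reasoning": "Only one penguin in the table has an age below 8.",
     "answer": "1"},
    {"question": "What is the name of the last penguin in the table?",
     "reasoning": "The final row of the table lists Vincent.",
     "answer": "Vincent"},
]

def build_prompt(exemplars, query, cot=False):
    """Concatenate few-shot exemplars and the query into a single prompt string."""
    parts = []
    for ex in exemplars:
        if cot:
            # CoT: show the intermediate reasoning before the final answer.
            parts.append(f"Q: {ex['question']}\n"
                         f"A: {ex['reasoning']} So the answer is {ex['answer']}.")
        else:
            # Direct: answer only, no reasoning shown.
            parts.append(f"Q: {ex['question']}\nA: {ex['answer']}")
    parts.append(f"Q: {query}\nA:")  # the model completes from here
    return "\n\n".join(parts)
```

The large gap between the CoT row (84.9) and the Direct row (65.8) for the same PaLM 2 model illustrates how much this prompting choice alone can move the metric on reasoning-heavy tasks.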