Parameter-efficient fine-tuning on HellaSwag
Metric: Accuracy (%) (higher is better)
Results
| # | Model | Accuracy (%) | Augmentations | Paper | Date | Code |
|---|---|---|---|---|---|---|
| 1 | LLaMA2-7b | 76.68 | Yes | GIFT-SW: Gaussian noise Injected Fine-Tuning of ... | 2024-08-27 | Code |
| 2 | LLaMA2-7b | 76.67 | Yes | LoRA: Low-Rank Adaptation of Large Language Models | 2021-06-17 | Code |
| 3 | LLaMA2-7b | 76.27 | Yes | DoRA: Weight-Decomposed Low-Rank Adaptation | 2024-02-14 | Code |
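The top entries above all adapt a frozen LLaMA2-7b with low-rank updates. As a minimal sketch of the core idea behind LoRA-style methods (hypothetical dimensions and scaling values, not taken from any entry in the table): the pretrained weight `W` stays frozen, and only two small factors `A` and `B` are trained, with `B` initialized to zero so the adapter starts as a no-op.

```python
import numpy as np

# Hypothetical sizes chosen for illustration only.
d_in, d_out, r = 16, 16, 4
rng = np.random.default_rng(0)

W = rng.normal(size=(d_in, d_out))       # frozen pretrained weight
A = rng.normal(size=(d_in, r)) * 0.01    # trainable low-rank factor
B = np.zeros((r, d_out))                 # zero-init: adapter starts as a no-op
alpha = 8.0                              # assumed scaling hyperparameter

def lora_forward(x):
    """Frozen linear layer plus a scaled low-rank correction."""
    return x @ W + (alpha / r) * (x @ A @ B)

x = rng.normal(size=(2, d_in))
# With B = 0, the adapted output equals the frozen model's output exactly.
assert np.allclose(lora_forward(x), x @ W)
```

Only `A` and `B` (here `d_in*r + r*d_out = 128` values instead of `d_in*d_out = 256`) would receive gradients, which is what makes these methods parameter-efficient.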