Metric: Top-1 Accuracy (higher is better)
| # | Model | Top-1 Accuracy (%) | Augmentations | Paper | Date | Code |
|---|---|---|---|---|---|---|
| 1 | ResNet-50, 80% sparse | 77.1 | No | Rigging the Lottery: Making All Tickets Winners | 2019-11-25 | Code |
| 2 | ResNet-50, 90% sparse | 76.4 | No | Rigging the Lottery: Making All Tickets Winners | 2019-11-25 | Code |
| 3 | ResNet-50, 80% sparse, 100 epochs | 76.0 | No | Sparse Training via Boosting Pruning Plasticity ... | 2021-06-19 | Code |
| 4 | ResNet-50, 80% sparse, 100 epochs | 75.8 | No | Do We Actually Need Dense Over-Parameterization?... | 2021-02-04 | Code |
| 5 | ResNet-50, 90% sparse, 100 epochs | 74.5 | No | Sparse Training via Boosting Pruning Plasticity ... | 2021-06-19 | Code |
| 6 | ResNet-50, 90% sparse, 100 epochs | 73.8 | No | Do We Actually Need Dense Over-Parameterization?... | 2021-02-04 | Code |
| 7 | MobileNet-v1, 75% sparse | 71.9 | No | Rigging the Lottery: Making All Tickets Winners | 2019-11-25 | Code |
| 8 | MobileNet-v1, 90% sparse | 68.1 | No | Rigging the Lottery: Making All Tickets Winners | 2019-11-25 | Code |
| 9 | SINDy | 6.0 | No | Sparse learning of stochastic dynamic equations | 2017-12-06 | Code |
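The two quantities the table reports for each entry can be computed directly from a model's predictions and its weight tensors. Below is a minimal NumPy sketch; the function names and the toy inputs are illustrative, not part of any paper's code:

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose highest-scoring class equals the true label."""
    return float(np.mean(np.argmax(logits, axis=1) == labels))

def sparsity(weights: np.ndarray) -> float:
    """Fraction of zero-valued weights, e.g. 0.8 for an '80% sparse' model."""
    return float(np.mean(weights == 0))

# Toy example: 2 samples, 3 classes.
logits = np.array([[0.1, 2.0, 0.3],
                   [1.5, 0.2, 0.1]])
labels = np.array([1, 0])
print(top1_accuracy(logits, labels))  # 1.0 (both predictions correct)

# Toy weight vector with 3 of 5 entries pruned to zero.
w = np.array([0.0, 0.5, 0.0, 0.0, -1.2])
print(sparsity(w))  # 0.6
```

Note that the table multiplies the accuracy fraction by 100, so 77.1 corresponds to a `top1_accuracy` of 0.771 over the evaluation set.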