CIFAR-10N
Real-World Human Annotations
Introduced 2021-10-22
This work presents two new benchmark datasets, CIFAR-10N and CIFAR-100N, which equip the training sets of CIFAR-10 and CIFAR-100 with real-world noisy labels collected from human annotators on Amazon Mechanical Turk.
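Each CIFAR-10 training image received three independent human annotations; the Aggregate label set is derived by majority vote over them. A minimal sketch of that aggregation step, assuming ties are broken uniformly at random (the tie-breaking rule here is an illustrative assumption):

```python
import random
from collections import Counter

def aggregate_label(annotations, rng=None):
    """Majority vote over one image's annotations (class indices 0-9).

    Ties are broken by a uniform random choice among the most frequent
    labels -- an assumed convention for this sketch.
    """
    rng = rng or random.Random(0)
    counts = Counter(annotations)
    top = max(counts.values())
    candidates = [label for label, c in counts.items() if c == top]
    return rng.choice(candidates)

# Three independent annotations for one image: two annotators said
# class 3, one said class 5, so the majority label is 3.
print(aggregate_label([3, 3, 5]))  # -> 3
```

The Random1/2/3 label sets correspond to taking a single annotator's submission per image, while Worst selects an incorrect annotation whenever one exists among the three.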
Related Benchmarks
CIFAR-10N-Aggregate / Image Classification / Accuracy (mean)
CIFAR-10N-Random1 / Image Classification / Accuracy (mean)
CIFAR-10N-Random2 / Image Classification / Accuracy (mean)
CIFAR-10N-Random3 / Image Classification / Accuracy (mean)
CIFAR-10N-Worst / Image Classification / Accuracy (mean)