Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

MSU NR VQA Database

MSU No-Reference Video Quality Assessment Database

Images · Videos · Introduced: 2022-11-22

The dataset was created for the video quality assessment problem. It comprises 36 clips from Vimeo, selected from more than 18,000 open-source clips with high bitrates (licensed CC-BY or CC0).

The clips include videos recorded by both professionals and amateurs. Almost half of the videos contain scene changes and high dynamism, and the ratio of synthetic to natural lighting is approximately 1 to 3.

  • Content type: nature, sport, close-ups of humans, gameplay, music videos, water streams or steam, CGI
  • Effects and distortions: shaking, slow motion, grain/noise, overly dark or bright regions, macro shooting, captions (text), extraneous objects on or near the camera lens
  • Resolution: 1920x1080, currently the most popular video resolution (more planned in the future)
  • Format: yuv420p
  • FPS: 24, 25, 30, 39, 50, 60
  • Video duration: mainly 10 seconds

Such content diversity helps simulate near-realistic conditions. The videos for the benchmark dataset were selected by clustering the candidate clips by spatio-temporal complexity, so that the final set forms a representative distribution.
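Such a selection step can be sketched as follows, assuming (hypothetically) that each candidate clip is described by a two-dimensional spatial-information/temporal-information (SI/TI) feature vector and that plain k-means is used for the clustering; the dataset's actual complexity features and clustering procedure are not specified here.

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Plain k-means; returns (labels, centers)."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct random points (fancy indexing copies).
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

def pick_representatives(features, k):
    """Select at most one clip per cluster: the one closest to its center."""
    labels, centers = kmeans(features, k)
    picks = []
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        if idx.size == 0:  # a cluster may end up empty
            continue
        d = np.linalg.norm(features[idx] - centers[j], axis=1)
        picks.append(int(idx[d.argmin()]))
    return sorted(picks)

# Hypothetical SI/TI features for 200 candidate clips; aim for 36 representatives.
rng = np.random.default_rng(1)
si_ti = rng.uniform(0, 100, size=(200, 2))
selected = pick_representatives(si_ti, k=36)
print(len(selected))
```

Picking the clip nearest to each cluster center keeps the selection spread across the whole complexity space instead of oversampling any one region.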

For compression we used 40 codecs spanning 10 compression standards (H.264, H.265, AV1, VVC, etc.). Each video was compressed at three target bitrates (1,000, 2,000, and 4,000 Kbps) under different real-life encoding modes: constant quality (CRF) and variable bitrate (VBR). This bitrate range simplifies the subjective comparison procedure, since quality differences become harder to distinguish visually at higher bitrates.
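A sketch of how such an encoding matrix could be generated, assuming FFmpeg-style encoders; the encoder names, clip names, and CRF value below are illustrative, not the dataset's actual 40-codec configuration:

```python
import itertools

# Hypothetical subset of encoders; the dataset's full list of 40 codecs
# across 10 standards is not reproduced here.
ENCODERS = {"libx264": "H.264", "libx265": "H.265", "libaom-av1": "AV1"}
BITRATES_KBPS = [1000, 2000, 4000]

def vbr_command(src, encoder, kbps):
    """ffmpeg invocation for a target-bitrate (VBR) encode."""
    return ["ffmpeg", "-i", src, "-c:v", encoder,
            "-b:v", f"{kbps}k", f"{src}_{encoder}_{kbps}k.mp4"]

def crf_command(src, encoder, crf):
    """ffmpeg invocation for a constant-quality (CRF) encode."""
    return ["ffmpeg", "-i", src, "-c:v", encoder,
            "-crf", str(crf), f"{src}_{encoder}_crf{crf}.mp4"]

# Full VBR grid: every clip x encoder x target bitrate.
clips = ["clip01.yuv", "clip02.yuv"]  # placeholder names
grid = [vbr_command(c, e, b)
        for c, e, b in itertools.product(clips, ENCODERS, BITRATES_KBPS)]
print(len(grid))  # 2 clips x 3 encoders x 3 bitrates = 18 commands
```

Enumerating the full Cartesian product up front makes it easy to distribute the encodes and to verify that every clip/codec/bitrate combination was produced.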

The subjective assessment involved pairwise comparisons on the crowdsourcing service Subjectify.us. To increase the reliability of the results, each pair of videos received at least 10 responses from participants. In total, 766,362 valid answers were collected from more than 10,800 unique participants.
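Pairwise crowdsourced votes of this kind are commonly aggregated into per-video quality scores with the Bradley-Terry model; here is a minimal sketch using the standard MM update (the toy win matrix below is illustrative, not the dataset's actual aggregation code):

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Estimate Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i, j] = number of times video i was preferred over video j.
    Uses the classic MM update; returns strengths normalized to sum to 1.
    """
    n = wins.shape[0]
    p = np.ones(n)
    total = wins + wins.T  # comparisons played between each pair
    for _ in range(iters):
        for i in range(n):
            denom = 0.0
            for j in range(n):
                if i != j and total[i, j] > 0:
                    denom += total[i, j] / (p[i] + p[j])
            if denom > 0:
                p[i] = wins[i].sum() / denom
        p /= p.sum()
    return p

# Toy example: video 0 wins most of its comparisons, video 2 the fewest.
wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]], dtype=float)
scores = bradley_terry(wins)
print(scores.argsort()[::-1])  # ranking from best to worst: [0, 1, 2]
```

The resulting strengths give each video a score on a common scale, which is what per-pair preference counts alone cannot provide.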

Benchmarks

  • Image Quality Assessment: SRCC, PLCC, KLCC
  • Video: SRCC, PLCC, KLCC, Type
  • Video Quality Assessment: SRCC, PLCC, KLCC, Type
  • Video Understanding: SRCC, PLCC, KLCC, Type

Statistics

Papers: 20
Benchmarks: 15

Links

Homepage

Tasks

  • Blind Image Quality Assessment
  • Image Quality Assessment
  • No-Reference Image Quality Assessment
  • Video
  • Video Quality Assessment
  • Video Understanding