Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


PIQ23

Modality: Images

License: Provided that the User complies with the Terms of Use, the Provider grants a limited, non-exclusive, personal, non-transferable, non-sublicensable, and revocable license to access, download, and use the Database for internal and research purposes only, during the specified term. The User is required to comply with the Provider's reasonable instructions, as well as all applicable statutes, laws, and regulations.

Introduced: 2023-04-12

Year after year, the demand for ever-better smartphone photos continues to grow, in particular in the domain of portrait photography. Manufacturers thus use perceptual quality criteria throughout the development of smartphone cameras. This costly procedure can be partially replaced by automated learning-based methods for image quality assessment (IQA). Due to its subjective nature, it is necessary to estimate and guarantee the consistency of the IQA process, a characteristic lacking in the mean opinion scores (MOS) widely used for crowdsourcing IQA. In addition, existing blind IQA (BIQA) datasets pay little attention to the difficulty of cross-content assessment, which may degrade the quality of annotations. This paper introduces PIQ23, a portrait-specific IQA dataset of 5116 images of 50 predefined scenarios acquired by 100 smartphones, covering a wide variety of brands, models, and use cases. The dataset includes individuals of various genders and ethnicities who have given explicit and informed consent for their photographs to be used in public research. It is annotated by pairwise comparisons (PWC) collected from over 30 image quality experts for three image attributes: face detail preservation, face target exposure, and overall image quality. An in-depth statistical analysis of these annotations allows us to evaluate their consistency over PIQ23. Finally, we show through an extensive comparison with existing baselines that semantic information (image context) can be used to improve IQA predictions. The dataset, along with the proposed statistical analysis and BIQA algorithms, is available at: https://github.com/DXOMARKResearch/PIQ2023
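The abstract notes that PIQ23 is annotated by pairwise comparisons (PWC) rather than mean opinion scores. A standard way to turn pairwise preference counts into per-image quality scores is the Bradley-Terry model. The sketch below is an illustration of that general technique under a toy win matrix, not the paper's exact annotation-analysis procedure:

```python
# Hypothetical sketch: convert pairwise-comparison counts into per-image
# scores with a Bradley-Terry model (MM iteration). The data and function
# names are illustrative assumptions, not from the PIQ23 paper.

def bradley_terry(wins, n_items, iters=100):
    """Estimate item strengths from a count matrix where
    wins[i][j] = number of times image i was preferred over image j."""
    p = [1.0] * n_items
    for _ in range(iters):
        new_p = []
        for i in range(n_items):
            # Total wins of item i.
            num = sum(wins[i][j] for j in range(n_items) if j != i)
            # MM denominator: comparisons weighted by current strengths.
            den = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                      for j in range(n_items) if j != i)
            new_p.append(num / den if den > 0 else p[i])
        s = sum(new_p)
        p = [x * n_items / s for x in new_p]  # normalize to mean 1
    return p

# Toy annotations: image 0 is usually preferred over 1 and 2, etc.
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
scores = bradley_terry(wins, 3)
```

The resulting `scores` give a consistent ranking (here image 0 highest), which is one reason PWC protocols can be preferable to raw MOS for expert annotation.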

Benchmarks

3D: SRCC, PLCC, KRCC, MAE
3D Face Modelling: SRCC, PLCC, KRCC, MAE
3D Face Reconstruction: SRCC, PLCC, KRCC, MAE
Face Recognition: SRCC, PLCC, KRCC, MAE
Face Reconstruction: SRCC, PLCC, KRCC, MAE
Facial Recognition and Modelling: SRCC, PLCC, KRCC, MAE
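The benchmarks report rank correlation (SRCC, KRCC), linear correlation (PLCC), and mean absolute error (MAE) between predicted and ground-truth quality scores. Minimal pure-Python versions of these standard metrics (no tie handling, for illustration only) look like this:

```python
# Illustrative implementations of the IQA evaluation metrics used in the
# benchmarks above; in practice one would use a library such as scipy.
import math

def ranks(x):
    """Rank values from 1..n (ties not handled in this sketch)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def plcc(x, y):
    """Pearson linear correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def srcc(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    return plcc(ranks(x), ranks(y))

def krcc(x, y):
    """Kendall rank correlation (tau-a, no tie correction)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def mae(x, y):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

# Toy predictions vs. ground-truth scores (made-up values).
pred = [1.0, 2.0, 3.0, 4.0]
gt = [1.1, 1.9, 3.2, 3.8]
```

On this toy example the predictions are perfectly monotonic with the ground truth, so SRCC and KRCC are 1.0 while PLCC is slightly below 1.0 and MAE reflects the small absolute offsets.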

Statistics

Papers: 5
Benchmarks: 24

Links

Homepage

Tasks

3D
3D Face Modelling
3D Face Reconstruction
Blind Image Quality Assessment
Face Image Quality Assessment
Face Recognition
Face Reconstruction
Facial Recognition and Modelling
Image Quality Assessment