MMNeedle
Multimodal Needle in a Haystack
Images · Texts · CC BY 4.0 · Introduced 2024-06-17
We introduce the MultiModal Needle-in-a-Haystack (MMNeedle) benchmark, specifically designed to assess the long-context capabilities of MLLMs. In addition to multi-image input, we employ image stitching to further increase the input context length, and we develop a protocol to automatically generate labels for sub-image-level retrieval. Essentially, MMNeedle evaluates MLLMs by stress-testing their ability to locate a target sub-image (needle) within a set of images (haystack) based on textual instructions and descriptions of image contents. This setup requires both an advanced understanding of extensive visual contexts and effective information retrieval within long-context image inputs.
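The stitching-and-labeling protocol described above can be sketched as follows. This is a minimal illustration, not the benchmark's actual code: the function names (`stitch_images`, `make_haystack`), the random-noise stand-ins for sub-images, and the `(image index, row, column)` label format are assumptions made for demonstration.

```python
import numpy as np

def stitch_images(sub_images, grid):
    """Stitch grid*grid equally sized (H, W, C) sub-images into one image,
    row-major: sub_images[r*grid + c] lands at cell (r, c)."""
    rows = [np.concatenate(sub_images[r * grid:(r + 1) * grid], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0)

def make_haystack(num_images, grid, h=32, w=32, seed=0):
    """Build num_images stitched images, plant one needle sub-image,
    and auto-generate its retrieval label (hypothetical protocol sketch)."""
    rng = np.random.default_rng(seed)
    haystack = []
    for _ in range(num_images):
        subs = [rng.integers(0, 256, (h, w, 3), dtype=np.uint8)
                for _ in range(grid * grid)]
        haystack.append(stitch_images(subs, grid))
    # Choose a random image and a random (row, col) cell for the needle.
    img_idx = int(rng.integers(num_images))
    row, col = int(rng.integers(grid)), int(rng.integers(grid))
    needle = np.zeros((h, w, 3), dtype=np.uint8)  # stand-in needle sub-image
    haystack[img_idx][row * h:(row + 1) * h, col * w:(col + 1) * w] = needle
    label = (img_idx, row, col)  # ground truth generated automatically
    return haystack, needle, label
```

With, say, 10 images and 4*4 stitching, each input image contains 16 sub-images, so the model must retrieve one needle among 160 candidates.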
Benchmarks
Long-Context Understanding / 1 Image, 4*4 Stitching, Exact Accuracy
Long-Context Understanding / 1 Image, 8*8 Stitching, Exact Accuracy
Long-Context Understanding / 1 Image, 2*2 Stitching, Exact Accuracy
Long-Context Understanding / 10 Images, 1*1 Stitching, Exact Accuracy
Long-Context Understanding / 10 Images, 2*2 Stitching, Exact Accuracy
Long-Context Understanding / 10 Images, 4*4 Stitching, Exact Accuracy
Long-Context Understanding / 10 Images, 8*8 Stitching, Exact Accuracy
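The "Exact Accuracy" metric used across these settings can be read as: a sample counts as correct only when the predicted location matches the ground truth exactly. A minimal sketch, assuming labels are (image index, row, column) triples (the function name and label format here are illustrative, not the benchmark's official API):

```python
def exact_accuracy(predictions, labels):
    """Fraction of samples whose predicted (image, row, col) triple
    exactly matches the ground-truth label; partial matches score 0."""
    assert len(predictions) == len(labels) and labels
    correct = sum(pred == gold for pred, gold in zip(predictions, labels))
    return correct / len(labels)
```

Under this strict criterion, predicting the right image but the wrong cell earns no credit, which is what makes the larger stitching settings (e.g. 10 images with 8*8 stitching) such a demanding stress test.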