Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Keyword Spotting on QUESST

Metric: MinCnxe (minimum normalized cross-entropy; lower is better — the table below is ordered from highest to lowest score, so the strongest systems appear last)
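For context on the metric: Cnxe measures the average log-loss of a detector's scores, normalized by the entropy of the target prior, so 1.0 means the scores carry no information beyond the prior and 0.0 means perfect detection. MinCnxe, as reported for QUESST, additionally minimizes this value over an affine recalibration of the scores. Below is a minimal sketch of the un-minimized Cnxe computation, assuming the scores are detection log-likelihood ratios and an effective target prior of 0.5; the function and parameter names are illustrative, not from any official scoring tool.

```python
import math

def cnxe(llr_target, llr_nontarget, p_target=0.5):
    """Normalized cross-entropy (Cnxe) of detection scores.

    1.0 = scores no more informative than the prior; 0.0 = perfect.
    MinCnxe further minimizes over an affine score recalibration,
    so MinCnxe <= Cnxe.
    """
    logit_prior = math.log(p_target / (1.0 - p_target))

    # Posterior P(target | score), treating the score as a log-likelihood ratio.
    def posterior(llr):
        return 1.0 / (1.0 + math.exp(-(llr + logit_prior)))

    # Average log-loss (in bits) on target and non-target trials.
    ce_tar = -sum(math.log2(posterior(s)) for s in llr_target) / len(llr_target)
    ce_non = -sum(math.log2(1.0 - posterior(s))
                  for s in llr_nontarget) / len(llr_nontarget)
    ce = p_target * ce_tar + (1.0 - p_target) * ce_non

    # Prior entropy: the cross-entropy of a system that ignores the audio.
    ce_prior = -(p_target * math.log2(p_target)
                 + (1.0 - p_target) * math.log2(1.0 - p_target))
    return ce / ce_prior
```

For example, a system that outputs zero log-likelihood ratios for every trial scores exactly 1.0 (uninformative), while confidently correct scores push Cnxe toward 0.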


Results

| # | Model | MinCnxe | Extra Data | Paper | Date | Code |
|---|-------|---------|------------|-------|------|------|
| 1 | CUNY [SMO+iSAX] (dev) | 0.9872 | No | - | - | - |
| 2 | CUNY [SMO+iSAX] (eval) | 0.987 | No | - | - | - |
| 3 | CUNY [Subseq+MFCC] (eval) | 0.9853 | No | - | - | - |
| 4 | CUNY [Subseq+MFCC] (dev) | 0.9823 | No | - | - | - |
| 5 | NNI Choi (for the development set) | 0.9595 | No | - | - | - |
| 6 | NNI non-filtered (for the development set) | 0.9571 | No | - | - | - |
| 7 | TUKE g-U late submission (eval) | 0.954 | No | - | - | - |
| 8 | TUKE p-S (eval) | 0.953 | No | - | - | - |
| 9 | TUKE g-U (eval) | 0.953 | No | - | - | - |
| 10 | TUKE g-U (dev) | 0.953 | No | - | - | - |
| 11 | TUKE g-U late submission (dev) | 0.951 | No | - | - | - |
| 12 | TUKE p-S (dev) | 0.947 | No | - | - | - |
| 13 | TUKE p-S late submission (dev) | 0.94 | No | - | - | - |
| 14 | TUKE p-S late submission (eval) | 0.94 | No | - | - | - |
| 15 | IIT-B (eval) | 0.9364 | No | - | - | - |
| 16 | TUKE g-zero (for the development set) | 0.934 | No | - | - | - |
| 17 | ELiRF SDTW (eval) | 0.9338 | No | - | - | - |
| 18 | BUT (g-LID) | 0.929 | No | - | - | - |
| 19 | GTM-UVigo Contrastive (eval) | 0.923 | No | - | - | Code |
| 20 | TUKE g-zero late submission (for the development set) | 0.922 | No | - | - | - |
| 21 | GTM-UVigo Contrastive (dev) | 0.918 | No | - | - | Code |
| 22 | IIT-B (dev) | 0.9082 | No | - | - | - |
| 23 | GTM-UVigo Primary (dev) | 0.905 | No | - | - | Code |
| 24 | GTM-UVigo Primary (eval) | 0.905 | No | - | - | Code |
| 25 | TUKE p-low (for the development set) | 0.892 | No | - | - | - |
| 26 | BUT [l-fea stack DTW 3w+slope] (dev) | 0.8801 | No | - | - | - |
| 27 | ELiRF SDTW-avg (eval) | 0.8751 | No | - | - | - |
| 28 | ELiRF SDTW (dev) | 0.8702 | No | - | - | - |
| 29 | ELiRF SDTW-avg (dev) | 0.8677 | No | - | - | - |
| 30 | GTM-UVigo Contrastive late submission (dev) | 0.864 | No | - | - | Code |
| 31 | BUT [l-fea stack DTW 2w+slope] (dev) | 0.8569 | No | - | - | - |
| 32 | TUKE p-low late submission (for the development set) | 0.854 | No | - | - | - |
| 33 | GTM-UVigo Contrastive late submission (eval) | 0.852 | No | - | - | Code |
| 34 | GTM-UVigo Primary late submission (dev) | 0.847 | No | - | - | Code |
| 35 | BUT [p-fea stack DTW] (dev) | 0.8426 | No | - | - | - |
| 36 | BUT [l-fea stack DTW+slope] (dev) | 0.8389 | No | - | - | - |
| 37 | GTM-UVigo Primary late submission (eval) | 0.838 | No | - | - | Code |
| 38 | BUT [l-fea stack DTW+slope+2w3w fusion] (dev) | 0.8321 | No | - | - | - |
| 39 | BUT [p-fea stack DTW] (eval) | 0.8263 | No | - | - | - |
| 40 | BUT [l-fea stack DTW+slope] (eval) | 0.8184 | No | - | - | - |
| 41 | BUT [l-fea stack DTW+slope+2w3w fusion] (eval) | 0.8124 | No | - | - | - |
| 42 | NS-DTW (for the development set, all the queries) | 0.807 | No | - | - | - |
| 43 | SPL-IT-UC [Hmean, no side] (dev) | 0.7893 | No | - | - | - |
| 44 | SPL-IT-UC [All, no side] (eval) | 0.7875 | No | - | - | - |
| 45 | SPL-IT-UC [Hmean, no side] (eval) | 0.7856 | No | - | - | - |
| 46 | SPL-IT-UC [All, no side] (dev) | 0.7816 | No | - | - | - |
| 47 | SPL-IT-UC [All + side-info] (eval) | 0.7809 | No | - | - | - |
| 48 | SPL-IT-UC [Hmean + side-info] (dev) | 0.78 | No | - | - | - |
| 49 | SPL-IT-UC [Hmean + side-info] (eval) | 0.7786 | No | - | - | - |
| 50 | SPL-IT-UC [All + side-info] (dev) | 0.7716 | No | - | - | - |
| 51 | NNI (dev) | 0.757 | No | - | - | - |
| 52 | NNI (eval) | 0.747 | No | - | - | - |
| 53 | NNI Symbolic (All Queries) | 0.7293 | No | - | - | - |
| 54 | NNI DTW (All Queries) | 0.6816 | No | - | - | - |
| 55 | BUT (AKWS-T3-cz) | 0.673 | No | - | - | - |
| 56 | CUHK (all the queries) System №1 | 0.659 | No | - | - | - |
| 57 | BUT (AKWS-cz) | 0.641 | No | - | - | - |
| 58 | GTTS-EHU p (for the development set) | 0.6353 | No | - | - | - |
| 59 | ELiRF Fusion (All Queries) | 0.6062 | No | - | - | - |
| 60 | ELiRF Fusion+Length (All Queries) | 0.5977 | No | - | - | - |
| 61 | SPL-IT (for the development set) | 0.5881 | No | - | - | - |
| 62 | CUHK (all the queries) System №2 | 0.585 | No | - | - | - |
| 63 | BUT (g-best_single) | 0.533 | No | - | - | - |
| 64 | BUT (g-bigfusionnoside) | 0.486 | No | - | - | - |
| 65 | BUT (p-bigfusion) | 0.461 | No | - | - | - |