TTTTTackling WinoGrande Schemas
Sheng-Chieh Lin, Jheng-Hong Yang, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, Jimmy Lin
Abstract
We applied the T5 sequence-to-sequence model to tackle the AI2 WinoGrande Challenge by decomposing each example into two input text strings, each containing a hypothesis, and using the probability assigned to the "entailment" token as the score of each hypothesis. Our first (and only) submission to the official leaderboard yielded 0.7673 AUC on March 13, 2020, which was the best known result at that time and beat the previous state of the art by over five points.
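Below is a minimal sketch of this scoring scheme using the HuggingFace transformers API. The checkpoint name, the input template, and the use of the first sub-token of "entailment" are illustrative assumptions: the paper fine-tuned T5-3B on an entailment-style task, and those fine-tuned weights and exact templates are not reproduced here.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Hypothetical checkpoint: the paper used a fine-tuned T5-3B; "t5-small"
# just keeps this sketch cheap to run.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.eval()

# Simplification: use the first sub-token of "entailment", since the
# SentencePiece vocabulary may split the word into several pieces.
ENTAIL_ID = tokenizer.encode("entailment", add_special_tokens=False)[0]

def entailment_score(text: str) -> float:
    """Probability mass the model puts on the 'entailment' token at the
    first decoding step, used as the score for one hypothesis string."""
    inputs = tokenizer(text, return_tensors="pt")
    start = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=start).logits
    return torch.softmax(logits[0, 0], dim=-1)[ENTAIL_ID].item()

# Each WinoGrande example becomes two input strings, one per candidate;
# the candidate whose string scores higher is the prediction.
sentence = "The trophy doesn't fit in the suitcase because the _ is too big."
candidates = ("trophy", "suitcase")
scores = [entailment_score(sentence.replace("_", c)) for c in candidates]
prediction = candidates[scores.index(max(scores))]
print(prediction, scores)
```

Substituting each candidate into the blank turns the pronoun-resolution problem into two self-contained statements, so the model only ever has to judge whether a single complete sentence is entailed.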
Results
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 84.6 | TTTTT 3B (fine-tuned) |