Description
MATE is a Transformer architecture designed to model the structure of web tables. It uses sparse attention so that each head efficiently attends to either the rows or the columns of a table: the head reorders the tokens by row or column index and then applies a windowed attention mechanism over that ordering. Unlike traditional self-attention, which is quadratic, MATE scales linearly with sequence length.
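The reorder-then-window idea can be sketched as follows. This is a hypothetical single-head simplification (queries, keys, and values share one projection, and the window size and function name are illustrative), not the paper's actual implementation:

```python
import numpy as np

def mate_head_attention(x, row_ids, col_ids, axis="row", window=4):
    """Sketch of one MATE-style attention head (illustrative, not the paper's code).

    Tokens are reordered by row or column index, then each token attends
    only to a fixed-size window of neighbours in that ordering, so the
    cost grows linearly with the number of tokens.
    """
    n, d = x.shape
    # Row heads sort tokens row-major; column heads sort column-major.
    if axis == "row":
        order = np.lexsort((col_ids, row_ids))
    else:
        order = np.lexsort((row_ids, col_ids))
    xs = x[order]

    out = np.zeros_like(xs)
    for i in range(n):
        # Windowed attention: only 2*window + 1 neighbours are visible.
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = xs[lo:hi] @ xs[i] / np.sqrt(d)  # query = key = value here
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[i] = weights @ xs[lo:hi]

    # Scatter results back to the original token positions.
    result = np.zeros_like(x)
    result[order] = out
    return result

# A 3x4 table flattened to 12 tokens with 8-dim embeddings.
rng = np.random.default_rng(0)
x = rng.standard_normal((12, 8))
rows = np.repeat(np.arange(3), 4)
cols = np.tile(np.arange(4), 3)
out = mate_head_attention(x, rows, cols, axis="column", window=2)
```

Because each token attends to a constant-size window, the inner loop does O(n) work overall, in contrast to the O(n^2) score matrix of dense self-attention.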
Papers Using This Method
MATE: LLM-Powered Multi-Agent Translation Environment for Accessibility Applications (2025-06-24)
SMART-PC: Skeletal Model Adaptation for Robust Test-Time Training in Point Clouds (2025-05-26)
Vision-Language Models Struggle to Align Entities across Modalities (2025-03-05)
LinGen: Towards High-Resolution Minute-Length Text-to-Video Generation with Linear Computational Complexity (2024-12-13)
Explore the Reasoning Capability of LLMs in the Chess Testbed (2024-11-11)
MATE: Meet At The Embedding -- Connecting Images with Long Texts (2024-06-26)
PTA: Enhancing Multimodal Sentiment Analysis through Pipelined Prediction and Translation-based Alignment (2024-05-23)
The Invalsi Benchmarks: measuring Linguistic and Mathematical understanding of Large Language Models in Italian (2024-03-27)
Masked Audio Text Encoders are Effective Multi-Modal Rescorers (2023-05-11)
CCDN: Checkerboard Corner Detection Network for Robust Camera Calibration (2023-02-10)
MATE: Masked Autoencoders are Online 3D Test-Time Learners (2022-11-21)
Learning Task Embeddings for Teamwork Adaptation in Multi-Agent Reinforcement Learning (2022-07-05)
A Meta-Learning Approach for Training Explainable Graph Neural Networks (2021-09-20)
MATE: Multi-view Attention for Table Transformer Efficiency (2021-09-09)