Description
TabTransformer is a deep tabular data modeling architecture for supervised and semi-supervised learning. It is built on self-attention-based Transformers: the Transformer layers transform the embeddings of categorical features into robust contextual embeddings, which improves prediction accuracy.
As an overview, the architecture comprises a column embedding layer, a stack of Transformer layers, and a multi-layer perceptron (MLP). The contextual embeddings output by the Transformer layers are concatenated with the continuous features, and the result is fed into the MLP. All parameters are then learned end-to-end by minimizing the loss function.
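The flow described above can be sketched in PyTorch. This is a minimal, hypothetical illustration of the idea (per-column embeddings, a Transformer encoder over the categorical tokens, concatenation with layer-normalized continuous features, then an MLP), not the reference implementation; all layer sizes and names below are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class TabTransformerSketch(nn.Module):
    """Minimal sketch of the TabTransformer idea (not the official model)."""

    def __init__(self, cat_cardinalities, num_continuous, d_model=32,
                 n_heads=4, n_layers=2, mlp_hidden=64, n_classes=2):
        super().__init__()
        # Column embedding: one embedding table per categorical column.
        self.embeddings = nn.ModuleList(
            nn.Embedding(card, d_model) for card in cat_cardinalities
        )
        # Stack of Transformer layers producing contextual embeddings.
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=2 * d_model, batch_first=True,
        )
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Continuous features are layer-normalized before concatenation.
        self.cont_norm = nn.LayerNorm(num_continuous)
        in_dim = len(cat_cardinalities) * d_model + num_continuous
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, mlp_hidden),
            nn.ReLU(),
            nn.Linear(mlp_hidden, n_classes),
        )

    def forward(self, x_cat, x_cont):
        # Embed each categorical column: (batch, n_cat, d_model).
        tokens = torch.stack(
            [emb(x_cat[:, i]) for i, emb in enumerate(self.embeddings)],
            dim=1,
        )
        ctx = self.transformer(tokens)       # contextual embeddings
        flat = ctx.flatten(start_dim=1)      # (batch, n_cat * d_model)
        # Concatenate with continuous features and classify with the MLP.
        out = torch.cat([flat, self.cont_norm(x_cont)], dim=1)
        return self.mlp(out)

# Example: 3 categorical columns (cardinalities 5, 10, 3) and 4 continuous.
model = TabTransformerSketch(cat_cardinalities=[5, 10, 3], num_continuous=4)
x_cat = torch.randint(0, 3, (8, 3))   # indices valid for every column
x_cont = torch.randn(8, 4)
logits = model(x_cat, x_cont)
print(logits.shape)  # torch.Size([8, 2])
```

Minimizing a standard loss (e.g. cross-entropy) over `logits` then trains the embeddings, the Transformer stack, and the MLP jointly, which is the end-to-end learning the description refers to.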
Papers Using This Method
A Robust PPO-optimized Tabular Transformer Framework for Intrusion Detection in Industrial IoT Systems (2025-05-23)
DeepOFormer: Deep Operator Learning with Domain-informed Features for Fatigue Life Prediction (2025-03-28)
Application of Tabular Transformer Architectures for Operating System Fingerprinting (2025-02-13)
A Survey on Deep Tabular Learning (2024-10-15)
Gradient Boosting Decision Trees on Medical Diagnosis over Tabular Data (2024-09-25)
Towards a Transformer-Based Pre-trained Model for IoT Traffic Classification (2024-07-26)
Deep Learning with Tabular Data: A Self-supervised Approach (2024-01-26)
The GatedTabTransformer. An enhanced deep learning architecture for tabular modeling (2022-01-01)
TabTransformer: Tabular Data Modeling Using Contextual Embeddings (2020-12-11)