Abstract
TabPFN [Hollmann et al., 2023], a Transformer model pretrained to perform in-context learning on new tabular classification problems, was presented at the last ICLR conference. To better understand its behavior, we treat it as a black-box generator of function approximations and examine the approximations it produces for a varied selection of training datasets. Exploring its learned inductive biases in this manner, we observe behavior that is by turns brilliant and baffling. We conclude this post with thoughts on how these results might inform the development, evaluation, and application of prior-data fitted networks (PFNs) in the future.
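As a concrete illustration of this probing setup, the sketch below fits TabPFN on a small 2D toy dataset and evaluates its predicted probabilities on a dense grid, which exposes the function approximation it induces from that training set. It is a minimal sketch assuming the `tabpfn` package's scikit-learn-style interface; the `make_moons` dataset and grid resolution are illustrative choices, not taken from the post.

```python
# Probe TabPFN's induced function approximation on a 2D toy problem.
# Assumes the `tabpfn` package exposes a scikit-learn-style TabPFNClassifier.
import numpy as np
from sklearn.datasets import make_moons
from tabpfn import TabPFNClassifier

# Small synthetic training set: TabPFN performs in-context learning, so the
# entire training set is supplied as context at prediction time.
X_train, y_train = make_moons(n_samples=100, noise=0.2, random_state=0)

clf = TabPFNClassifier(device="cpu")
clf.fit(X_train, y_train)  # stores the context; no gradient updates occur

# Evaluate the resulting decision surface on a dense grid to visualize
# the "generated function approximation" for this dataset.
xx, yy = np.meshgrid(np.linspace(-2.0, 3.0, 100), np.linspace(-1.5, 2.0, 100))
grid = np.column_stack([xx.ravel(), yy.ravel()])
probs = clf.predict_proba(grid)[:, 1].reshape(xx.shape)
```

Plotting `probs` (e.g. with a contour plot) over different training sets is one way to inspect the inductive biases discussed in the post.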