Swetha Mandava, Szymon Migacz, Alex Fit-Florea
Transformer-based models consist of interleaved feed-forward blocks, which capture content meaning, and relatively more expensive self-attention blocks, which capture context meaning. In this paper, we explore trade-offs and the ordering of these blocks to improve upon the current Transformer architecture, and propose PAR Transformer. It requires 35% less compute time than Transformer-XL, achieved by replacing ~63% of the self-attention blocks with feed-forward blocks, while retaining perplexity on the WikiText-103 language modelling benchmark. We further validated our results on the text8 and enwik8 datasets, as well as on the BERT model.
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Language Modelling | WikiText-103 | Test perplexity | 18.4 | PAR Transformer Large |
| Language Modelling | WikiText-103 | Test perplexity | 22.7 | PAR Transformer Base |
| Language Modelling | text8 | Bits per Character (BPC) | 1.18 | PAR Transformer 24B |
| Language Modelling | enwik8 | Bits per Character (BPC) | 1.11 | PAR Transformer 24B |
| Sentiment Analysis | SST-2 (binary classification) | Accuracy | 91.6 | PAR BERT Base |
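To make the block-ordering idea concrete, below is a minimal sketch in PyTorch (an assumption; the paper does not prescribe an implementation) that builds a block stack from a pattern string. The block definitions, dimensions, and the attention-first pattern are illustrative only: the pattern `"s" * 6 + "f" * 26` is a hypothetical arrangement chosen solely to match the abstract's figure of replacing ~63% of the self-attention blocks in a 16-layer interleaved baseline, and is not the ordering reported in the paper.

```python
import torch
import torch.nn as nn


class SelfAttentionBlock(nn.Module):
    """Pre-norm multi-head self-attention block (context meaning)."""

    def __init__(self, d_model, n_heads):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        h = self.norm(x)
        out, _ = self.attn(h, h, h, need_weights=False)
        return x + out  # residual connection


class FeedForwardBlock(nn.Module):
    """Pre-norm position-wise feed-forward block (content meaning)."""

    def __init__(self, d_model, d_ff):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x):
        return x + self.ff(self.norm(x))  # residual connection


def build_stack(pattern, d_model=512, n_heads=8, d_ff=2048):
    """Build a block stack from a pattern string: 's' = self-attention, 'f' = feed-forward."""
    make = {
        "s": lambda: SelfAttentionBlock(d_model, n_heads),
        "f": lambda: FeedForwardBlock(d_model, d_ff),
    }
    return nn.Sequential(*(make[c]() for c in pattern))


# Baseline: 16 interleaved (attention, feed-forward) pairs = 32 blocks.
baseline = build_stack("sf" * 16)

# Hypothetical PAR-style stack: the same 32 blocks, but only 6 self-attention
# blocks remain (10 of 16, ~63%, replaced by cheaper feed-forward blocks).
par_style = build_stack("s" * 6 + "f" * 26)

x = torch.randn(2, 128, 512)          # (batch, sequence, d_model)
assert par_style(x).shape == x.shape  # both stacks are shape-preserving
```

Because a feed-forward block scales linearly with sequence length while self-attention scales quadratically, shifting the block budget toward feed-forward blocks is what makes the reported 35% reduction in compute time plausible.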