SimAdapter is a module for explicitly learning knowledge from adapters. It learns the similarities between the source and target languages during fine-tuning by applying an attention mechanism over the adapters.
The detailed composition of SimAdapter is shown in the Figure. Taking the language-agnostic representations from the backbone model as the query, and the language-specific outputs from the multiple adapters as the keys and values, the final output of SimAdapter over attention is computed as (for notational simplicity, we omit the layer index below):
$$
\operatorname{SimAdapter}\left(\mathbf{z}, \mathbf{a}_{\left\{S_{1}, S_{2}, \ldots, S_{N}\right\}}\right)=\sum_{i=1}^{N} \operatorname{Attn}\left(\mathbf{z}, \mathbf{a}_{S_{i}}\right) \cdot\left(\mathbf{a}_{S_{i}} \mathbf{W}_{V}\right)
$$
where $\operatorname{SimAdapter}(\cdot)$ and $\operatorname{Attn}(\cdot)$ denote the SimAdapter and attention operations, respectively. Specifically, the attention operation is computed as:
$$
\operatorname{Attn}\left(\mathbf{z}, \mathbf{a}_{S_{i}}\right)=\operatorname{softmax}\left(\frac{\left(\mathbf{z} \mathbf{W}_{Q}\right)\left(\mathbf{a}_{S_{i}} \mathbf{W}_{K}\right)^{\top}}{\tau}\right)
$$
where $\tau$ is the temperature coefficient and $\mathbf{W}_{Q}, \mathbf{W}_{K}, \mathbf{W}_{V}$ are attention matrices. Note that while $\mathbf{W}_{Q}$ and $\mathbf{W}_{V}$ are initialized randomly, $\mathbf{W}_{K}$ is initialized with a diagonal of ones and the rest of the matrix with small weights, so as to retain the adapter representations. Furthermore, a regularization term is introduced to avoid drastic feature changes:
$$
\mathcal{L}_{\mathrm{reg}}=\left\|\mathbf{W}_{K}-\mathbf{I}\right\|_{2}^{2}
$$
where $\mathbf{I}$ is the identity matrix with the same size as $\mathbf{W}_{K}$.
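The fusion above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: it uses a single attention head, frame sequences of length `T`, hidden size `d`, and illustrative names (`sim_adapter`, `adapter_outs`); the initialization of `W_K` and the regularization term follow the description above.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sim_adapter(z, adapter_outs, W_Q, W_K, W_V, tau=1.0):
    """Fuse N language-specific adapter outputs with attention.

    z            : (T, d) language-agnostic backbone representations (query).
    adapter_outs : list of N (T, d) adapter outputs (keys and values).
    """
    out = np.zeros_like(z)
    for a in adapter_outs:
        # Attn(z, a) = softmax((z W_Q)(a W_K)^T / tau), shape (T, T)
        attn = softmax((z @ W_Q) @ (a @ W_K).T / tau)
        # Weighted sum of value-projected adapter features.
        out += attn @ (a @ W_V)
    return out

T, d, N = 4, 8, 3
rng = np.random.default_rng(0)
z = rng.normal(size=(T, d))
adapter_outs = [rng.normal(size=(T, d)) for _ in range(N)]

# W_Q and W_V are initialized randomly ...
W_Q = rng.normal(scale=0.02, size=(d, d))
W_V = rng.normal(scale=0.02, size=(d, d))
# ... while W_K starts as identity plus small weights,
# so the adapter representations are retained early in training.
W_K = np.eye(d) + rng.normal(scale=0.02, size=(d, d))

out = sim_adapter(z, adapter_outs, W_Q, W_K, W_V, tau=1.0)
# Regularization keeping W_K close to the identity: ||W_K - I||_2^2
reg = np.sum((W_K - np.eye(d)) ** 2)
```

In a trained model the regularization `reg` would be added to the task loss; here it simply stays small because `W_K` is initialized near the identity.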