Pedro Savarese, Michael Maire
We introduce a parameter sharing scheme in which different layers of a convolutional neural network (CNN) are defined by a learned linear combination of parameter tensors from a global bank of templates. Restricting the number of templates yields a flexible hybridization of traditional CNNs and recurrent networks. Compared to traditional CNNs, we demonstrate substantial parameter savings on standard image classification tasks, while maintaining accuracy. Although our parameter sharing scheme is defined via soft weights, in practice it often yields trained networks with near-strict recurrent structure; with negligible side effects, they convert into networks with actual loops. Training these networks thus implicitly involves discovery of suitable recurrent architectures. Though our method explores only the design space of recurrent links, our trained networks achieve accuracy competitive with those built using state-of-the-art neural architecture search (NAS) procedures. Our hybridization of recurrent and convolutional networks may also represent a beneficial architectural bias. Specifically, on synthetic tasks which are algorithmic in nature, our hybrid networks both train faster and extrapolate better to test examples outside the span of the training set.
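The core mechanism can be sketched in a few lines: each layer stores only a small coefficient vector, and its effective weight tensor is a linear combination of tensors from a shared template bank. The following NumPy sketch is illustrative only; the sizes, names, and random initialization are assumptions for demonstration, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (hypothetical): k templates shared across L layers.
k, L = 4, 6                       # number of templates, number of layers
shape = (16, 16, 3, 3)            # (out_ch, in_ch, kernel_h, kernel_w)

# Global bank of parameter templates, shared by all layers.
templates = rng.standard_normal((k, *shape))

# Per-layer mixing coefficients: the learned "soft" sharing weights.
alphas = rng.standard_normal((L, k))

def layer_weights(i):
    """Effective weights of layer i: linear combination of the templates."""
    return np.einsum('k,kabcd->abcd', alphas[i], templates)

W3 = layer_weights(3)             # weight tensor actually used by layer 3

# Parameter savings: k templates plus L*k coefficients,
# versus L independent full weight tensors in a traditional CNN.
shared_params = templates.size + alphas.size
unshared_params = L * int(np.prod(shape))
```

When a layer's coefficient vector collapses toward one-hot during training, several layers end up reusing (nearly) the same template, which is the near-strict recurrent structure the abstract describes: those layers can be folded into an explicit loop over one weight tensor.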
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Neural Architecture Search / AutoML | CIFAR-10 | Params (M) | 33.5 | Soft Parameter Sharing |
| Neural Architecture Search / AutoML | CIFAR-10 | Percentage error | 2.53 | Soft Parameter Sharing |
| Neural Architecture Search / AutoML | CIFAR-10 | Search Time (GPU days) | 0.7 | Soft Parameter Sharing |
| Image Classification | CIFAR-10 | Percentage correct | 97.47 | Shared WRN |
| Image Classification | CIFAR-100 | Percentage correct | 82.57 | Shared WRN |