NAP: Neural architecture search with pruning
- Publisher: Elsevier
- Publication Type: Journal Article
- Citation: Neurocomputing, 2022, 477, pp. 85-95
- Issue Date: 2022-03-07
Closed Access
Filename | Description | Size
---|---|---
NAP Neural architecture search with pruning.pdf | Published version | 1.27 MB
This item is closed access and not available.
Neural Architecture Search (NAS) has attracted continuously increasing attention. Owing to their computational efficiency, gradient-based NAS methods such as DARTS have become the most popular framework for NAS tasks. Nevertheless, as the search iterates, the model derived by previous NAS frameworks becomes dominated by skip-connects, causing its performance to collapse. In this work, we present a novel approach to alleviate this issue, named Neural Architecture search with Pruning (NAP). Unlike prior differentiable architecture search works, our approach draws on ideas from network pruning. We first train an over-parameterized network that includes all candidate operations, and then propose a criterion to prune the network. Based on a newly designed relaxation of the architecture representation, NAP derives the most potent model by removing trivial and redundant edges from the whole network topology. Experiments show the effectiveness of the proposed approach: the model searched by NAP achieves state-of-the-art performance (2.48% test error) on CIFAR-10. Transferred to ImageNet, the model obtains a 25.1% test error with only 5.0M parameters, on par with modern NAS methods.
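The two-stage procedure the abstract describes (train an over-parameterized super-network containing all candidate operations, then prune trivial operations and redundant edges) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate operation set, the softmax-magnitude pruning criterion, and the edge-retention fraction are all assumptions made for the example.

```python
import numpy as np

# Assumed candidate operation set; "none" marks a trivial (removable) edge.
CANDIDATE_OPS = ["skip_connect", "sep_conv_3x3", "max_pool_3x3", "none"]

def softmax(x):
    """Numerically stable softmax over one edge's architecture weights."""
    e = np.exp(x - x.max())
    return e / e.sum()

def prune_network(alphas, edge_keep=0.5, op_threshold=0.2):
    """Derive a child architecture from super-network architecture weights.

    alphas: dict mapping edge (node pair) -> raw architecture weights,
    one weight per candidate op. For each edge we keep only the strongest
    op if its softmax weight clears `op_threshold` and it is not "none";
    then we keep the strongest `edge_keep` fraction of surviving edges.
    Both criteria are illustrative, not the paper's exact rule.
    """
    scored = {}
    for edge, raw in alphas.items():
        w = softmax(np.asarray(raw, dtype=float))
        best = int(w.argmax())
        if CANDIDATE_OPS[best] != "none" and w[best] >= op_threshold:
            scored[edge] = (CANDIDATE_OPS[best], float(w[best]))
    # Remove redundant edges: rank by weight, keep the top fraction.
    keep_n = max(1, int(round(edge_keep * len(alphas))))
    ranked = sorted(scored.items(), key=lambda kv: kv[1][1], reverse=True)
    return dict(ranked[:keep_n])

# Toy usage: three edges in a small cell.
alphas = {
    (0, 2): [0.1, 2.0, 0.3, 0.0],  # sep_conv_3x3 dominates
    (1, 2): [1.5, 0.2, 0.1, 0.0],  # skip_connect dominates
    (0, 1): [0.0, 0.1, 0.2, 2.5],  # "none" dominates -> edge pruned
}
derived = prune_network(alphas)
```

In this toy run, the edge dominated by "none" is dropped and each surviving edge keeps its single strongest operation, mirroring how pruning removes trivial and redundant edges from the full topology.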