A lightweight Max-Pooling method and architecture for Deep Spiking Convolutional Neural Networks

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
2020 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), 2020, pp. 209-212
Issue Date:
2020-12-29
Abstract:
The training of Deep Spiking Neural Networks (DSNNs) faces many challenges due to the non-differentiable nature of spikes. Converting a traditional Deep Neural Network (DNN) to its DSNN counterpart is currently one of the prominent solutions, as it leverages many state-of-the-art pre-trained models and training techniques. However, the conversion of the max-pooling layer is a non-trivial task. State-of-the-art conversion methods either replace the max-pooling layer with other pooling mechanisms or use a max-pooling method based on the cumulative number of output spikes. This incurs both memory storage overhead and increased computational complexity, as one inference in a DSNN requires many timesteps, and the number of output spikes after each layer must be accumulated. In this paper, we propose a novel max-pooling mechanism that is based not on the number of output spikes but on the membrane potential of the spiking neurons. Simulation results show that our approach preserves classification accuracy on the MNIST and CIFAR10 datasets. Hardware implementation results show that our proposed hardware block is lightweight, with an area cost of 15.3 kGEs at a maximum frequency of 300 MHz.
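To illustrate the idea described in the abstract, the following is a minimal sketch of membrane-potential-based max pooling: at each timestep, only the spike of the neuron with the highest membrane potential in each pooling window is forwarded. The function name, the integrate-and-fire assumption, and the 2x2 window are illustrative assumptions, not the paper's exact algorithm; note that no per-neuron spike-count accumulator is needed, which is the source of the memory savings the abstract claims.

```python
# Hypothetical sketch of membrane-potential-based max pooling for one
# timestep of a spiking layer. Assumes integrate-and-fire neurons and a
# k x k, stride-k window; names and gating rule are illustrative only.
import numpy as np

def mp_max_pool(spikes, potentials, k=2):
    """Forward only the spike of the neuron with the highest membrane
    potential in each k x k window.

    spikes:     (H, W) binary spike map for the current timestep
    potentials: (H, W) membrane potentials of the same neurons
    """
    H, W = spikes.shape
    out = np.zeros((H // k, W // k), dtype=spikes.dtype)
    for i in range(0, H, k):
        for j in range(0, W, k):
            win_v = potentials[i:i + k, j:j + k]
            win_s = spikes[i:i + k, j:j + k]
            # Select the neuron with the largest membrane potential;
            # its spike (0 or 1) becomes the pooled output.
            r, c = np.unravel_index(np.argmax(win_v), win_v.shape)
            out[i // k, j // k] = win_s[r, c]
    return out
```

Under this reading, a spike-count-based method would instead maintain a running sum of output spikes per neuron across all timesteps and pool on that sum, whereas the membrane potential is already held in each neuron's state and can be read out directly.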