Harada-Osa-Kurose-Mukuta Lab.

Spiking Neural Networks

In recent years, neural network models have grown enormously, and the cost of training and inference keeps increasing. Reducing the computational cost of neural networks has therefore become an important issue. Spiking Neural Networks are neural network models computed on neuromorphic chips that mimic the biological brain. Whereas the GPUs used for standard neural network computation operate on continuous floating-point values, neuromorphic chips operate on sparse binary spike signals, so Spiking Neural Networks are expected to perform recognition at a lower computational cost than conventional methods. However, because they handle discrete binary signals, neuromorphic chips suffer from low expressive power and are difficult to train. The challenge is to overcome these problems and build recognition models that exploit the capabilities of neuromorphic chips.
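The sparse binary signals mentioned above come from spiking neuron dynamics. As a minimal sketch (the leak factor `tau`, threshold `v_th`, and hard-reset rule here are illustrative choices, not a specific chip's model), a leaky integrate-and-fire neuron accumulates input into a membrane potential and emits a binary spike whenever the potential crosses a threshold:

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic
# unit of a Spiking Neural Network. Parameter names are illustrative.

def lif_neuron(input_current, tau=0.9, v_th=1.0):
    """Simulate one LIF neuron over discrete time steps.

    input_current: list of floats (input at each time step)
    Returns a list of binary spikes (0 or 1), one per time step.
    """
    v = 0.0          # membrane potential
    spikes = []
    for i in input_current:
        v = tau * v + i          # leaky integration of the input
        if v >= v_th:            # threshold crossing -> emit a spike
            spikes.append(1)
            v = 0.0              # hard reset after spiking
        else:
            spikes.append(0)
    return spikes

# A constant input drives periodic spiking; the output is sparse and binary.
spikes = lif_neuron([0.5] * 10)
```

Because the output at every step is 0 or 1, downstream layers can replace multiplications with additions gated by spikes, which is where the expected efficiency on neuromorphic hardware comes from.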

Hardware implementations of spiking neuron models are being actively developed by large companies, such as Intel's Loihi and IBM's TrueNorth. Because Spiking Neural Networks deal with binary signals, ordinary backpropagation cannot be applied directly, and various learning methods have been studied. ANN-SNN conversion first trains a standard continuous-valued neural network and then converts it into a Spiking Neural Network. Although training is stable because it goes through a standard neural network, the converted Spiking Neural Network tends to require many time steps for recognition. Directly training Spiking Neural Networks, in contrast, can produce compact models, but the training itself is hard to stabilize. Much research has therefore focused on importing the stabilization techniques of continuous neural networks into Spiking Neural Networks.
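The intuition behind ANN-SNN conversion can be sketched with rate coding: a bounded activation value is approximated by the firing rate of an integrate-and-fire neuron driven by that value. The sketch below (the soft-reset rule and parameter names are illustrative assumptions, not a specific conversion algorithm) shows why many time steps are needed for an accurate approximation:

```python
def if_rate(a, T=100, v_th=1.0):
    """Approximate an activation a (assumed in [0, 1]) by the firing rate
    of an integrate-and-fire neuron driven by constant input a for T steps."""
    v, n_spikes = 0.0, 0
    for _ in range(T):
        v += a                   # integrate the constant input
        if v >= v_th:
            n_spikes += 1
            v -= v_th            # soft reset keeps the residual charge
    return n_spikes / T          # spike rate approximates the activation

# The rate converges to the original activation only as T grows,
# which is why converted SNNs tend to need long time windows.
approx = if_rate(0.3, T=100)
```

With small T the rate is heavily quantized (e.g. T=4 can only represent rates 0, 0.25, 0.5, 0.75, 1), which illustrates the accuracy/latency trade-off of conversion-based methods.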

Uniqueness and Achievements of this Laboratory

Generative Model using Spiking Neural Networks [Kamata+, AAAI 2022]

In [Kamata+, AAAI 2022], we proposed a method for image generation with Spiking Neural Networks. In contrast to ordinary image recognition, which outputs discrete semantic information about an image, image generation is a more difficult task because it must output a continuous value for every pixel. Existing methods had trouble producing fine-grained images and still required some continuous-valued modules.

In [Kamata+, AAAI 2022], we extended the Variational Autoencoder, one of the standard generative models, to the discrete setting, generating handwritten characters and face images with Spiking Neural Networks. The proposed model is built entirely from discrete signals: a separate Spiking Neural Network models the latent variable distribution, and the maximum mean discrepancy is used to compute the distance between distributions. The proposed model outperformed a model of the same size built with an ordinary continuous neural network.
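The maximum mean discrepancy mentioned above compares two sets of samples through a kernel, which makes it usable on binary spike vectors. The following is a minimal sketch in plain Python; the RBF kernel, bandwidth `gamma`, and the randomly generated "latent samples" are illustrative assumptions, not the settings of the proposed model:

```python
import math
import random

def rbf(x, y, gamma=1.0):
    """RBF kernel between two equal-length binary (spike) vectors."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between two sample sets.

    MMD^2 = E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')]
    (biased estimator: diagonal terms are included for simplicity).
    """
    kxx = sum(rbf(a, b, gamma) for a in X for b in X) / (len(X) ** 2)
    kyy = sum(rbf(a, b, gamma) for a in Y for b in Y) / (len(Y) ** 2)
    kxy = sum(rbf(a, b, gamma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx - 2 * kxy + kyy

random.seed(0)
# Hypothetical binary latent samples: X and Z share one firing probability,
# Y uses a very different one, so mmd2(X, Y) should exceed mmd2(X, Z).
X = [[int(random.random() < 0.8) for _ in range(8)] for _ in range(20)]
Y = [[int(random.random() < 0.2) for _ in range(8)] for _ in range(20)]
Z = [[int(random.random() < 0.8) for _ in range(8)] for _ in range(20)]
```

Because MMD only needs kernel evaluations on samples, it avoids the continuous density computations of the standard KL-divergence term, which is what allows the model to stay fully discrete.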

Future Directions

Combining Spiking Neural Networks with the latest high-performance generative models and large-scale recognition models is expected to yield recognition models that are both accurate and efficient. To this end, it is desirable to develop stable training methods.

Reference

  1. Hiromichi Kamata, Yusuke Mukuta, Tatsuya Harada, “Fully Spiking Variational Autoencoder”, In the 36th AAAI Conference on Artificial Intelligence (AAAI 2022), 2022.