LSTM Shortcuts in Convolutional Neural Networks

Last week, while reading a tutorial on handwriting recognition, I had an idea: the channels of a convolutional layer may carry sequential features that noticeably affect classification results, especially in character classification. The thought came from an analogy with human visual behavior. Just as our gaze moves along a coherent path and gathers information as the focus shifts, the connections between the convolutional and fully-connected layers in this work are filled with LSTM branches that act like a scanner. My goal was to test how these branches influence the classification result.

At the beginning of my research, I searched on Google for work combining LSTMs with convolution. Many people have already worked on this combination, and there are several networks built on LSTM and convolution, such as EEGNet, a network for electroencephalogram classification. So I shifted my focus to variations and other types of networks.

Eventually, I designed a network with an LSTM layer connected between the convolutional layers and the fully-connected layers. So far, I have completed five types of networks with different numbers of LSTM layers and parameters.
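To make the idea concrete, here is a minimal NumPy sketch of the core operation: treating the C output channels of a convolutional layer as a sequence of C timesteps, each flattened to an H*W input vector, and running a single LSTM pass over them. All names, shapes, and the gate layout are illustrative assumptions, not the actual implementation from this project.

```python
import numpy as np

def lstm_over_channels(feature_maps, Wx, Wh, b):
    """Run one LSTM pass over the channel axis of conv feature maps.

    feature_maps: (C, H, W) array; the C channels are treated as C
    timesteps, each flattened to an H*W input vector.
    Wx: (4*units, H*W), Wh: (4*units, units), b: (4*units,).
    Gate order in the stacked weights is assumed to be i, f, g, o.
    """
    C, H, W = feature_maps.shape
    units = Wh.shape[1]
    h = np.zeros(units)  # hidden state
    c = np.zeros(units)  # cell state
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(C):
        x = feature_maps[t].ravel()          # one channel = one timestep
        z = Wx @ x + Wh @ h + b              # all four gates at once
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g                    # cell update
        h = o * np.tanh(c)                   # new hidden state
    return h  # final hidden state, passed on to the fully-connected layers

# Usage: 8 channels of 5x5 feature maps, a 16-unit LSTM branch.
rng = np.random.default_rng(0)
maps = rng.normal(size=(8, 5, 5))
units, in_dim = 16, 25
h = lstm_over_channels(
    maps,
    rng.normal(scale=0.1, size=(4 * units, in_dim)),
    rng.normal(scale=0.1, size=(4 * units, units)),
    np.zeros(4 * units),
)
```

The final hidden state summarizes the whole channel sequence, which is what makes the branch behave like a "scanner" over the feature maps.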

I have already run experiments on architectures A, B, and D, as well as other variations based on type D. The code was implemented in TensorFlow, and I trained on a Tesla K80 GPU with cuDNN 7.0.
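Since the five network types differ mainly in the number of LSTM layers, the family can be sketched as a single parameterized builder in tf.keras. The layer sizes, input shape, and class count below are placeholder assumptions; this is a sketch of the architectural pattern, not the actual networks A through D.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(num_lstm_layers=1, lstm_units=64, num_classes=10):
    """Conv stack -> LSTM branch(es) over the channel axis -> dense head."""
    model = tf.keras.Sequential()
    model.add(layers.Input(shape=(28, 28, 1)))   # assumed character-image input
    model.add(layers.Conv2D(32, 3, activation="relu"))
    model.add(layers.MaxPooling2D())
    model.add(layers.Conv2D(64, 3, activation="relu"))
    model.add(layers.MaxPooling2D())
    # Reinterpret (H, W, C) feature maps as a length-C sequence of H*W vectors.
    h, w, c = model.output_shape[1:]
    model.add(layers.Permute((3, 1, 2)))
    model.add(layers.Reshape((c, h * w)))
    # Stack a variable number of LSTM layers between conv and dense parts.
    for i in range(num_lstm_layers):
        last = i == num_lstm_layers - 1
        model.add(layers.LSTM(lstm_units, return_sequences=not last))
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

model = build_model(num_lstm_layers=2)
out = model(tf.zeros((1, 28, 28, 1)))
```

Varying `num_lstm_layers` and `lstm_units` is one straightforward way to generate the kind of architecture family described above.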

Github
