Mini RNN

Grade 7: Mini RNN

In this post we will implement a simple version of an RNN; we will come back to CNNs in more detail later. : )

While a CNN overcomes the computational-complexity problem of traditional fully connected networks, it has a shortcoming of its own: it keeps no memory of previous events.

For instance, if a CNN robot sat in a cinema watching a movie, it most likely would not understand what is going on or follow the story. Unlike a human, it cannot remember earlier scenes to help it make sense of the current one and anticipate what comes next.

With that said, an RNN is not so mysterious: it can still be viewed as an ordinary network applied over and over along the sequence, with the same weights reused at every time step, much like a CNN reuses the same filter weights at every spatial position.
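To make this concrete, here is a minimal sketch of an RNN forward pass in NumPy. It assumes a tanh activation, and the weight names (W_xh, W_hh, b_h) are hypothetical, chosen only for illustration:

```python
import numpy as np

def rnn_step(x, h_prev, W_xh, W_hh, b_h):
    """One time step: the new hidden state mixes the current input
    with the previous hidden state -- this is the network's 'memory'."""
    return np.tanh(x @ W_xh + h_prev @ W_hh + b_h)

def rnn_forward(xs, h0, W_xh, W_hh, b_h):
    """Unrolling: the *same* weights are reused at every time step,
    much like a CNN reuses the same filter at every position."""
    h, hs = h0, []
    for x in xs:                       # xs: sequence of input vectors
        h = rnn_step(x, h, W_xh, W_hh, b_h)
        hs.append(h)
    return hs                          # hidden state at every step

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 4, 8, 5
W_xh = rng.standard_normal((input_size, hidden_size)) * 0.1
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
b_h = np.zeros(hidden_size)
xs = [rng.standard_normal(input_size) for _ in range(seq_len)]
hs = rnn_forward(xs, np.zeros(hidden_size), W_xh, W_hh, b_h)
print(hs[-1].shape)  # (8,)
```

Because the final hidden state depends on every earlier input, the network carries a summary of the past forward in time, which is exactly the memory a plain CNN lacks.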

For a human it is easy to predict what comes next in a simple greeting dialog, and the same holds for an RNN. But it is hard for a person to infer the ending of a novel from information given only at its beginning, and plain RNNs struggle with such long-range dependencies too. A refined variant of the RNN, the Long Short-Term Memory network (LSTM), aims to resolve exactly this.
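As a preview of where this series is heading, below is a minimal sketch of one LSTM step in NumPy, following the standard gate formulation. The weight layout (a single stacked matrix W and bias b) is an assumption made here for brevity, not the post's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W maps [x, h_prev] to the four stacked gate
    pre-activations (an illustrative layout, assumed here)."""
    z = np.concatenate([x, h_prev]) @ W + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)  # gates in (0, 1)
    g = np.tanh(g)                                # candidate values
    c = f * c_prev + i * g    # keep part of the old memory, add new
    h = o * np.tanh(c)        # expose a filtered view of the memory
    return h, c

# Tiny usage example with random weights.
n_x, n_h = 4, 8
rng = np.random.default_rng(1)
W = rng.standard_normal((n_x + n_h, 4 * n_h)) * 0.1
b = np.zeros(4 * n_h)
h, c = lstm_step(rng.standard_normal(n_x), np.zeros(n_h), np.zeros(n_h), W, b)
print(h.shape, c.shape)  # (8,) (8,)
```

The cell state c can carry information across many steps almost unchanged, which is what lets an LSTM hold on to the beginning of the story far longer than a plain RNN.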

[post status: in writing]