Deep Learning

Deep Learning – Introduction to Recurrent Neural Networks

Recurrent Neural Networks – The main use of RNNs shows up when you use Google or Facebook: these interfaces are able to predict the next word you are about to type. RNNs have loops that allow information to persist, which is why they are considered fairly good for modelling sequence data. Recurrent neural networks are a linear architectural variant of recursive networks.

This post is a high-level overview meant to build a basic understanding. Don’t expect too much if you are a PhD or master’s degree student; we will focus only on the intuition behind RNNs. This post should give you enough comfort to start digging deeper into RNNs.

 

https://vinodsblog.com/2018/10/25/deep-learning-introduction-to-artificial-neural-networks/

 

 

Artificial Neural Networks – What Are They

In 1943, McCulloch and Pitts designed the first mathematical model of a neural network. Artificial neural networks were modelled on a simplified version of the neurons in the human brain.

As per Wikipedia, “A recurrent neural network is a class of artificial neural network where connections between nodes form a directed graph along a sequence.” This allows it to exhibit temporal dynamic behaviour for a time sequence.

There are several kinds of Neural Networks in deep learning.

As per AILabPage, artificial neural networks (ANNs) are “a complex computer code written with a number of simple, highly interconnected processing elements, inspired by the human biological brain structure, for simulating the human brain’s working and processing of data (information) models”.

Point to note: artificial neural networks are quite different from ordinary computer programs, so please don’t take the wrong perception from the above definition. Neural networks consist of input and output layers and at least one hidden layer.

Training neural networks can be hard, complex and time-consuming, for reasons well known to data scientists. One of the major reasons for this hardship is “weights”: in neural networks, the weights are highly interdependent across hidden layers. The three main steps to train a neural network, sketched in code below, are

  1. Do a forward pass and make a prediction.
  2. Compare the prediction to the ground truth using a loss function.
  3. Use the error value to do backpropagation.
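
As a concrete illustration, here is a minimal sketch of these three steps, assuming PyTorch; the model, data and hyperparameters are purely illustrative.

```python
# Minimal sketch of the three training steps (illustrative model and data).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(64, 10)      # a batch of dummy inputs
y_true = torch.randn(64, 1)  # dummy ground-truth targets

for epoch in range(100):
    y_pred = model(x)                # 1. forward pass: make a prediction
    loss = loss_fn(y_pred, y_true)   # 2. compare prediction to ground truth
    optimizer.zero_grad()
    loss.backward()                  # 3. backpropagate the error value
    optimizer.step()                 # adjust the interdependent weights
```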

The algorithm to train ANNs rests on two basic concepts: first, reduce the sum of squared errors to an acceptable value; second, have reliable data to train the network under supervision.

 

Recurrent Neural Networks – Introduction

Recurrent neural networks are not an especially old idea; they were developed in the 1980s.

  • RNNs take a time series as input and can provide a time series as output.
  • They have at least one connection cycle (sketched in code below).
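
To make the connection cycle concrete, here is a minimal NumPy sketch of the recurrence inside a vanilla RNN; all names and sizes are illustrative rather than a reference implementation.

```python
# The "connection cycle": the hidden state h feeds back into itself
# at every time step. NumPy sketch with illustrative sizes.
import numpy as np

hidden_dim, input_dim = 8, 4
W_h = np.random.randn(hidden_dim, hidden_dim) * 0.1  # recurrent weights
W_x = np.random.randn(hidden_dim, input_dim) * 0.1   # input weights
b = np.zeros(hidden_dim)

x_seq = np.random.randn(10, input_dim)  # a time series of 10 steps
h = np.zeros(hidden_dim)                # initial hidden state
outputs = []
for x_t in x_seq:
    h = np.tanh(W_h @ h + W_x @ x_t + b)  # previous state feeds back in
    outputs.append(h)                     # one output per time step
```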

One of the biggest points of uniqueness RNNs have is the Universal Approximation Property (UAP): they can approximate virtually any dynamical system. This unique property forces us to say recurrent neural networks have something magical about them.


There is a strong perception about the training part of recurrent neural networks: the training is assumed to be super complex, difficult, expensive and time-consuming. As a matter of fact, after being hands-on a few times in our lab, our response is just the opposite, so common wisdom here is the complete opposite of reality. The robustness and scalability of RNNs are super exciting compared to traditional neural networks and even convolutional neural networks.

Recurrent neural networks are special compared to other neural networks. Non-RNN APIs carry too many constraints and limitations (though at times RNNs suffer from some of the same). A non-RNN API takes

  • Input – a fixed-size vector: for example an “image” or a “character”
  • Output – a fixed-size vector: for example a matrix of class probabilities
  • Computation – a fixed number of layers / computational steps

Before we go any deeper, we need to answer: what kinds of problems can be solved with recurrent neural networks?

 

Real-life examples – Recurrent Neural Networks

RNNs show an excellent and dynamic ability to deal with various input and output types. Before we go deeper, let’s look at the real-life examples below.

  • Varying inputs & fixed outputs – Speech, text recognition & sentiment classification – In today’s time this can be a big relief for social media platforms, helping to kick out purely negative comments from people whose one motive is PHD (Pull Him/her Down) rather than helping anyone’s efforts. Classifying tweets or Facebook comments into positive and negative sentiment becomes easy here: the inputs have varying lengths, while the output is of a fixed length.
  • Fixed inputs & varying outputs – Image recognition (captioning) – This is about describing the content of an image. The image is a single fixed-size input, but the caption is a series or sequence of words of varying length as output: “kid riding a bike”, “children playing in a park”, “young girls playing football” or “two girls dancing”, etc.
  • Varying inputs & varying outputs – Machine translation – Translating one language to another word by word from a dictionary can be a tedious task for humans, but thanks to Google’s amazing online tool we can translate full text. The tool is powerful enough to take care of the sentiments, length and meaning of each language in context. This is the case of varying inputs as well as varying outputs.

As seen in the above cases, RNNs are used for mapping inputs to outputs of varying types and lengths; the underlying foundation of RNNs generalises across these applications. The first case (varying input, fixed output) is sketched in code below.
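
Here is what a minimal many-to-one sentiment classifier could look like, assuming PyTorch; the vocabulary size, dimensions and data below are illustrative assumptions.

```python
# Many-to-one sketch: a varying-length input sequence, a fixed-size output.
import torch
import torch.nn as nn

class SentimentRNN(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 2)  # positive / negative

    def forward(self, token_ids):
        x = self.embed(token_ids)           # (batch, seq_len, embed_dim)
        _, h_last = self.rnn(x)             # final hidden state of the sequence
        return self.out(h_last.squeeze(0))  # fixed-size output per sequence

model = SentimentRNN()
tweet = torch.randint(0, 5000, (1, 12))  # one 12-token "tweet" (dummy IDs)
logits = model(tweet)                    # shape (1, 2), fixed whatever the length
```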

 

Recurrent Neural Networks & Sequence Data

As we know by now, RNNs are considered fairly good for modelling sequence data. Let’s understand sequential data a bit: while playing cricket, we predict and run in the direction the ball is moving. In the same way, a recurrent network takes the current input it sees along with what it has perceived previously in time.

This happens with no guessing or calculation, because our brain is programmed so well that we don’t even realise why we run in the ball’s direction.

Now, if we look at a recording of the ball’s movement later, we will have enough data to understand and match our action. So this is a sequence: a particular order in which one thing follows another. With this information, we can see that the ball is moving, say, to the right. Sequence data can be obtained from

  • Audio files – Audio is considered a natural sequence. Audio clips can be broken down into an audio spectrogram and fed into an RNN.
  • Text files – Text is another form of sequence; text data can be broken into characters or words, as in search engines guessing your next word or character (see the sketch below).
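
As a tiny sketch of the text case, this is one illustrative way to break a string into a character sequence an RNN can consume; the encoding scheme here is an assumption, not the only option.

```python
# Turn text into a sequence of integer character IDs (illustrative encoding).
text = "deep learning"
vocab = sorted(set(text))
char_to_idx = {ch: i for i, ch in enumerate(vocab)}

sequence = [char_to_idx[ch] for ch in text]  # characters -> integer IDs
# Each ID would then be one-hot encoded or embedded and fed to the RNN
# step by step, e.g. to predict the next character from the ones seen so far.
```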

Can we now comfortably say, on the basis of the examples above, that RNNs are good at processing sequence data for predictions? RNNs are gaining attention and popularity for one core reason: they allow us to operate over sequences of vectors for input and output, not just fixed-size vectors. On the downside, RNNs suffer from short-term memory.

 

Use cases – Recurrent Neural Networks

Let’s understand some of the use cases of recurrent neural networks. Numerous exciting applications have become easier, more advanced and more fun because of RNNs. Some of them are listed below.

  • Music synthesis
  • Speech, text recognition & sentiment classification
  • Image recognition (captioning)
  • Machine Translation – Language translation
  • Chatbots & NLP
  • Stock predictions

The next step is to understand how to build and train recurrent neural networks (RNNs) and commonly used variants such as GRUs and LSTMs.


There are a lot of free and paid courses available on the internet. At AILabPage we also conduct hands-on classroom training in our labs for deep learning enthusiasts. These courses can help you solve natural language problems, including text synthesis. Ultimately you will have the opportunity to build a deep learning project with cutting-edge, industry-relevant content.

RNN models have been proven to perform extremely well on temporal data. They have several variants, including LSTMs (long short-term memory), GRUs (gated recurrent units) and bidirectional RNNs. Building models for natural language, audio files and other sequence data got a lot easier with sequence algorithms.
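
In PyTorch-style APIs these variants are near drop-in replacements for one another; the sketch below simply shows the constructors side by side, with illustrative dimensions.

```python
# The variants share the same interface shape (illustrative dimensions).
import torch.nn as nn

rnn   = nn.RNN(input_size=64, hidden_size=128, batch_first=True)
lstm  = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)  # adds a cell state
gru   = nn.GRU(input_size=64, hidden_size=128, batch_first=True)   # gated, no cell state
birnn = nn.RNN(input_size=64, hidden_size=128, batch_first=True,
               bidirectional=True)  # reads the sequence in both directions
```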

 

Vanishing and Exploding Gradient Problem

Deep neural networks have a major issue with gradients: they are very unstable, tending to either explode or vanish quickly in the earlier layers. The vanishing gradient problem emerged in the 1990s as a major obstacle to RNN performance: the weight adjustments meant to decrease the error become so small that the network ceases to learn at an early stage.


This problem was a major setback to the popularity of RNNs. Values in RNNs can explode or vanish for a simple reason: remembering previous values. The previous values feed into the current ones and cause them to keep increasing or decreasing until they take over the algorithm; in effect, an indefinite loop forms that brings the whole network to a halt.

For example, a neuron might get stuck in a loop where it keeps multiplying the previous number by a new one; the result runs off towards infinity if all the numbers are greater than one, or gets stuck at zero if any number is zero.
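
A toy Python sketch of this effect; the weights 0.5 and 1.5 are arbitrary choices, picked only to show both directions.

```python
# Repeated multiplication, as happens to gradients propagated through
# many RNN time steps, either vanishes or explodes.
w_small, w_large = 0.5, 1.5
v_small = v_large = 1.0
for step in range(50):
    v_small *= w_small  # shrinks towards zero -> "vanishing"
    v_large *= w_large  # blows up -> "exploding"

print(v_small)  # ~8.9e-16, effectively vanished
print(v_large)  # ~6.4e+08, exploded
```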

 

Not Covered here

The topics below are not covered in this post, but they are extremely critical and important to understand in order to get a stronger hands-on grip on RNNs.

  • Sequential memory
  • Backpropagation Through Time (BPTT) in a recurrent neural network
  • LSTMs and GRUs

 

Points to Note:

All credits, if any, remain with the original contributors. We have covered the basics of recurrent neural networks. RNNs are all about modelling units in sequence, which makes them a perfect fit for Natural Language Processing (NLP) tasks, though such tasks often struggle to find the best companion between CNN and RNN algorithms when looking for information.

 

Books + Other readings Referred

  • Research through open internet, news portals, white papers and imparted knowledge via live conferences & lectures.
  • Lab and hands-on experience of  @AILabPage (Self-taught learners group) members.
  • This useful pdf on NLP parsing with Recursive NN.
  • Amazing information in this pdf as well.

 

Feedback & Further Question

Do you have any questions about deep learning or machine learning? Leave a comment or ask your question via email; I will try my best to answer it.

 

Conclusion – I particularly think that getting to know the types of machine learning algorithms actually helps to see a somewhat clearer picture. The answer to the question “Which machine learning algorithm should I use?” is always “It depends.” It depends on the size, quality and nature of the data, and on the objective of the data torturing: the more we torture data, the more useful information comes out. It depends on how the math of the algorithm was translated into instructions for the computer you are using, and it depends on how much time you have. To us at AILabPage, machine learning is a crystal clear and simple task; it is not only for PhD aspirants but for you, us and everyone.

 

======================= About the Author =======================

Read about the author at: About Me

Thank you all for spending your time reading this post. Please share your opinions, comments, critiques, agreements or disagreements. For more details about posts, subjects and relevance, please read the disclaimer.

FacebookPage    ContactMe      Twitter

============================================================
