Recursive Neural Networks: When the same set of weights is applied recursively over a structured input to produce a structured prediction, we get a kind of deep neural network called a recursive neural network. Recursive networks are non-linear adaptive models that can learn structured information, and they are quite complex in themselves.
Recursive neural networks excel with hierarchical data because of their tree-like structure: the tree combines nodes into progressively higher levels. Parent-child relationships are represented by weight matrices, and the same matrices are shared across all children of a node.
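The weight sharing described above can be sketched in a few lines of NumPy. This is a minimal, illustrative example, not a published model: the names `compose`, `W`, and `b`, the dimension, and the example words are all assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4                                          # dimensionality of every node vector
W = rng.standard_normal((DIM, 2 * DIM)) * 0.1    # ONE weight matrix, shared by all nodes
b = np.zeros(DIM)

def compose(left, right):
    """Combine two child vectors into a parent vector using the same shared weights."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

# Leaves are word vectors; internal nodes reuse compose() recursively.
the, cat, sat = (rng.standard_normal(DIM) for _ in range(3))
noun_phrase = compose(the, cat)            # ("the", "cat")
sentence = compose(noun_phrase, sat)       # (("the", "cat"), "sat")

print(sentence.shape)                      # every node, at every level, has the same dimension
```

Because `compose` always maps two `DIM`-vectors to one `DIM`-vector, it can be applied at any depth of the tree with the same parameters.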
What is Deep Learning?
Deep learning is an undeniably mind-blowing machine learning technique that teaches computers to do what comes naturally to humans: learn by example. It can be used with ease to predict the seemingly unpredictable. Researchers and engineers are busy creating artificial intelligence by combining artificial (non-biological) neural networks with insights from natural intelligence.
Deep learning, in short, goes far beyond machine learning and its algorithms, whether supervised or unsupervised. DL uses many layers of nonlinear processing units for feature extraction and transformation. It has revolutionized today's industries by demonstrating near-human-level accuracy in certain tasks: pattern recognition, image classification, voice or text decoding, and many more.
Deep Learning is a key technology:
- For voice control in mobile devices such as smartphones, TVs, and voice-command-enabled speakers.
- Behind driverless cars, enabling them to recognise a stop sign or to distinguish a pedestrian from a lamppost.
- Behind high-accuracy image processing, image classification, and speech recognition.
Deep learning has been getting lots of attention lately, and for good reason. It is achieving results that were not possible before. Business leaders and the developer community absolutely need to understand what it is, what it can do, and how it works.
What are Recursive Neural Networks?
Recursive neural networks are members of the deep neural network family. They are created by applying the same set of weights recursively over structured inputs, and the same operation is applied at every node. RNNs comprise a class of architectures that operate on structured inputs, particularly directed acyclic graphs.
Recursive Neural Networks: call it a deep tree-like structure. When the need is to parse a whole sentence, we use a recursive neural network; its tree-like topology allows branching connections and a hierarchical structure. A fair question is how recursive neural networks differ from recurrent neural networks, which is addressed below. We can divide recursive neural network approaches into two categories as shown below.
- Inner Approach – This approach usually conducts recursion inside the underlying graph and the objective is achieved usually by moving forward slowly around the edges of the graph.
- Outer Approach – This approach usually conducts recursion outside the underlying graph, aggregating information over progressively longer distances outward from each node.
RNNs are used to predict structured outputs over variable-size input structures, and sometimes a scalar prediction, by traversing a given structure in topological order. Recursive neural networks respond not only to the input itself but to its context as well, processing each element of the structure separately. RNNs were first introduced to meet the need to learn distributed representations of structure, such as logical terms.
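The bottom-up traversal and the scalar prediction at the root can be sketched as follows. All names here (`encode`, the nested-tuple tree layout, the scoring vector `u`) are illustrative assumptions for this sketch, not part of any library:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 4
W = rng.standard_normal((DIM, 2 * DIM)) * 0.1    # shared composition weights
u = rng.standard_normal(DIM)                      # scoring vector for the root output

def encode(node):
    """Nested tuples are internal nodes; NumPy arrays are leaf embeddings.
    The recursion visits children before parents, i.e. topological order."""
    if isinstance(node, tuple):
        left, right = (encode(child) for child in node)
        return np.tanh(W @ np.concatenate([left, right]))
    return node                                   # leaf: already a vector

leaves = [rng.standard_normal(DIM) for _ in range(3)]
tree = ((leaves[0], leaves[1]), leaves[2])        # a variable-size input structure
prediction = float(u @ encode(tree))              # a single scalar at the root
print(prediction)
```

The same `encode` function works unchanged on trees of any shape or size, which is exactly the "variable-size input structure" property described above.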
Recurrent neural networks are recursive artificial neural networks with a certain structure: that of a linear chain. Recursive neural networks, by contrast, operate on any hierarchical structure, combining child representations into parent representations; recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step. [As per Wikipedia]
- Question – How are recursive neural networks different from recurrent neural networks?
- Answer – Recurrent neural networks are in fact recursive neural networks with a particular structure: that of a linear chain.
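This "linear chain" relationship can be made concrete: folding a left-branching chain tree with a recursive network gives exactly the same computation as a recurrent update loop. The code below is an illustrative sketch with assumed names (`encode`, `tokens`); the weights are random, not trained.

```python
import numpy as np

rng = np.random.default_rng(2)
DIM = 4
W = rng.standard_normal((DIM, 2 * DIM)) * 0.1

def encode(node):
    """Recursive view: evaluate a binary tree of nested tuples bottom-up."""
    if isinstance(node, tuple):
        left, right = node
        return np.tanh(W @ np.concatenate([encode(left), encode(right)]))
    return node

tokens = [rng.standard_normal(DIM) for _ in range(5)]

# Build the degenerate, left-branching chain tree ((((t0,t1),t2),t3),t4).
chain = tokens[0]
for t in tokens[1:]:
    chain = (chain, t)
h_recursive = encode(chain)

# Recurrent view: the same update, read as "previous hidden state + new input".
h = tokens[0]
for x in tokens[1:]:
    h = np.tanh(W @ np.concatenate([h, x]))

print(np.allclose(h, h_recursive))   # prints True: identical computation
```

The two loops perform the same arithmetic in the same order, which is the sense in which a recurrent network is a recursive network restricted to a chain.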
RvNNs are a hierarchical kind of network with no time aspect to the input sequence; instead, the input is processed hierarchically, in a tree fashion. In other words, the recursive neural network is a generalization of the recurrent neural network. A fixed number of children is assigned to each tree node so that the same weights can be applied recursively. RvNNs are used to analyze full sentences.
A common application is sentiment analysis, where the network assesses the emotional content of phrases: NLP techniques detect the writer's mood and attitude in sentences, even when the sentiment is expressed through descriptive style. The goal is to identify and arrange constituents for syntactic examination, so that the model can distinguish positive sentences from negative ones.
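A tree-structured sentiment classifier can be sketched by putting a small softmax classifier on top of the root representation. This is a hedged illustration: the weights are random rather than trained, so the predicted label is arbitrary, and the names (`encode`, `W_cls`, the example words) are assumptions for this sketch. In practice both `W` and `W_cls` would be trained on labelled parse trees.

```python
import numpy as np

rng = np.random.default_rng(3)
DIM = 4
W = rng.standard_normal((DIM, 2 * DIM)) * 0.1
W_cls = rng.standard_normal((2, DIM))            # 2 classes: negative / positive

def encode(node):
    """Compose child vectors into parent vectors, bottom-up."""
    if isinstance(node, tuple):
        left, right = node
        return np.tanh(W @ np.concatenate([encode(left), encode(right)]))
    return node

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy sentence "movie was great" with a hypothetical parse ((movie, was), great).
words = {w: rng.standard_normal(DIM) for w in ["movie", "was", "great"]}
tree = ((words["movie"], words["was"]), words["great"])

probs = softmax(W_cls @ encode(tree))            # probability of each sentiment class
label = ["negative", "positive"][int(probs.argmax())]
print(label, probs.round(3))
```

A trained version of this idea can also attach the classifier to every internal node, giving a sentiment judgment for each constituent, not just the whole sentence.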
Recurrent vs Recursive Neural Networks
Recurrent and recursive neural networks are often confused, not least because both are commonly referred to by the identical abbreviation, RNN. They are not interchangeable, however: recursive networks can be understood as a generalization of recurrent networks, as both involve repeated application of the same weights, and in that sense they share common characteristics.
It is quite simple to see from the picture why it is called a recursive neural network: each parent node's children are simply nodes of the same kind as that node. So it is evidently a hierarchical network where there is no time aspect to the input sequence; the input is processed hierarchically, in a tree fashion. Recursive neural networks operate on any hierarchical structure, combining child representations into parent representations.
In general, the difference between recursive and recurrent networks is often blurred. Recurrent neural networks are chain-like structures, as they really don't branch, whereas recursive networks are more of a deep tree structure; recurrent networks therefore have difficulty dealing with tree-like inputs. When you parse a sentence (an NLP task), the natural approach is a tree-like topology, which involves branching connections, and on such structured inputs a recursive network can be more efficient than a plain feed-forward network.
As we now know, networks that operate on structured inputs are recursive. If we stack multiple recursive layers, the result can be called a deep recursive neural network. In a recurrent network, the weights are shared along the length of the sequence while the dimensionality of the representation remains constant. The reason is simple: weight sharing avoids position-dependent weights, so the model can handle sequences at test time whose lengths differ from those seen at training time.
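The point about weight sharing and variable lengths can be demonstrated directly. The sketch below is illustrative (random untrained weights, assumed name `run`): one shared `W` processes a sequence of any length, and the hidden dimensionality never changes.

```python
import numpy as np

rng = np.random.default_rng(4)
DIM = 4
W = rng.standard_normal((DIM, 2 * DIM)) * 0.1    # one matrix for every position

def run(sequence):
    """Apply the same weights at every time step; output size is always DIM."""
    h = np.zeros(DIM)
    for x in sequence:
        h = np.tanh(W @ np.concatenate([h, x]))
    return h

short = [rng.standard_normal(DIM) for _ in range(3)]    # a "train-time" length
long = [rng.standard_normal(DIM) for _ in range(10)]    # an unseen, longer length

print(run(short).shape, run(long).shape)   # both (4,): dimensionality stays constant
```

If each position had its own weight matrix instead, the model could not even be evaluated on a length it was never trained on.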
Principles of Recursive Neural Networks
Recurrent neural networks are, in fact, recursive neural networks. Because recursive networks are inherently complex, they have not yet been broadly adopted, and their learning phase is computationally expensive. Their outputs are produced by systematically applying a consistent set of weights to the structured inputs in a repetitive manner, and this happens identically at every node.
Recursive neural networks belong to the same family of models as deep neural networks, given that they can be seen as a modification of them. These computational models are suited for both classification and regression problems, being capable of solving supervised and unsupervised learning tasks. (Reference from “Artificial Neural Networks: ICANN 2009: 19th International Conference”)
Recursive Neural Networks are structured models that operate on directed acyclic graphs and are specifically designed to handle structured inputs.
Not Covered here
Topics we have not covered in this post, but which are critical for a stronger hands-on understanding of RNNs:
- Sequential Memory
- LSTMs and GRUs
Points to Note:
All credits, if any, remain with the original contributors. We have covered the basics of recursive neural networks: RNNs are all about modeling units in sequence, and they are a natural fit for natural language processing (NLP) tasks, though such tasks often struggle to find the best combination of CNN and RNN algorithms for extracting information.
Books + Other readings Referred
- Research through the open internet, news portals, white papers, and imparted knowledge via live conferences and lectures.
- Lab and hands-on experience of @AILabPage (Self-taught learners group) members.
- This useful pdf on NLP parsing with Recursive NN.
- Amazing information in this pdf as well.
Conclusion – I particularly think that getting to know the types of machine learning algorithms helps to form a clearer picture. The answer to the question "What machine learning algorithm should I use?" is always "It depends." It depends on the size, quality, and nature of the data, and on the objective behind interrogating it: as we torture data, more useful information comes out.
It also depends on how the math of the algorithm was translated into instructions for the computer you are using, and on how much time you have. To us at AILabPage, machine learning is a crystal-clear and simple task. It is not only for PhD aspirants; it's for you, us, and everyone.
======================= About the Author =======================
Read about Author at : About Me
Thank you all for spending your time reading this post. Please share your opinions, comments, criticism, and agreements or disagreements. For more details about posts, subjects, and relevance, please read the disclaimer.