Mandate for Humans – Deep Learning. This is the second part of AILabPage's Deep Learning series. The focus here is on the basic terms that revolve and evolve around deep learning. Find the first part here – DeepLearning Basics : Part-1
This post is a work in progress and will be updated continuously. If you spot a mistake or think an important term is missing, please let me know in the comments or via email.
What we will cover here
Deep learning terminology can be quite overwhelming to newcomers. This blog post covers important aspects of deep learning, which can be defined as a set of techniques that use neural networks to simulate human decision-making skills.
- Deep learning computational models
- How deep learning learns
- Frequently used jargon in deep learning
- Deep learning algorithms – high-level view
- Implementation of deep learning models
- Deep learning limitations
- Notable use cases and applications
Some Basics Around Sciences – Mandate for Humans
Before going deeper into deep learning, the main agenda of this blog post, let's understand the basics of basics. We see two types of sciences around us almost every day:
- Hard sciences – physics, chemistry, biology, etc.
- A computer engineer can develop a system architecture and system model that later takes shape in reality and performs as claimed.
- Soft sciences – economics, political science, etc.
- A sales or marketing team can give an amazing presentation about how a certain product will perform over the next five years, and expect a good budget in return, yet there is a big chance it will fail.
So the difference is pretty clear: hard sciences can build complex models of the world that actually work, while soft sciences cannot. Deep learning falls under hard science.
What is Deep Learning (Helicopter View)
Deep learning is a technique, an artificial intelligence capability, that teaches computers to perform tasks and understand data the way it comes naturally to humans: by learning from examples. A key advantage of deep learning networks is that they often continue to improve as the size of your data increases.
Driverless cars, speech recognition, and image processing are key solutions powered by this technology. Cars gain the ability to tell the difference between a pedestrian and a lamppost; image processing helps recognize road signs and distinguish them based on their markings. Deep learning is getting a lot of attention lately, and for good reason: it is achieving results that were not possible before.
Deep learning Computational Models
The human brain is a deep and complex recurrent neural network. Deep learning allows computational models composed of multiple processing layers to learn representations of data with multiple levels of abstraction. In very simple words, and not to confuse anyone here, we can define both models as below.
- Feed-forward propagation – a type of neural network architecture where the connections are "fed forward" only, i.e. the values flow from input to hidden to output layers.
- Back propagation (a supervised learning algorithm) is a training algorithm with two steps:
- Feed the values forward.
- Calculate the error and propagate it back to the earlier layers.
In short, forward propagation is part of the back propagation algorithm but comes before back-propagating the error.
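The two steps above can be sketched in pure Python for a toy network with one hidden unit. The weights, learning rate, and training example here are made-up illustrative values, not from any particular library or dataset:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy network: 2 inputs -> 1 hidden unit -> 1 output (illustrative weights).
w_hidden = [0.5, -0.4]   # input -> hidden weights
w_out = 0.8              # hidden -> output weight
lr = 0.1                 # learning rate

x = [1.0, 0.5]           # one training example
target = 1.0

# Step 1: feed the values forward.
h = sigmoid(sum(wi * xi for wi, xi in zip(w_hidden, x)))
y = sigmoid(w_out * h)

# Step 2: calculate the error and propagate it back.
error = y - target                        # derivative of 0.5*(y - t)^2 w.r.t. y
delta_out = error * y * (1 - y)           # output-layer gradient
delta_hidden = delta_out * w_out * h * (1 - h)

# Gradient-descent weight updates.
w_out -= lr * delta_out * h
w_hidden = [wi - lr * delta_hidden * xi for wi, xi in zip(w_hidden, x)]
```

After this single forward/backward pass, re-running the forward pass gives an output slightly closer to the target, which is exactly what one training iteration should achieve.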
How Deep Learning Learns
Computational models of brain information processing, in vision and beyond, have largely been shallow architectures performing simple computations. Such models have dominated computational neuroscience to date and will likely keep doing so for the next couple of decades. Most deep learning methods use neural network architectures, which is why deep learning models are often referred to as deep neural networks.
Deep learning is based on multiple levels of features or representations in each layer, with the layers forming a hierarchy from low-level to high-level features. Traditional machine learning focuses on feature engineering, while deep learning focuses on end-to-end learning from raw features.
Deep learning is a machine learning method that produces predictive analytics outputs from a given set of inputs. It can use supervised or unsupervised learning to train the model. The term "deep" usually refers to the number of hidden layers in the neural network: traditional neural networks contain only 2-3 hidden layers, while deep networks can have 100 or even more.
Deep learning workflows create train/test splits of the data, wherever possible via cross-validation, load the training data into main memory, and compute a model from it. This is unlike Deep Dream, which can generate new images, or transform existing images and give them a dreamlike flavor, especially when applied recursively.
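A holdout train/test split, the simplest form of the splitting step mentioned above, can be sketched in plain Python. The 80/20 ratio and the fixed seed are illustrative choices; libraries such as scikit-learn provide equivalent utilities:

```python
import random

def train_test_split(rows, test_ratio=0.2, seed=42):
    """Shuffle the data and split it into train and test portions."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # fixed seed makes the split reproducible
    cut = int(len(rows) * (1 - test_ratio))
    return rows[:cut], rows[cut:]

data = list(range(100))            # stand-in for 100 labelled examples
train, test = train_test_split(data)
print(len(train), len(test))       # 80 20
```

Keeping the test portion untouched during training is what lets it estimate how the model will behave on unseen data.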
Frequently used jargons in deep learning
- Perceptron – a single-layer neural network and a linear classifier, used in supervised learning. Its computing structure is inspired by the design of the human brain: the algorithm takes a set of inputs and returns a set of outputs.
- Multilayer Perceptron (MLP)- A Multilayer Perceptron is a Feedforward Neural Network with multiple fully-connected layers that use nonlinear activation functions to deal with data which is not linearly separable.
- Deep Belief Network (DBN) – DBNs are a type of probabilistic graphical model that learn a hierarchical representation of the data in an unsupervised manner.
- Deep Dream – A technique invented by Google that tries to distill the knowledge captured by a deep Convolutional Neural Network.
- Deep Reinforcement Learning (DRL) – a powerful and exciting area of AI research, with potential applicability to a variety of problem areas. Other common terms in this area are DQN, Deep Deterministic Policy Gradients (DDPG), etc.
- Deep Neural Network (DNN) – a neural network with many hidden layers. There is no hard definition of how many layers a deep neural network must have; usually it is at least 4-5.
- Recurrent Neural Network (RNN) – a neural network for understanding context in speech, text, or music. The RNN allows information to loop through the network.
- Convolutional Neural Network (CNN) – a neural network for image recognition, processing, and classification. Object detection and face recognition are some areas where CNNs are widely used.
- Recursive Neural Network – a hierarchical kind of network where there is no time aspect to the input sequence, but the input has to be processed hierarchically in a tree fashion.
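The perceptron entry above describes the simplest of these models, and it is small enough to implement end to end. The sketch below trains one on the logical AND function (a linearly separable problem); the learning rate and epoch count are illustrative choices:

```python
def perceptron_train(samples, epochs=10, lr=0.1):
    """Train a single-layer perceptron (two weights + bias) on labelled samples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in samples:
            # Step activation: fire 1 if the weighted sum crosses the threshold.
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred
            # Perceptron learning rule: nudge weights toward the correct label.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND is linearly separable, so a perceptron can learn it.
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = perceptron_train(samples)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

Because the perceptron is a linear classifier, the same loop would never converge on XOR; that limitation is exactly what motivates the multilayer perceptron described above.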
Books + Other readings Referred
- Open Internet
- Hands on personal research work @AILabPage
Conclusion – Deep learning would not exist if the digital revolution had not made big data available. Its learning scope goes far beyond machine learning, and its terminology can be quite overwhelming to newcomers. The algorithms used can be supervised or unsupervised, and they use many layers of nonlinear processing units for feature extraction and transformation. Deep learning techniques have become popular for solving traditional Natural Language Processing problems like sentiment analysis through RNNs and image processing through CNNs. Artificial neurons can simply be described as a computational model of the human brain. The boundary between deep learning and machine learning is fuzzy, complex, and simple at the same time. Deep learning is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers.
============================ About the Author =======================
Read about Author at : About Me
Thank you all for spending your time reading this post. Please share your opinions, comments, critiques, agreements, or disagreements. For more details about posts, subjects, and their relevance, please read the disclaimer.
Categories: Artificial Intelligence