Artificial Intelligence

Deep Learning – Mandate for Humans, Not Just Machines


 

Mandate for Humans – Deep Learning. This is the second part of AILabPage's Deep Learning Series. The focus here is on deep learning's basic terms, which revolve and evolve around it. Find the first part here – DeepLearning Basics : Part-1

This post is a work in progress and I will keep updating it. If you spot any mistake or think an important term is missing, please let me know in the comments or via email.

 

What we will cover here

Deep learning terminology can be quite overwhelming to newcomers. This blog post covers important aspects of deep learning, which can be defined as a set of techniques that use neural networks to simulate human decision-making skills.

  • Deep learning computational models
  • How deep learning works
  • Frequently used jargon in deep learning
  • Deep learning algorithms – high-level view
  • Implementation of deep learning models
  • Deep learning limitations
  • Notable use cases and applications

 

Some Basics Around Sciences – Mandate for Humans

Before going deeper into deep learning, the main agenda of this blog post, let us first understand the basics of basics. We see two types of sciences around us almost every day, i.e.

  • Hard sciences – physics, chemistry, biology, etc.
    • A computer engineer can develop a system architecture and system model that can actually take shape later in reality and work as claimed.
  • Soft sciences – economics, political science, etc.
    • Sales and marketing teams can give an amazing presentation about how a certain product will perform over the next five years and, in return, expect a good budget for it – yet there is a big chance they will fail.

So the difference is pretty clear: hard science has the ability to build complex models of the world that work, while soft science has no such ability. Deep learning falls under hard science.

 

Deep learning Computational Models

The human brain can be viewed as a deep and complex recurrent neural network. Deep learning allows computational models composed of multiple processing layers to learn representations of data with multiple levels of abstraction. In very simple words, and not to confuse anyone here, we can define both models as below.

  • Feed-forward propagation – a type of neural network architecture in which the connections are "fed forward" only, i.e. values flow from input to hidden to output layers.
  • Backpropagation (a supervised learning algorithm) – a training algorithm with two steps:
    • Feed the values forward.
    • Calculate the error and propagate it back to the layer before.

In short, forward propagation is part of the backpropagation algorithm but comes before the back-propagating step.
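The two steps above can be sketched in plain Python. This is a minimal illustrative example, not production code – the tiny 2-2-1 network size, the learning rate and the OR-gate training data are all assumptions made just for this demo:

```python
import math
import random

random.seed(0)  # deterministic initial weights for the sketch

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A tiny 2-2-1 network: two inputs, a hidden layer of two sigmoid
# neurons, and one sigmoid output neuron.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

# Toy training data: the logical OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
lr = 0.5

def forward(x):
    """Step 1: feed the values forward through the layers."""
    h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
    return h, o

def total_loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

loss_before = total_loss()
for _ in range(2000):
    for x, y in data:
        h, o = forward(x)
        # Step 2: compute the output error and propagate it back
        # to the layer before (chain rule through the sigmoids).
        d_o = (o - y) * o * (1 - o)
        d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w_o[j] -= lr * d_o * h[j]
        b_o -= lr * d_o
        for j in range(2):
            for i in range(2):
                w_h[j][i] -= lr * d_h[j] * x[i]
            b_h[j] -= lr * d_h[j]
loss_after = total_loss()
```

After training, the loss has dropped and the network reproduces the OR truth table – the error computed at the output has been pushed back through the hidden layer exactly as described in the two steps above.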

 

How Deep Learning Learns

Computational models of brain information processing, in vision and beyond, have largely been shallow architectures performing simple computations. Such models have dominated computational neuroscience to date and will probably keep doing so for the next couple of decades.

Deep learning is based on multiple levels of features or representations, with the layers forming a hierarchy from low-level to high-level features. Traditional machine learning focuses on feature engineering, whereas deep learning focuses on end-to-end learning from raw features.

AILabPage's – Deep Learning Series

Deep learning is a machine learning method. It produces predictive analytics outputs from a given set of inputs, and it can use supervised or unsupervised learning to train the model.

Deep learning creates train/test splits of the data wherever possible via cross-validation. It loads the training data into main memory and computes a model from it.
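As an illustration of the splitting step, a simple train/test split and k-fold cross-validation can be sketched as below. The helper names `train_test_split` and `k_fold_splits` are my own for this demo, not taken from any particular library:

```python
import random

def train_test_split(data, test_ratio=0.2, seed=42):
    """Shuffle the data and split it into train and test portions."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]

def k_fold_splits(data, k=5):
    """Yield (train, validation) pairs for k-fold cross-validation."""
    fold_size = len(data) // k
    for i in range(k):
        val = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        yield train, val
```

In practice you would train a model on each `train` portion and evaluate it on the held-out `val` portion, averaging the scores across folds; libraries such as scikit-learn provide hardened versions of both utilities.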

 

Frequently used jargon in deep learning

  • Perceptron – a single-layer neural network and a linear classifier, used in supervised learning. Its computing structure is loosely based on the design of the human brain: the algorithm takes a set of inputs and returns a set of outputs.
  • Multilayer Perceptron (MLP)- A Multilayer Perceptron is a Feedforward Neural Network with multiple fully-connected layers that use nonlinear activation functions to deal with data which is not linearly separable.
  • Deep Belief Network (DBN) – DBNs are a type of probabilistic graphical model that learn a hierarchical representation of the data in an unsupervised manner.
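As a small illustration of the first term above, the classic perceptron learning rule can be written in a few lines of Python. This is an assumed toy implementation – the function name, learning rate and AND-gate training data are just for the demo:

```python
def perceptron_train(samples, epochs=10, lr=1.0):
    """Train a single-layer perceptron; samples are (features, label) pairs
    with labels in {0, 1}. Returns the learned weights and bias."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            # Linear threshold unit: fire if the weighted sum exceeds 0.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # Perceptron update rule: nudge weights toward the target.
            for i in range(n):
                w[i] += lr * err * x[i]
            b += lr * err
    return w, b

def perceptron_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

Because the AND function is linearly separable, the rule converges after a handful of epochs; for data that is not linearly separable (like XOR) a single perceptron cannot succeed, which is exactly why the multilayer perceptron exists.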

 

Books + Other Readings Referred

  • Open Internet
  • Hands on personal research work @AILabPage

 

Conclusion – Deep learning would not exist if the digital revolution had not made big data available. Its scope goes much beyond machine learning. The algorithms used can be supervised or unsupervised, and they use many layers of nonlinear processing units for feature extraction and transformation. Deep learning techniques have become popular for solving traditional natural language processing problems like sentiment analysis through RNNs, and image processing through CNNs. An artificial neuron can simply be called a computational model of the human brain.

 

 

============================ About the Author =======================

Read about Author at : About Me

Thank you all for spending your time reading this post. Please share your opinions, comments, critiques, agreements or disagreements. For more details about posts, subjects and relevance, please read the disclaimer.


