Deep Learning – It can be termed the best example of the confluence of big data, big models, big computation, and big dreams. Deep learning is an algorithm that has no theoretical limitations on what it can learn; the more data and the more computation time (CPU power) you give it, the better it gets. (Geoffrey Hinton, Google)

Deep Learning – An Outlook

The true challenge for artificial intelligence is solving tasks that are easy for humans to perform but hard to describe formally. In short, deep learning is a subset of machine learning, but it approaches problems from a slightly different perspective.

Deep learning progressively extracts higher-level features from the raw input through layers; lower layers do a different job than upper layers, whereas machine learning usually focuses on teaching computers to process and learn from data. Deep learning shines on problems we solve intuitively, ones that feel automatic: recognizing spoken words, picking out lyrics in music, separating mixed songs, captioning an image, predicting the next frame in a video, or recognizing faces in images.

Deep learning uses multiple layers through which computers train themselves to process and learn from data. These are the kinds of tasks we work on at AILabPage Research.

Deep learning is used for tasks like facial recognition, generating and processing images, processing speech and language, and handwriting recognition. Deep learning also has an excellent success story where it combines with reinforcement learning to match human-level performance in games like Go, chess, Dota 2, and Quake III.

The Amazon Go store relies on image recognition powered by deep learning to detect what shoppers buy. But this approach also has downsides on the cost side: it needs very large datasets and many training cycles, and successful training requires expensive, high-powered computing and storage hardware.

Deep Learning Algorithms

Deep learning is a technique for implementing machine learning, also known as deep structured learning or hierarchical learning. At the same time, in my opinion, it is misleading to simply call deep learning "machine learning": the technique can pursue goals that do not necessarily come out of the same mold.

Deep learning’s main drivers are artificial neural networks, also called neural networks or neural nets. There are also specialized versions, such as convolutional neural networks and recurrent neural networks, which address particular problem domains. Two of the best use cases for deep learning are correspondingly distinctive: image processing, and text and speech processing, both built on deep neural networks.
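To make the distinction between the two specialized architectures concrete, here is a minimal sketch (not from the post, and greatly simplified) of the core operation behind each: a convolution slides a small filter over spatial data, while a recurrent step folds a sequence into a running state. The kernel values and weights below are arbitrary placeholders for illustration.

```python
import math

def conv1d(signal, kernel):
    """The CNN idea: slide a small kernel over the input (valid padding, stride 1)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def rnn_step(h, x, w_h=0.5, w_x=1.0):
    """The RNN idea: a new state mixes the old state with the new input."""
    return math.tanh(w_h * h + w_x * x)

# A [1, -1] kernel responds wherever the signal changes level (edge detection).
edges = conv1d([0, 0, 1, 1, 1, 0], [1, -1])

# Fold a short sequence into a single hidden state, one step at a time.
state = 0.0
for x in [0.2, 0.4, 0.6]:
    state = rnn_step(state, x)
```

Real networks stack many such operations with learned weights; the point here is only that CNNs exploit spatial locality while RNNs carry context along a sequence.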

In practice, deep learning methods, specifically recurrent neural network (RNN) models, are used for complex predictive analytics such as share price forecasting, which consists of several stages. Machine learning more broadly also includes decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others.

Deep learning is the first class of algorithms that is truly scalable: performance just keeps getting better as we feed the algorithms more data. Speech, text, and image processing can give a robot a strong start, and trigger-based actions make it even better. To approach ASI status, such a system would have to pass basic tests, starting with the Turing test and, far more ambitiously, acquiring a college degree, working as an employee for at least 20 years, and doing well enough to earn promotions.

Deep Learning is Not Machine Learning

Deep learning and machine learning are not the same, so the terms ML and DL can't be used interchangeably. Deep learning exists as a distinct field for various reasons: its limits, its potential, its results, and so on. It can even be called a dream enabler of machine learning. The major point where DL differs from ML is its working style.

  • ML works from past and present figures and then takes an educated guess about the future, whereas DL goes well beyond this guesswork.
  • DL uses data patterns to make decisions and predictions in the real world, for example in healthcare applications involving genomics and preterm births.

In short, deep learning is machine learning, but in a serious discussion you still can't say so, because it goes much further; just as calling every layer of the Earth "Earth" is wrong, since its four layers carry different names for a reason.

Deep learning takes feature engineering to the next level by automating it: there are deep learning methodologies that learn directly from raw data and map it to the intended goals, or that uncover hidden themes in large collections of documents using topic modeling. We will address these questions in subsequent blog posts. Artificial intelligence can be seen as applied machine learning, deep learning, or any other technique used to solve actual problems.

Why Deep Learning

If supremacy is the basis for popularity, then deep learning is surely almost there (at least for supervised learning tasks). Deep learning attains the highest rank in accuracy when trained with huge amounts of data. Deep learning algorithms can also take in highly unorganized, messy, and unlabeled data, such as video, images, audio recordings, and text.

A good analogy is that the deep learning models are the rocket engine, and the huge amount of data we can feed these algorithms is the fuel. (Andrew Ng)

One of the biggest limitations of deep learning is that it requires high-end machines, in contrast to traditional machine learning algorithms. GPUs have become a basic, almost foregone requirement for running any deep learning algorithm.

There are several advantages to using deep learning over traditional machine learning algorithms. Deep learning outperforms when the data size is large; with small datasets, traditional machine learning algorithms remain preferable.

  • Knowing the unknown – DL techniques outshine others where there is a lack of domain understanding for feature introspection, or a lack of feature-engineering expertise.
  • Nothing is too complex – for complex problems such as image, video, or voice recognition and natural language processing, DL works like a charm.


On the other hand, when it comes to unsupervised learning, research using deep learning has yet to show the same success as on supervised learning tasks. Responses to valid arguments such as "if not deep learning, then why not hierarchical temporal memory (HTM)?" will be covered in upcoming posts.

Books + Other readings Referred

  • Open Internet
  • Hands-on personal research work @AILabPage

Points to Note:

All credits, if any, remain with the original contributor only. We have now elaborated on our earlier post "AI, ML, and DL: Demystified" for understanding deep learning. You can find earlier posts on Machine Learning: The Helicopter View, Supervised Machine Learning, Unsupervised Machine Learning, and Reinforcement Learning here.

Conclusion – Deep learning, in short, goes well beyond machine learning and its supervised or unsupervised algorithms. DL uses many layers of nonlinear processing units for feature extraction and transformation. Learning is based on multiple levels of features or representations, with the layers forming a hierarchy from low-level to high-level features. Where traditional machine learning focuses on feature engineering, deep learning focuses on end-to-end learning from raw features. Traditional machine learning creates train-test splits of the data wherever possible via cross-validation, loads all the training data into main memory, and computes a model from it.
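The conclusion's central claim can be sketched in a few lines: each layer applies a weighted sum followed by a nonlinearity to the previous layer's output, so representations build up in a hierarchy. The weights below are arbitrary placeholders (in a real network they would be learned), purely to show the layered structure.

```python
import math

def layer(inputs, weights):
    """One layer of nonlinear processing units: weighted sums passed through tanh."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

raw = [0.5, -1.0, 2.0]                                # raw input, no hand-crafted features
h1 = layer(raw, [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])   # low-level features (2 units)
h2 = layer(h1, [[0.7, -0.7]])                         # higher-level feature (1 unit)
```

Stacking more such layers is what makes the learning "deep": the final output is a function of raw input alone, which is the end-to-end learning contrasted with feature engineering above.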

============================ About the Author =======================

Read about Author at : About Me

Thank you all for spending your time reading this post. Please share your opinions, comments, criticisms, and agreements or disagreements. For more details about posts, subjects, and relevance, please read the disclaimer.

FacebookPage | ContactMe | Twitter

Posted by V Sharma

A Technology Specialist boasting 22+ years of exposure to Fintech, Insuretech, and Investtech with proficiency in Data Science, Advanced Analytics, AI (Machine Learning, Neural Networks, Deep Learning), and Blockchain (Trust Assessment, Tokenization, Digital Assets). Demonstrated effectiveness in Mobile Financial Services (Cross Border Remittances, Mobile Money, Mobile Banking, Payments), IT Service Management, Software Engineering, and Mobile Telecom (Mobile Data, Billing, Prepaid Charging Services). Proven success in launching start-ups and new business units - domestically and internationally - with hands-on exposure to engineering and business strategy. "A fervent Physics enthusiast with a self-proclaimed avocation for photography" in my spare time.



