Deep learning is an algorithm that has no theoretical limitations on what it can learn; the more data and the more computational time you give it, the better it gets. – Geoffrey Hinton (Google)
The true challenge for Artificial Intelligence is to solve the tasks that are easy for humans to perform but hard to describe formally: problems we solve intuitively, that feel automatic, like recognizing spoken words or faces in images. In deep learning, these are the tasks we try to solve at AILabPage research.
Deep learning is a technique for implementing machine learning. At the same time, I claim it is absolutely wrong to simply call deep learning machine learning (in my opinion): a technique exists to achieve a goal, but it does not necessarily come out of the same goal.
Deep learning’s main drivers are artificial neural network systems, also called neural networks or neural nets. Specialized versions are also available, such as convolutional neural networks and recurrent neural networks, which address particular problem domains. Two of the best, and unique, use cases for deep learning are image processing and text/speech processing, both built on methodologies like deep neural nets.
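As a minimal sketch of what a neural net actually is (the layer sizes, learning rate, and the XOR toy task below are illustrative assumptions, not from this post): stacked layers of weighted sums followed by nonlinearities, trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: XOR, a function no single linear layer can represent,
# but one hidden layer of nonlinear units can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units (sizes are an illustrative choice).
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def mse():
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((out - y) ** 2))

loss_before = mse()
lr = 1.0
for _ in range(5000):
    # Forward pass: two layers of weighted sum + nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
loss_after = mse()
```

Nothing here is specific to deep learning yet; stacking many such layers, rather than one, is what makes a network "deep".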
In practice, deep learning methods, specifically recurrent neural network (RNN) models, are used for complex predictive analytics such as share price forecasting, which consists of several stages. (Machine learning more broadly also includes decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others; these are not themselves deep learning methods.)
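What makes an RNN suitable for sequence problems like price forecasting is its hidden state, which at each step mixes the new input with a summary of everything seen so far. A minimal sketch of one (untrained) recurrent cell, with all sizes and weights as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

hidden_size = 8
Wx = rng.normal(0, 0.5, (1, hidden_size))            # input  -> hidden
Wh = rng.normal(0, 0.5, (hidden_size, hidden_size))  # hidden -> hidden (memory)
Wy = rng.normal(0, 0.5, (hidden_size, 1))            # hidden -> output

def rnn_forecast(series):
    """Run an Elman-style RNN over a 1-D series, one prediction per step."""
    h = np.zeros(hidden_size)
    outputs = []
    for x in series:
        # The state update: new input plus the carried-over hidden state.
        h = np.tanh(np.array([x]) @ Wx + h @ Wh)
        outputs.append(float(h @ Wy))  # next-value prediction
    return outputs

preds = rnn_forecast([1.0, 1.2, 1.1, 1.3, 1.25])
```

In a real forecasting pipeline the weights would of course be trained (typically by backpropagation through time) rather than random; this sketch only shows the recurrence itself.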
Deep learning is the first class of algorithms that is scalable: performance just keeps getting better as we feed the algorithms more data. Speech/text and image processing make a perfect starting point for a robot, and trigger-based actions make it even better. To reach ASI status, it would have to pass four basic tests: the Turing test, acquiring a college degree, working as an employee for at least 20 years, and doing well enough to earn promotions.
Deep Learning is not Machine Learning
The major point where DL differs from ML is in its working style. ML works from past and present figures and then takes an educated guess (of sorts) about the future, whereas DL goes far beyond a guess: it uses patterns in the data to make decisions and predictions, with real-world examples from healthcare involving genomics and preterm birth.
Deep learning takes feature engineering to the next level by automating it. Deep learning methodologies can learn directly from raw data and map it to the intended goals, for instance uncovering hidden themes in large collections of documents using topic modeling. We will address these questions in subsequent blog posts.
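The core of "learning directly from raw data" is representation learning: instead of hand-crafting features, the model learns its own compressed description of the input. A minimal sketch of this idea using a tiny linear autoencoder (the data, sizes, and learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Raw 4-D inputs in which two directions dominate, so a 2-D code
# can capture most of the structure without hand-made features.
X = rng.normal(0, 1, (200, 4))
X[:, 2] = 0.9 * X[:, 0]
X[:, 3] = 0.9 * X[:, 1]

W_enc = rng.normal(0, 0.1, (4, 2))  # raw input -> learned 2-D code
W_dec = rng.normal(0, 0.1, (2, 4))  # code -> reconstruction

def recon_error():
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

err_before = recon_error()
lr = 0.01
for _ in range(500):
    code = X @ W_enc            # the learned representation
    recon = code @ W_dec
    grad = 2 * (recon - X) / len(X)
    W_dec -= lr * code.T @ grad
    W_enc -= lr * X.T @ (grad @ W_dec.T)
err_after = recon_error()
```

The falling reconstruction error shows the encoder discovering useful features of the raw data on its own; deep autoencoders apply the same idea with many nonlinear layers.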
If supremacy is the basis for popularity, then deep learning is almost there (at least for supervised learning tasks). Deep learning attains the highest rank in terms of accuracy when trained with huge amounts of data.
The analogy to deep learning is that the rocket engine is the deep learning models and the fuel is the huge amounts of data we can feed to these algorithms. – Andrew Ng
One of the biggest issues and limitations of deep learning is that it requires high-end machines, as opposed to traditional machine learning algorithms. GPUs have become a basic, taken-for-granted requirement for running any deep learning algorithm.
There are several advantages to using deep learning over traditional machine learning algorithms. Deep learning outperforms when the data size is large, but with small data sizes, traditional machine learning algorithms are preferable.
- Knowing the unknown – DL techniques outshine others where there is little domain understanding for feature introspection, or little understanding of feature engineering, and remove that worry.
- Nothing is complex – For complex problems such as image, video, or voice recognition, or natural language processing, DL works like a charm.
On the other hand, when it comes to unsupervised learning, research using deep learning has yet to show results comparable to those on supervised learning tasks. Responses to the valid question of “if not deep learning, then why not Hierarchical Temporal Memory (HTM)?” will be covered in upcoming posts.
Points to Note:
All credits, if any, remain with the original contributors only. We have now elaborated our earlier posts on “AI, ML and DL – Demystified” with a focus on deep learning only. You can find our earlier posts at the links for Machine Learning – The Helicopter View, Supervised Machine Learning, Unsupervised Machine Learning, and Reinforcement Learning.
Conclusion – Deep learning, in short, goes far beyond machine learning and its algorithms, which are either supervised or unsupervised. DL uses many layers of nonlinear processing units for feature extraction and transformation; learning is based on multiple levels of features or representations in each layer, with the layers forming a hierarchy from low-level to high-level features. Where traditional machine learning focuses on feature engineering, deep learning focuses on end-to-end learning from raw features. Traditional machine learning creates train/test splits of the data, wherever possible via cross-validation, loads all the training data into main memory, and computes a model from it.
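The train/test-split workflow mentioned above can be sketched in a few lines; this is a plain-NumPy k-fold cross-validation index builder (the fold count and data size are illustrative choices), where every sample lands in the test set exactly once across the k rounds:

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Yield (train_idx, test_idx) pairs; each sample is tested exactly once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)      # shuffle once up front
    folds = np.array_split(idx, k)        # k roughly equal folds
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx

# 10 samples, 5 folds: each round trains on 8 samples and tests on 2.
splits = list(kfold_indices(10, 5))
```

In real projects, a library routine such as scikit-learn's `KFold` would typically replace this sketch; the point is only to make the split-and-hold-out loop concrete.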
Books + Other readings Referred
- Open Internet
- Hands on personal research work @AILabPage
============================ About the Author =======================
Read about Author at : About Me
Thank you all for spending your time reading this post. Please share your opinions, comments, criticism, agreements, or disagreements. For more details about posts, subjects, and relevance, please read the disclaimer.
Categories: Deep Learning