Generative Adversarial Networks (GANs) – A combination of two neural networks that forms a very effective generative model, and one that works in roughly the opposite direction to most other networks. Where most neural network models take complex input and produce a simple output (say, a class label), GANs take simple input (random noise) and produce complex output (an image, for example). GANs are a very young member of the deep neural network family, introduced by Ian Goodfellow and his team at the University of Montreal in 2014. They are a class of unsupervised machine learning algorithms.
Adversarial training is “the most interesting idea in the last 10 years in the field of Machine Learning.” – Yann LeCun, Facebook’s AI research director.
Let’s Unpack This Jargon – GANs Intro
A Generative Adversarial Network is simply another type of neural network architecture used for generative modelling. In simple words, it creates new samples that are similar to the training data yet not identical to it.
As per the dictionary definition, “adversarial” means involving two people or two sides who oppose each other, as in an adversarial relationship or an adversarial system of justice where prosecution and defence oppose each other. You may wonder why we need a system that generates almost real-looking images when it could be misused as easily as used correctly. We will find out in detail below.
GANs are a class of algorithms used in unsupervised learning. As the name suggests, they are called adversarial networks because they are made up of two competing neural networks playing a zero-sum game. Each network is assigned a different job and contests with the other:
- The first neural network is called the Generator because it generates new data instances.
- The second neural network is called the Discriminator; it evaluates the first network's output for authenticity.
The cycle continues until the results approach near-perfect accuracy. To understand GANs, let's take a scenario from my home: my son is a much better chess player than I am, and if I want to become a better player than I am today, I should play against him. Still confused? That's ok; let me flesh out this real-world example below.
Let's map the two competing neural networks onto this opponent-and-adviser setup: I am the "Generator" and my son, who is far stronger than me at chess, is the "Discriminator". If I keep playing against him, my game will surely improve. Behind the scenes, I analyse what I did wrong and what he did right, and then work out a strategy that might help me beat him in the next game.
From the example above it's clear that I need to keep refining my strategy, learning from each attempt, until he (my son) is finally defeated; the same concept can be programmed and used to build my data models (my strategy). In short, to get better at my game (as the generator), I need to become cleverer by learning from my more powerful opponent (the discriminator). Of course, I never really want to defeat my son, but in GANs the generator genuinely tries to beat the discriminator.
The Generator
The generator network's job is to create synthetic outputs from random noise, for example images. This neural network is called the Generator because it generates new data instances. A minimal sketch of such a network follows below.
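For intuition, here is a minimal generator sketch. The post itself does not prescribe a framework; PyTorch, the layer sizes and the 28x28 image shape below are illustrative assumptions only.

```python
import torch
import torch.nn as nn

# Minimal generator sketch: maps a 100-dimensional noise vector to a
# flattened 28x28 image. Layer sizes are illustrative assumptions.
class Generator(nn.Module):
    def __init__(self, noise_dim=100, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, img_dim),
            nn.Tanh(),  # outputs in [-1, 1], matching images normalised to that range
        )

    def forward(self, z):
        return self.net(z)

# Usage: sample a batch of noise vectors and generate fake images.
z = torch.randn(16, 100)
fake_images = Generator()(z)  # shape: (16, 784)
```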
The Discriminator
The discriminator tries to identify whether its input is real or fake; it evaluates the generator's work for authenticity. A matching sketch follows below.
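And here is the matching discriminator sketch, under the same assumptions (PyTorch, flattened 28x28 inputs, illustrative layer sizes): it outputs the probability that its input came from the real dataset.

```python
import torch
import torch.nn as nn

# Minimal discriminator sketch: takes a flattened 28x28 image and outputs
# the probability that it belongs to the real dataset.
class Discriminator(nn.Module):
    def __init__(self, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # probability of "real"
        )

    def forward(self, x):
        return self.net(x)

# Usage: score a batch of (real or generated) flattened images.
images = torch.randn(16, 28 * 28)  # stand-in for a real or fake batch
scores = Discriminator()(images)   # shape: (16, 1), values in (0, 1)
```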
How do GANs work?
As described above, GANs consist of two neural networks: a generator that produces a fake image (a fake currency note in our example) and a discriminator that classifies it as real or fake. The generator's role is to map its input (random noise) to the desired data space, an image in our example. The discriminator's role, on the other hand, is to compare a sample against the real dataset and output the probability that it is real or fake.
The whole idea of the game is for the two neural networks to face off against each other and get better with every attempt. The expected end result is a generator that produces realistic, or almost realistic, outputs. The discriminator is trained to do its job, which is to correctly classify input data as either real or fake; its weights are updated so that the probability it outputs is:
- Maximised – any real data input is classified as “belongs to the real dataset”
- Minimised – any fake data input is classified as “belongs to the real dataset”
The generator, in turn, is trained to fool the discriminator by producing data that is as close to real as possible. At this step, the generator's weights are updated so that the probability the discriminator outputs is:
- Maximised – any fake data input is classified as “belongs to the real dataset”
After several training iterations, we check whether the generator and discriminator have reached a point where no further improvement is possible. This is the point where the generator produces realistic (fake) synthetic data and the discriminator can no longer tell fake from real. From the two scenarios above, it's clear that during training the two loss functions are optimised in opposite directions, much like the situation in our example where both players feel they are at their best.
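These two opposing updates are usually written as a single minimax objective, the standard formulation from the original GAN paper (Goodfellow et al., 2014):

$$
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
$$

Here D(x) is the discriminator's estimated probability that x is real, and G(z) is the generator's output for a noise sample z. The discriminator tries to maximise V while the generator tries to minimise it; in practice the generator is often trained to maximise log D(G(z)) instead, which gives stronger gradients early in training.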
Steps Involved in Training GANs
Step 1: Problem definition: The goal needs to be very clear, i.e. whether you want to generate fake video by feeding in live video frames, fake text, and so on. Without a clear goal you won't discover anything useful.
Step 2: Setting up and defining the architecture: With the problem and need in mind, decide what architecture the generator and the discriminator should use, e.g. convolutional neural networks or just simple multilayer perceptrons.
Step 3: Discriminator training with real data: In our example, we feed real currency images (using convolutional neural networks), since that is what we want to fake, and train the discriminator to correctly predict them as real.
Step 4: Discriminator training with fake inputs: Collect generated data and train the discriminator to correctly predict it as fake.
Step 5: Generator training with the output of the discriminator: Once the discriminator is trained, take its predictions and use them as the objective for training the generator, i.e. train the generator to fool the discriminator.
Step 6: Repeat steps 3 to 5 in a loop.
Steps 7 & 8: Checking and validating: Inspect the fake data manually, run a performance check, and decide whether to continue training or to stop. Steps 3 to 7 are sketched in code just after this list.
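The loop in steps 3 to 7 can be summarised in a short training sketch. This reuses the Generator and Discriminator classes sketched earlier; the optimiser settings, batch size and the placeholder "real" batch are assumptions for illustration, not a definitive implementation.

```python
import torch
import torch.nn as nn

# Illustrative GAN training loop for steps 3-7, reusing the Generator and
# Discriminator sketches above. Hyper-parameters are assumed values.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):                       # real_images: (batch, 784)
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Steps 3-4: train the discriminator on a real batch and a fake batch.
    z = torch.randn(batch, 100)
    fake_images = G(z).detach()                    # detach: do not update G here
    d_loss = bce(D(real_images), real_labels) + bce(D(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Step 5: train the generator to fool the (now frozen) discriminator.
    z = torch.randn(batch, 100)
    g_loss = bce(D(G(z)), real_labels)             # generator wants D to say "real"
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Step 6: loop over the data for several epochs, checking outputs as you go.
for epoch in range(5):
    real_batch = torch.randn(64, 28 * 28)          # placeholder for a real data batch
    d_loss, g_loss = train_step(real_batch)
    print(f"epoch {epoch}: d_loss={d_loss:.3f}, g_loss={g_loss:.3f}")
```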
Challenges with GANs
Most GAN research and practical application so far has focused on the computer vision domain, and several challenges remain:
- GANs are more unstable than other neural networks and are very difficult to train.
- Because of this instability, GANs may never converge; with this non-convergence problem the model parameters oscillate, which can lead to overfitting.
- Two networks must be trained through the same backpropagation machinery, which makes it difficult to choose the correct objective.
- The generator often collapses and produces only limited sample variation (mode collapse).
- The discriminator can become too good too fast, causing the generator's gradient to vanish so that it may not learn anything.
- GANs are extremely sensitive to hyper-parameter selection.
One of the most interesting use cases for GANs is generating sequences of images, for example to predict the next frames of a video or a GIF.
Use Cases
Generative adversarial networks have several real-life business use cases; a few of them are listed below:
- Detection of counterfeit currency
- Creating original (fake) artwork samples.
- Simulation and planning using time-series data, e.g. for video or audio.
Learn Directly from the Creator
Must watch videos
Points to Note:
All credits, if any, remain with the original contributors only. We have covered the basics of Generative Adversarial Networks here; such generative tasks often struggle to find the best companion between CNN and RNN algorithms when looking for information.
Books + Other material Referred
- Research through open internet, news portals, white papers and imparted knowledge via live conferences & lectures.
- Lab and hands-on experience of @AILabPage (Self-taught learners group) members.
- NIPS 2016 Tutorial: GANs
- Generative Adversarial Networks
- Unsupervised Representation Learning with Deep Convolutional GAN
Feedback & Further Questions
Do you have any questions about Deep Learning or Machine Learning? Leave a comment in the comment section or ask your question via email; I will try my best to answer it.
Conclusion: In this post, we have learnt some high-level basics of GANs, Generative Adversarial Networks. GANs are a recent development but look very promising and effective for many real-life business use cases. One thing to note here: the two networks, G and D, are designed to contest, not to pull each other down; together they achieve something bigger. The discriminator helps teach the generator through constant feedback, giving indirect suggestions about what to adjust, and this process makes the generator strong and well trained. Commercial GAN models are already out, but GANs are still very much in the research phase, with new variants appearing quite frequently.
======================== About the Author ===================
Read about Author at : About Me
Thank you all for spending your time reading this post. Please share your opinions, comments, criticisms, agreements or disagreements. For more details about posts, subjects and relevance, please read the disclaimer.
============================================================