Generative AI applications like ChatGPT, DALL-E 2 and Stable Diffusion have taken the world by storm, and hearing the buzzword "Generative AI" is now unavoidable. But have you ever wondered which mathematical foundations lie beneath the technology? What does it really mean for AI to be generative, and what other types of AI are there?
The term generative model originated in the field of statistical classification, a subfield of mathematics concerned with deciding which category a given object belongs to. Take a data point (for example, a piece of text or an image of an animal) and call it $x$. Statistical classification concerns itself with assigning this data point a label $y$ from a given list of categories; for example, "formal" and "casual" are two possible categories of text, and "dogs" and "cats" are categories of animals.
There are three main approaches to solving such a task: purely discriminant models, probabilistic discriminant models and generative models.
We will first look at the most obvious type of model, discriminant models, and why they are not sufficient for most needs.
In this case we completely ignore that there is an underlying distribution. We directly want to predict the label $y$ from the data point $x$ with some function $f(x)$. A common framework for this purely discriminant approach is statistical learning theory. We choose a loss function $L$, which allows us to formulate the expected risk that we want to minimize:

$$R(f) = \mathbb{E}_{(x, y) \sim p(x, y)}\big[L(f(x), y)\big]$$
We can't compute this, though, since we don't know the underlying probability distribution $p(x, y)$ and we also don't want to make any assumptions about it. Therefore we approximate it empirically:

$$R_{\mathrm{emp}}(f) = \frac{1}{n} \sum_{i=1}^{n} L(f(x_i), y_i)$$
Here $(x_i, y_i)$ stands for the $i$-th data sample. Now we just need an algorithm that minimizes this empirical risk; famous examples are the perceptron, support vector machines and neural networks trained with a classification loss.
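To make this concrete, here is a minimal sketch of empirical risk minimization, assuming a linear classifier $f(x) = w^\top x + b$ and a logistic loss; the toy data and all hyperparameters are made up purely for illustration.

```python
# Minimal empirical risk minimization sketch: a linear classifier f(x) = w·x + b
# trained by gradient descent on a logistic loss (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
# Toy training set: class 0 around (-2, -2), class 1 around (+2, +2).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # sigmoid of f(x_i)
    # Empirical risk: average loss over the n training samples.
    risk = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # Gradient descent step on the empirical risk.
    w -= lr * (X.T @ (p - y) / len(y))
    b -= lr * np.mean(p - y)

print("final empirical risk:", risk)
```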
In the approach above (the purely discriminant model) we did not want to make any assumptions about the underlying distribution when optimizing the expected risk. But here we do. To be more precise, we make assumptions about how we want to model $p(y \mid x)$. As an example we could model it like this: $p_\theta(y \mid x) = \mathcal{N}(y \mid w^\top x + b, \sigma^2)$ with parameters $\theta = (w, b, \sigma)$. (Note that in this example we induce a bias on how $x$ and $y$ relate to each other. This is a common tradeoff; here we forced a linear relation between $x$ and $y$.) Now what is left is to find the optimal parameters $\theta$. We can do this through maximum likelihood estimation on our training set:

$$\theta^{\ast} = \arg\max_{\theta} \prod_{i=1}^{n} p_\theta(y_i \mid x_i)$$
What this means is that we want to choose our parameters such that the samples from the training data are very likely under the model. Also note that for most model choices the optimization problem above has no closed-form solution. The common way is therefore to optimize it with gradient descent (usually on the log-likelihood), since it is differentiable.
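As a sketch of what this looks like in practice, assuming the linear-Gaussian model for $p_\theta(y \mid x)$ from above, the maximum likelihood parameters can be found by gradient descent on the negative log-likelihood; the synthetic data, learning rate and iteration count are illustrative assumptions.

```python
# Maximum likelihood for p(y|x) = N(y | w*x + b, sigma^2) via gradient descent
# on the negative log-likelihood (data and hyperparameters are made up).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=200)
y = 3.0 * X + 1.0 + rng.normal(scale=0.5, size=200)   # "true" w=3, b=1, sigma=0.5

w, b, log_sigma, lr = 0.0, 0.0, 0.0, 0.05
for _ in range(2000):
    sigma2 = np.exp(2 * log_sigma)
    resid = y - (w * X + b)
    # Negative log-likelihood of the training set (up to an additive constant).
    nll = np.mean(0.5 * resid**2 / sigma2 + log_sigma)
    # Gradient steps on the NLL with respect to theta = (w, b, log_sigma).
    w -= lr * (-np.mean(resid * X) / sigma2)
    b -= lr * (-np.mean(resid) / sigma2)
    log_sigma -= lr * (1.0 - np.mean(resid**2) / sigma2)

print(f"final NLL {nll:.3f}: w ≈ {w:.2f}, b ≈ {b:.2f}, sigma ≈ {np.exp(log_sigma):.2f}")
```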
Now to the one we are all so hyped about: the generative model approach. Here we try to model the whole underlying data distribution, namely $p(x, y)$. The nice thing about this is that if we get it right, we have a full understanding of the whole distribution. This means we can do outlier detection, have a degree of belief and, most importantly, generate new samples, i.e. we can create more images of cats and dogs. The usual approach is again to guess a family of parametric probabilistic models and then infer its parameters. Note that the following holds for any joint probability distribution:

$$p(x, y) = p(y \mid x)\, p(x)$$
Therefore we make an assumption about $p(y \mid x)$ (for example the same assumption that we took for the probabilistic discriminant model) and additionally about $p(x)$. For example we could model $p_\phi(x)$ as a mixture of Gaussians, $p_\phi(x) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}(x \mid \mu_k, \Sigma_k)$. Now again we try to find optimal parameters $\theta$ and $\phi$. Famous examples of such models are naive Bayes, Gaussian mixture models and hidden Markov models.
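Here is a minimal sketch of this factorization, assuming a logistic-regression model for $p_\theta(y \mid x)$ and a two-component Gaussian mixture for $p_\phi(x)$; the toy data and the component count are assumptions for illustration only.

```python
# Sketch of a generative model p(x, y) = p(y|x) * p(x): p(y|x) is a
# logistic regression (theta), p(x) a mixture of Gaussians (phi).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy "cat"/"dog" features: two blobs in 2-D.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

p_y_given_x = LogisticRegression().fit(X, y)                   # parameters theta
p_x = GaussianMixture(n_components=2, random_state=0).fit(X)   # parameters phi

# Degree of belief / outlier detection: log p(x) of a new point.
print("log p(x) at the origin:", p_x.score_samples([[0.0, 0.0]]))

# Generating new labelled samples: draw x ~ p(x), then y ~ p(y|x).
new_x, _ = p_x.sample(5)
new_y = (rng.random(5) < p_y_given_x.predict_proba(new_x)[:, 1]).astype(int)
print(new_x, new_y)
```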
Up until now we always explicitly made assumptions about the underlying distribution, for example that it is a mixture of Gaussians. However, such assumptions are almost always wrong in practice. With the introduction of deep learning we can now model a distribution as a deep neural network: $p_\theta(x) = \mathrm{DeepNeuralNetwork}_\theta(x)$, parameterized by weights $\theta$ that we optimize. You can already imagine that this is a big game changer: we no longer have to hand-craft model assumptions and can model the underlying distribution directly. The only question that remains is how well the deep neural network can approximate it. Luckily, a lot of progress has been made in the past years, and for most domains, like text, images and even audio, it is now possible. There has been a lot of effort in finding deep neural network architectures that do this task well, and, surprisingly, different architectures are better suited to different domains. Famous examples are variational autoencoders (VAEs), generative adversarial networks (GANs), diffusion models for images and autoregressive transformers for text.
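As a rough sketch of what modeling $p(x)$ with a deep neural network can look like, here is a toy variational autoencoder in PyTorch. The architecture, the 2-D toy data and all hyperparameters are assumptions made purely for illustration and are not how the systems from the introduction are built.

```python
# Toy variational autoencoder (VAE): a deep neural network that models p(x)
# on 2-D data and can generate new samples by decoding draws from its prior.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, data_dim=2, latent_dim=2, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(data_dim, hidden), nn.ReLU())
        self.mu_head = nn.Linear(hidden, latent_dim)
        self.logvar_head = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, data_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    # Squared-error reconstruction term (Gaussian decoder with fixed variance)
    # plus the KL divergence of the approximate posterior to the N(0, I) prior.
    recon = ((x - x_hat) ** 2).sum(dim=1).mean()
    kl = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)).mean()
    return recon + kl

# Toy training data: two Gaussian blobs standing in for "cats" and "dogs".
x = torch.cat([torch.randn(500, 2) + 3.0, torch.randn(500, 2) - 3.0])
model = TinyVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    x_hat, mu, logvar = model(x)
    loss = elbo_loss(x, x_hat, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Generating new samples: decode latent vectors drawn from the prior.
with torch.no_grad():
    new_samples = model.decoder(torch.randn(10, 2))
print(new_samples)
```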
Generative models got their name because they can generate new samples, and they can do that because they model the complete underlying distribution. This is a very hard problem, and it was made possible through deep neural networks.