Among the recent breakthroughs in machine learning is the field of generative models, which can create ever more realistic samples from finite datasets of images, videos, or sounds. At the forefront of this revolution are Diffusion Models (DMs), which exploit a stochastic mapping from a simple base distribution to generate new samples of an unknown target distribution. However, the reasons for their success still lack a theoretical understanding. In this talk, I will give a brief introduction to diffusion models and then delve into the analysis of a well-defined high-dimensional model, a mixture of two Gaussians. Using methods from statistical physics, we will exhibit the various transitions taking place during the generation of a new sample. In particular, we first identify a ‘speciation’ transition, where the sample acquires its overall structure, followed by a second transition, called ‘collapse’, where the trajectories become attracted to one of the training points. These theoretical findings, established in the high-dimensional limit of the Gaussian mixture model, will then be generalised and validated by numerical experiments on realistic datasets.
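As a small illustration of the setting described above (not the talk's own code), the sketch below simulates reverse-time diffusion for a one-dimensional mixture of two Gaussians, a deliberately simplified stand-in for the high-dimensional model. Under an Ornstein–Uhlenbeck forward process, the score of this mixture is known in closed form, so we can integrate the reverse SDE exactly and watch trajectories, started from the simple base distribution, commit to one of the two modes. All parameter values (`mu`, `T`, step counts) are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 5.0               # component means at +/- mu (illustrative choice)
T, n_steps = 4.0, 400  # total diffusion time and Euler-Maruyama steps
dt = T / n_steps
n_samples = 2000

# Start from the simple base distribution N(0, 1); for large T the
# forward-diffused mixture p_T is close to it.
x = rng.standard_normal(n_samples)

# Forward process: OU, dx = -x dt + sqrt(2) dW. With unit component
# variance, p_t is exactly 1/2 N(+mu e^{-t}, 1) + 1/2 N(-mu e^{-t}, 1),
# whose score is -x + m_t tanh(m_t x) with m_t = mu e^{-t}.
for k in range(n_steps):
    t = T - k * dt                       # physical (forward) time, running down to 0
    m_t = mu * np.exp(-t)                # shrunken component mean at time t
    score = -x + m_t * np.tanh(m_t * x)  # exact score of the mixture at time t
    # Reverse-time SDE step: drift is -f + g^2 * score with f = -x, g = sqrt(2)
    x = x + (x + 2.0 * score) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n_samples)

# By t = 0 each trajectory has committed to one of the two modes at +/- mu.
print("fraction near a mode:", np.mean(np.abs(np.abs(x) - mu) < 3.0))
```

The `tanh` factor in the score is what drives the symmetry breaking: early on (`m_t` small) it is nearly zero and all trajectories look alike, while near `t = 0` it pushes each sample firmly towards `+mu` or `-mu`, a one-dimensional caricature of the speciation picture discussed in the talk.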