
Fine tune a Dreambooth model for image generation using Stable Diffusion with PyTorch


I recently gave a talk at Leeds Data Science, where I presented how to fine tune a Stable Diffusion model, using Google’s Dreambooth method, to create interesting image concepts for generation. It was well received (see: https://www.linkedin.com/posts/jumping-rivers-ltd_meetup-datascience-imagerecognition-activity-7067151517916549120-bv5g?utm_source=share&utm_medium=member_desktop), so I thought, “shall I do a blog to aid everyone?”. The answer lies in what follows.

What is Stable Diffusion?

There are various papers that cover the important concepts of stable diffusion, but in essence: we take an input image; add some random noise to it; down- and up-sample it using a UNet architecture to pick out the relevant parts of the image; and then a variational autoencoder (VAE) takes the encoded, noisy latents and decodes them to “paint” the image. By paint, we mean the decoder block of the autoencoder guesses (predicts) what it thinks the image is. This process is not one model, it is several, all working together in harmony.

The diagram below shows the general process we go through during diffusion:

Before we step into a high-level description, I will break down the important parts of the diagram above: the text conditioning (CLIP), the UNet and the variational autoencoders (VAEs).

In the next section I will delve into how CLIP, the VAEs and the UNets work.

Let’s explore some of the models we use in this process

Together, these models form what is known as a Latent Diffusion Model (LDM), which is used to process the images.

OpenAI’s CLIP is responsible for mapping text prompts to the representative images (this is the conditioning phase of the diagram above). The general process is described below:

As you can see, the text and image encoders are trained so that matching text and image pairs sit on the diagonal of the similarity matrix. This gives us the ability to link our text to images. The UNet works as below:

It takes the text embedding, the random noise (latents) and the relevant timestep, as stable diffusion models everything based on the timestep (zT). This way, the model can learn new features every time we pass it through a generation.

The last piece of the puzzle is the image painter, that is the variational autoencoder, which is responsible for taking the encoded information and trying to make a prediction. The cool part is that the diffusion process changes the image slightly at each step, in combination with the random noise we add, so after many generations the image starts to change over each timestep. Furthermore, because the model uses multiple attention heads over each feature map in the image, it learns from each of the attention heads via the queries, keys and values. Read about how ViT and Transformers work to understand this concept.

What does the Dreambooth do?

Dreambooth is an extension of the methods we have described above, with the overall aim of producing a personalised text-to-image model that can be used for later inference. The HuggingFace visual (credits to HuggingFace) shows this process:

Here we take a small sample of images with the class name of the object we are trying to use as our candidate for generation. This can be anything from landscapes, to people, to cartoon characters, to styling new shoes, you name it. We use a pretrained text-to-image model, do some fine tuning (i.e. I want to use my dog in this image) and give the output a unique identifier V. At this point, you get a personalised text-to-image model. All there is to do then is to engineer the prompt to start the generation process.

Put simply…

All you need to do is decide on your concept:

Once you have your concept in mind, we can go through how you want to store your images, how to load them in with the scripts provided with this tutorial and how you will then fine tune your model.

Pushing our images to HuggingFace Hub

The first step is to create an image folder locally and store the concept images you want to use in your fine tuning. For me, I used pictures of fjords to generate weird and wacky landscapes, but you could use your favourite pet, or your child. Essentially, put the images in a folder and load them up with a short Python script (included in the supporting repository):
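A minimal sketch of that script is below, assuming your concept images sit in a local ./images folder and that you have the datasets and huggingface_hub libraries installed. The dataset name is the one used in this tutorial, so swap in your own.

```python
from datasets import load_dataset
from huggingface_hub import login

# Authenticate with your HuggingFace access token
# (create one at https://huggingface.co/settings/tokens)
login()

# Load every image in the local folder as a dataset
dataset = load_dataset("imagefolder", data_dir="./images", split="train")

# Push the images to the Hub under your account as a dataset
# (swap in your own account/dataset name)
dataset.push_to_hub("StatsGary/dreambooth-hackathon-images")
```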

This will upload our images to HuggingFace, stored in our account under Datasets.

Please bear in mind you must have a HuggingFace account and have registered your access token. This is simple and can be done here: https://huggingface.co/welcome.

To view your images, navigate to Datasets in HuggingFace and you will see them loaded: https://huggingface.co/datasets/StatsGary/dreambooth-hackathon-images.

Get your images from HuggingFace Hub

Once we have published them, we will need to pull them back into our project for later use. The way to do this is shown below:
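In essence it is a single load_dataset call against the dataset we just pushed (swap in your own dataset name):

```python
from datasets import load_dataset

# Pull the concept images back down from the Hub
dataset = load_dataset("StatsGary/dreambooth-hackathon-images", split="train")
```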

This will pull the images from HuggingFace, and I then use the load_dataset function to create a training split of my images.

Visualise your images

I have created a supporting Dreambooth module that should be used with this project, as it simplifies a lot of the training code and extra utilities you will need to fine tune your model.

To create an image grid, all you need to do is load in the function from the dreambooth module and it will allow you to visualise your images:
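A rough sketch of that call is below; I am assuming here that the helper is exposed as image_grid and that the images sit under an image column, as they do for datasets pushed with the imagefolder loader:

```python
from dreambooth import image_grid  # helper from the supporting module (name assumed)

# Grab a handful of the concept images and tile them into a single grid
num_samples = 4
sample_images = [dataset[i]["image"] for i in range(num_samples)]
image_grid(sample_images, rows=1, cols=num_samples)
```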

This will create an output for your use case, as below:

Fine tuning the model

The dreambooth_train.py script does all the hard work for you, but I thought it worth taking you through it step by step to cement the knowledge.

You will need to clone the supporting GitHub repository to work with the next set of examples. The repository with the dreambooth module you need can be found here: https://github.com/StatsGary/stable-diffusion-leeds-data-science and you can pull it into your environment by running git clone https://github.com/StatsGary/stable-diffusion-leeds-data-science.git.

Pointers:

Once you have the repository, open the dreambooth_train.py file, as this is the first thing you need to run.

Get your imports needed

As with all Python projects, the first stage is to gather your imports into your Python project.
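The exact list sits at the top of dreambooth_train.py, but it looks roughly like this:

```python
import yaml
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel

# Supporting module shipped with this tutorial
from dreambooth import DreamBoothDataset, image_grid
```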

Bring in your dreambooth_param.yml file

All of this code relies on you changing your settings in the supporting config file. The config file has two parts, a training config and an evaluation config, so before fine tuning, simply tweak the config with your settings and hit run on the training script.

What does the config encompass?

This allows you to alter the model backbones for stable diffusion and CLIP, but these don’t need to be changed. What does need changing is the image store options and your output directories; the rest of the parameters can remain the same.

The Python train file loads this config and sets the project variables from the config that has been passed to the script:
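Something along these lines (the key names here are illustrative; the real ones are in dreambooth_param.yml):

```python
import yaml

# Load the supporting config file and pull out the training section
with open("dreambooth_param.yml", "r") as f:
    config = yaml.safe_load(f)

train_cfg = config["train"]              # key names are illustrative
learning_rate = train_cfg["learning_rate"]
max_train_steps = train_cfg["max_train_steps"]
output_dir = train_cfg["output_dir"]
image_dataset = train_cfg["image_store"]  # the HuggingFace dataset to pull
```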

Parameters such as the learning rate and max_train_steps are our hyperparameters, which we can tweak to get better model results.

Project setup steps

In the next set of steps we load in the dataset, set our concept name, create an instance prompt and then load in our tokenizer:
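A condensed sketch of those setup steps is below; the concept name, prompt wording and base model id are illustrative, so check the config and training script for the exact values:

```python
from datasets import load_dataset
from transformers import CLIPTokenizer

# Pull the concept images back down from the Hub
dataset = load_dataset("StatsGary/dreambooth-hackathon-images", split="train")

# The concept name and instance prompt describe what the new images represent
name_of_your_concept = "norweigen-fjords"   # illustrative value
type_of_thing = "fjords"                    # the class of the object
instance_prompt = f"a photo of {name_of_your_concept} {type_of_thing}"

# The CLIP tokenizer converts the instance prompt into token ids for the text encoder
tokenizer = CLIPTokenizer.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="tokenizer"
)
```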

Loading our data with a PyTorch dataloader

The dataloader is the next step. It will take the images, create batches and apply other augmentation steps, such as resizing, tensor creation and scaling: https://pytorch.org/vision/stable/transforms.html.

In the training script, you simply create your training dataset, here train_dataset, which takes your images, the instance_prompt and the text-to-image tokenizer:
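In a stripped-down form it looks like this; the real script also passes a collate function from the module that pads the prompt token ids and stacks the images into pixel_values and input_ids batches (names assumed):

```python
from torch.utils.data import DataLoader
from dreambooth import DreamBoothDataset  # from the supporting module

# Wrap the concept images, the instance prompt and the tokenizer in a PyTorch dataset
train_dataset = DreamBoothDataset(dataset, instance_prompt, tokenizer)

# Batch the examples for training (batch size comes from the config in practice)
train_dataloader = DataLoader(train_dataset, batch_size=1, shuffle=True)
```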

I have actually hidden some of what is going on here for you, so if you want to skip to the next heading, then go for it, otherwise stick around. The DreamBoothDataset resides in the dreambooth module to simplify the process for you, but I will include the method for those that want to know:
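The sketch below is close to the version that ships in the module (the exact field names may differ slightly):

```python
from torch.utils.data import Dataset
from torchvision import transforms


class DreamBoothDataset(Dataset):
    """Pairs each concept image with the tokenized instance prompt."""

    def __init__(self, dataset, instance_prompt, tokenizer, size=512):
        self.dataset = dataset
        self.instance_prompt = instance_prompt
        self.tokenizer = tokenizer
        self.size = size
        # Transformations applied to every image when it is fetched
        self.image_transforms = transforms.Compose(
            [
                transforms.Resize(size),
                transforms.CenterCrop(size),
                transforms.ToTensor(),
                transforms.Normalize([0.5], [0.5]),
            ]
        )

    def __len__(self):
        # First dunder: just the number of concept images
        return len(self.dataset)

    def __getitem__(self, index):
        # Second dunder: return one training example, the transformed image
        # plus the tokenized instance prompt
        example = {}
        image = self.dataset[index]["image"]
        example["instance_images"] = self.image_transforms(image.convert("RGB"))
        example["instance_prompt_ids"] = self.tokenizer(
            self.instance_prompt,
            padding="do_not_pad",
            truncation=True,
            max_length=self.tokenizer.model_max_length,
        ).input_ids
        return example
```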

The __init__ block takes in the parameters dataset, instance_prompt, tokenizer and the image size (size), with a default of 512. It stores these and sets up the image transformations (resizing, centre cropping, tensor conversion and normalisation) that are applied when each example is fetched.

That is all the class initialisation block does. The dunder methods do two things: the first (__len__) just gives the length of the dataset, and the second (__getitem__) returns a single training example, namely the transformed image and the tokenized instance prompt.

Load in pretrained models

As we went through in depth earlier, stable diffusion is a multi-model approach, and we need to load in the relevant components. Luckily, HuggingFace has all the pretrained models that you need for the task; this is what is happening in the next step:
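In code, that amounts to three from_pretrained calls; the base model id below is illustrative, as in the repository it comes from the config:

```python
from transformers import CLIPTextModel
from diffusers import AutoencoderKL, UNet2DConditionModel

# Base model id is illustrative; in the repo it is read from the config file
model_id = "CompVis/stable-diffusion-v1-4"

# Text encoder: turns the tokenized prompt into embeddings (the CLIP part)
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
# VAE: encodes images into latents and decodes latents back into images
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
# UNet: predicts the noise added to the latents at each timestep
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
```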

Now let’s examine the training step in the next section.

Training the model

The fun bit: we hit train and watch the epochs count down.

This is a memory intensive modelling process, so you will need bitsandbytes to be working properly. If errors arise when training, go to this repository and have a look at similar issues people have had: https://github.com/TimDettmers/bitsandbytes.

It is simple with the script I have provided, as I have created the PyTorch training loop for you, which involves creating the noise scheduler and all the other components. If you are so inclined, check out the dreambooth.train module.

This training script takes in all the parameters from the config file, therefore you should not need to change anything in the training loop. A couple of tips:

The rest are all regular components of a neural network you have seen before, such as the learning rate, i.e. the rate at which the model weights are updated.

The full script for the model training is included in the dreambooth_train.py file and the dreambooth.train module in the repository.
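In condensed form, the loop looks roughly like the sketch below. It builds on the objects created in the earlier snippets (unet, vae, text_encoder, train_dataloader and the config values) and assumes the batches are collated into pixel_values and input_ids; the real module adds gradient accumulation, checkpointing and saving of the final pipeline.

```python
import torch
import torch.nn.functional as F
from accelerate import Accelerator
from diffusers import DDPMScheduler
import bitsandbytes as bnb

accelerator = Accelerator(mixed_precision="fp16")
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Only the UNet is trained; freeze the VAE and text encoder
vae.requires_grad_(False)
text_encoder.requires_grad_(False)

# 8-bit Adam from bitsandbytes keeps the memory footprint down
optimizer = bnb.optim.AdamW8bit(unet.parameters(), lr=learning_rate)

unet, optimizer, train_dataloader = accelerator.prepare(unet, optimizer, train_dataloader)
vae.to(accelerator.device)
text_encoder.to(accelerator.device)

global_step = 0
while global_step < max_train_steps:
    for batch in train_dataloader:
        # 1. Encode the images into latents with the VAE and scale them
        latents = vae.encode(batch["pixel_values"].to(accelerator.device)).latent_dist.sample()
        latents = latents * 0.18215

        # 2. Sample noise and a random timestep, then add the noise to the latents
        noise = torch.randn_like(latents)
        timesteps = torch.randint(
            0, noise_scheduler.config.num_train_timesteps,
            (latents.shape[0],), device=latents.device,
        ).long()
        noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

        # 3. Get the text embedding used for conditioning
        encoder_hidden_states = text_encoder(batch["input_ids"].to(accelerator.device))[0]

        # 4. Predict the noise with the UNet and take a gradient step on the MSE loss
        noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
        loss = F.mse_loss(noise_pred.float(), noise.float())

        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()

        global_step += 1
        if global_step >= max_train_steps:
            break
```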

To set the model running, once the config is updated, use python dreambooth_train.py in your VS Code terminal, or any terminal, and you will see the training process commence.

Push the model to the hub

When the training has finished, we can then push the model to the HuggingFace Hub, so that other people can see and play with your model. The steps to do this are contained in the code below:
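A minimal sketch of that push step, assuming the fine-tuned pipeline was saved to the output directory set in the config, looks like this:

```python
from huggingface_hub import create_repo, upload_folder

# Create (or reuse) a model repository under your account and upload the trained weights
repo_id = "StatsGary/norweigen-fjords-fjords"   # swap in your own model name
create_repo(repo_id, exist_ok=True)
upload_folder(repo_id=repo_id, folder_path=output_dir)

# The model card description can be added on the Hub, or by editing README.md later
```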

Here, all you need to do is add a description for your model card in HuggingFace (you can edit the README.md file later on); the config takes care of the rest for you, as it picks up where the trained model is stored and loads it from there.

After running the dreambooth_push_to_hub.py script in the repository your model will appear on HuggingFace, in your account: https://huggingface.co/StatsGary/norweigen-fjords-fjords.

Inferencing the model on the hub

You will now have a model on the hub that can be used to perform text-to-image inference, see the screenshot:

This allows you to create all sorts of new images. There is also a script version of this and you can find out how to use that on the README of the supporting repository: https://github.com/StatsGary/stable-diffusion-leeds-data-science/blob/main/README.md.
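If you prefer to run it as code rather than through the hosted widget, a short diffusers snippet along these lines will do the job (the prompt is just an example):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned model straight from the Hub
pipe = StableDiffusionPipeline.from_pretrained(
    "StatsGary/norweigen-fjords-fjords", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of norweigen-fjords fjords at sunset, highly detailed"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("fjord.png")
```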

Here is an example of some of the fun things I have generated with my model, having no knowledge of what a Fjord in Norway looked like before:

Where to get the code?

If you haven’t guessed already, as the repository has been linked several times, the code for this tutorial can be found here: https://github.com/StatsGary/dreambooth-fine-tuning-pytorch/tree/main.

I hope you have found this tutorial useful; it certainly helped me rank second, in the landscape category, in the Dreambooth competition that HuggingFace ran towards the end of 2022 and the start of this year:

Although the competition is over, you can still have fun generating your own images.

Peace!!!

