Merge branch 'development' of github.com:lstein/stable-diffusion into development
README.md
@@ -1,7 +1,7 @@
|
||||
<h1 align='center'><b>Stable Diffusion Dream Script</b></h1>
|
||||
|
||||
<p align='center'>
|
||||
<img src="static/logo.png"/>
|
||||
<img src="docs/assets/logo.png"/>
|
||||
</p>
|
||||
|
||||
<p align="center">
|
||||
@@ -12,439 +12,91 @@
|
||||
<img src="https://img.shields.io/github/issues-pr/lstein/stable-diffusion?logo=GitHub&style=for-the-badge" alt="pull-requests"/>
|
||||
</p>
|
||||
|
||||
This is a fork of CompVis/stable-diffusion, the wonderful open source
|
||||
text-to-image generator. This fork supports:
|
||||
# **Stable Diffusion Dream Script**
|
||||
|
||||
1. An interactive command-line interface that accepts the same prompt
|
||||
and switches as the Discord bot.
|
||||
This is a fork of [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion), the open source text-to-image generator. It provides a streamlined workflow with various new features and options to aid the image generation process.
|
||||
|
||||
2. A basic Web interface that allows you to run a local web server for
|
||||
generating images in your browser.
|
||||
_Note: This fork is rapidly evolving. Please use the [Issues](https://github.com/lstein/stable-diffusion/issues) tab to report bugs and make feature requests. Be sure to use the provided templates. They will help us diagnose issues faster._
|
||||
|
||||
3. Support for img2img in which you provide a seed image to guide the
|
||||
image creation. (inpainting & masking coming soon)
|
||||
# **Table of Contents**
|
||||
|
||||
4. Preliminary inpainting support.
|
||||
|
||||
5. A notebook for running the code on Google Colab.
|
||||
|
||||
6. Upscaling and face fixing using the optional ESRGAN and GFPGAN
|
||||
packages.
|
||||
|
||||
7. Weighted subprompts for prompt tuning.
|
||||
|
||||
8. [Image variations](VARIATIONS.md) which allow you to systematically
generate variations of an image you like and combine two or more
images together to merge the best features of both.
|
||||
|
||||
9. Textual inversion for customization of the prompt language and images.
|
||||
|
||||
10. ...and more!
|
||||
|
||||
This fork is rapidly evolving, so use the Issues panel to report bugs
|
||||
and make feature requests, and check back periodically for
|
||||
improvements and bug fixes.
|
||||
|
||||
# Table of Contents
|
||||
|
||||
1. [Major Features](#features)
|
||||
2. [Changelog](#latest-changes)
|
||||
3. [Installation](#installation)
|
||||
1. [Linux](#linux)
|
||||
1. [Windows](#windows)
|
||||
1. [MacOS](README-Mac-MPS.md)
|
||||
1. [Installation](#installation)
|
||||
2. [Major Features](#features)
|
||||
3. [Changelog](#latest-changes)
|
||||
4. [Troubleshooting](#troubleshooting)
|
||||
5. [Contributing](#contributing)
|
||||
6. [Support](#support)
|
||||
|
||||
# Features
|
||||
# Installation
|
||||
|
||||
## Interactive command-line interface similar to the Discord bot
|
||||
This fork is supported across multiple platforms. You can find individual installation instructions below.
|
||||
|
||||
The _dream.py_ script, located in scripts/dream.py,
provides an interactive interface to image generation similar to
the "dream mothership" bot that Stability AI provided on its Discord
server. Unlike the txt2img.py and img2img.py scripts provided in the
original CompVis/stable-diffusion source code repository, the
time-consuming initialization of the AI model happens only once. After
that, image generation from the command-line interface is very fast.
|
||||
- ## [Linux](docs/installation/INSTALL_LINUX.md)
|
||||
- ## [Windows](docs/installation/INSTALL_WINDOWS.md)
|
||||
- ## [Macintosh](docs/installation/INSTALL_MAC.md)
|
||||
|
||||
The script uses the readline library to allow for in-line editing,
|
||||
command history (up and down arrows), autocompletion, and more. To help
|
||||
keep track of which prompts generated which images, the script writes a
|
||||
log file of image names and prompts to the selected output directory.
|
||||
In addition, as of version 1.02, it also writes the prompt into the PNG
|
||||
file's metadata where it can be retrieved using scripts/images2prompt.py
|
||||
## **Hardware Requirements**
|
||||
|
||||
The script is confirmed to work on Linux, Windows and Mac
|
||||
systems. Note that this script runs from the command-line or can be used
|
||||
as a Web application. The Web GUI is currently rudimentary, but a much
|
||||
better replacement is on its way.
|
||||
**System**
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py
|
||||
* Initializing, be patient...
|
||||
Loading model from models/ldm/text2img-large/model.ckpt
|
||||
(...more initialization messages...)
|
||||
You will need one of the following:
|
||||
|
||||
* Initialization done! Awaiting your command...
|
||||
dream> ashley judd riding a camel -n2 -s150
|
||||
Outputs:
|
||||
outputs/img-samples/00009.png: "ashley judd riding a camel" -n2 -s150 -S 416354203
|
||||
outputs/img-samples/00010.png: "ashley judd riding a camel" -n2 -s150 -S 1362479620
|
||||
- An NVIDIA-based graphics card with 8 GB or more VRAM memory.
|
||||
- An Apple computer with an M1 chip.
|
||||
|
||||
dream> "there's a fly in my soup" -n6 -g
|
||||
outputs/img-samples/00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
|
||||
seeds for individual rows: [2685670268, 1216708065, 2335773498, 822223658, 714542046, 3395302430]
|
||||
dream> q
|
||||
**Memory**
|
||||
|
||||
# this shows how to retrieve the prompt stored in the saved image's metadata
|
||||
(ldm) ~/stable-diffusion$ python ./scripts/images2prompt.py outputs/img_samples/*.png
|
||||
00009.png: "ashley judd riding a camel" -s150 -S 416354203
|
||||
00010.png: "ashley judd riding a camel" -s150 -S 1362479620
|
||||
00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
|
||||
```
|
||||
- At least 12 GB Main Memory RAM.
|
||||
|
||||
<p align='center'>
|
||||
<img src="static/dream-py-demo.png"/>
|
||||
</p>
|
||||
**Disk**
|
||||
|
||||
The dream> prompt's arguments are pretty much identical to those used
|
||||
in the Discord bot, except you don't need to type "!dream" (it doesn't
|
||||
hurt if you do). A significant change is that creation of individual
|
||||
images is now the default unless --grid (-g) is given. For backward
|
||||
compatibility, the -i switch is recognized. For command-line help
|
||||
type -h (or --help) at the dream> prompt.
|
||||
|
||||
The script itself also recognizes a series of command-line switches
that will change important global defaults, such as the directory for
image outputs and the location of the model weight files.
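
For example, a sketch of launching with a different output directory might look like this; the `--outdir` switch name is an assumption rather than something confirmed by this README, so check `python3 scripts/dream.py --help` for the real switches:

```
# --outdir is assumed here for illustration only; run dream.py --help for the actual switch names
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --outdir /tmp/dream-outputs
```
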
## Hardware Requirements
|
||||
|
||||
You will need one of:
|
||||
|
||||
1. An NVIDIA-based graphics card with 8 GB or more of VRAM memory*.
|
||||
|
||||
2. An Apple computer with an M1 chip.**
|
||||
|
||||
3. At least 12 GB of main memory RAM.
|
||||
|
||||
4. At least 6 GB of free disk space for the machine learning model,
|
||||
python, and all its dependencies.
|
||||
|
||||
* If you have an NVIDIA 10xx series card (e.g. the 1080ti), please
run the dream script in full-precision mode as shown below.
|
||||
|
||||
** Similarly, specify full-precision mode on Apple M1 hardware.
|
||||
|
||||
To run in full-precision mode, start dream.py with the
|
||||
--full_precision flag:
|
||||
|
||||
~~~~
|
||||
(ldm) ~/stable-diffusion$ python scripts/dream.py --full_precision
|
||||
~~~~
|
||||
|
||||
## Image-to-Image
|
||||
|
||||
This script also provides an img2img feature that lets you seed your
|
||||
creations with an initial drawing or photo. This is a really cool
|
||||
feature that tells stable diffusion to build the prompt on top of the
|
||||
image you provide, preserving the original's basic shape and
|
||||
layout. To use it, provide the --init_img option as shown here:
|
||||
|
||||
```
|
||||
dream> "waterfall and rainbow" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
|
||||
```
|
||||
|
||||
The --init_img (-I) option gives the path to the seed
|
||||
picture. --strength (-f) controls how much the original will be
|
||||
modified, ranging from 0.0 (keep the original intact), to 1.0 (ignore
|
||||
the original completely). The default is 0.75, and values between
0.25 and 0.75 give interesting results.
|
||||
|
||||
You may also pass a -v<count> option to generate <count> variants of the
original image. This is done by passing the first generated image back
into img2img the requested number of times. It generates interesting
variants.
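
For example, a sketch combining an init image with `-v`, reusing the drawing from the example above:

```
dream> "waterfall and rainbow" --init_img=./init-images/crude_drawing.png --strength=0.6 -s100 -v3
```
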
|
||||
|
||||
If the initial image contains transparent regions, then Stable
|
||||
Diffusion will only draw within the transparent regions, a process
|
||||
called "inpainting". However, for this to work correctly, the color
|
||||
information underneath the transparent regions needs to be preserved, not
|
||||
erased. See [Creating Transparent Images for
|
||||
Inpainting](#creating-transparent-images-for-inpainting) for details.
|
||||
|
||||
## Seamless Tiling
|
||||
|
||||
The seamless tiling mode causes generated images to tile seamlessly
with themselves. To use it, add the --seamless option when starting the
script, which will cause all generated images to tile, or add it to an
individual dream> prompt as shown here:
|
||||
|
||||
```
|
||||
dream> "pond garden with lotus by claude monet" --seamless -s100 -n4
|
||||
```
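
To make tiling the default for the whole session instead, a sketch of starting the script with the flag looks like this:

```
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --seamless
```
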
|
||||
|
||||
## GFPGAN and Real-ESRGAN Support
|
||||
|
||||
The script also provides the ability to do face restoration and
|
||||
upscaling with the help of GFPGAN and Real-ESRGAN respectively.
|
||||
|
||||
To use this feature, clone the **[GFPGAN
|
||||
repository](https://github.com/TencentARC/GFPGAN)** and follow their
|
||||
installation instructions. By default, we expect GFPGAN to be
|
||||
installed in a 'GFPGAN' sibling directory. Be sure that the `"ldm"`
|
||||
conda environment is active as you install GFPGAN.
|
||||
|
||||
You can use the `--gfpgan_dir` argument with `dream.py` to set a
|
||||
custom path to your GFPGAN directory. _There are other GFPGAN related
|
||||
boot arguments if you wish to customize further._
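
For example, if GFPGAN lives somewhere other than the default sibling directory, a sketch of pointing the script at it would be (the path is illustrative):

```
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --gfpgan_dir /path/to/GFPGAN
```
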
|
||||
|
||||
You can install **Real-ESRGAN** by typing the following command.
|
||||
|
||||
```
|
||||
pip install realesrgan
|
||||
```
|
||||
|
||||
**Note: Internet connection needed:**
|
||||
Users whose GPU machines are isolated from the Internet (e.g. on a
|
||||
University cluster) should be aware that the first time you run
|
||||
dream.py with GFPGAN and Real-ESRGAN turned on, it will try to
|
||||
download model files from the Internet. To rectify this, you may run
|
||||
`python3 scripts/preload_models.py` after you have installed GFPGAN
|
||||
and all its dependencies.
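
In other words, while the machine still has connectivity, run the preload step once:

```
(ldm) ~/stable-diffusion$ python3 scripts/preload_models.py
```
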
|
||||
|
||||
**Usage**
|
||||
|
||||
You will now have access to two new prompt arguments.
|
||||
|
||||
**Upscaling**
|
||||
|
||||
`-U : <upscaling_factor> <upscaling_strength>`
|
||||
|
||||
The upscaling prompt argument takes two values. The first value is a
|
||||
scaling factor and should be set to either `2` or `4` only. This will
|
||||
either scale the image 2x or 4x respectively using different models.
|
||||
|
||||
You can set the scaling strength between `0` and `1.0` to control the
intensity of the scaling. This is handy because AI upscalers
generally tend to smooth out texture details. If you wish to retain
some of those for natural looking results, we recommend using values
between `0.5` and `0.8`.
|
||||
|
||||
If you do not explicitly specify an upscaling_strength, it will
|
||||
default to 0.75.
|
||||
|
||||
**Face Restoration**
|
||||
|
||||
`-G : <gfpgan_strength>`
|
||||
|
||||
This prompt argument controls the strength of the face restoration
that is being applied. Similar to upscaling, values between `0.5` and `0.8` are recommended.
|
||||
|
||||
You can use either one or both without any conflicts. In cases where
|
||||
you use both, the image will be first upscaled and then the face
|
||||
restoration process will be executed to ensure you get the highest
|
||||
quality facial features.
|
||||
|
||||
`--save_orig`
|
||||
|
||||
When you use either `-U` or `-G`, the final result you get is upscaled
or face modified. If you want to save the original Stable Diffusion
generation, you can use the `--save_orig` prompt argument to save the
original unaffected version too.
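
A sketch of combining all three switches in a single prompt (the prompt itself is just an example):

```
dream> portrait of a lighthouse keeper -U 2 0.6 -G 0.5 --save_orig
```
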
|
||||
|
||||
**Example Usage**
|
||||
|
||||
```
|
||||
dream> superman dancing with a panda bear -U 2 0.6 -G 0.4
|
||||
```
|
||||
|
||||
This also works with img2img:
|
||||
|
||||
```
|
||||
dream> a man wearing a pineapple hat -I path/to/your/file.png -U 2 0.5 -G 0.6
|
||||
```
|
||||
- At least 6 GB of free disk space for the machine learning model, Python, and all its dependencies.
|
||||
|
||||
**Note**
|
||||
|
||||
GFPGAN and Real-ESRGAN are both memory intensive. In order to avoid
|
||||
crashes and memory overloads during the Stable Diffusion process,
|
||||
these effects are applied after Stable Diffusion has completed its
|
||||
work.
|
||||
If you have an NVIDIA 10xx series card (e.g. the 1080ti), please run the dream script in full-precision mode as shown below.
|
||||
|
||||
In single image generations, you will see the output right away but
|
||||
when you are using multiple iterations, the images will first be
|
||||
generated and then upscaled and face restored after that process is
|
||||
complete. While the image generation is taking place, you will still
|
||||
be able to preview the base images.
|
||||
Similarly, specify full-precision mode on Apple M1 hardware.
|
||||
|
||||
If you wish to stop during the image generation but want to upscale or
|
||||
face restore a particular generated image, pass it again with the same
|
||||
prompt and generated seed along with the `-U` and `-G` prompt
|
||||
arguments to perform those actions.
|
||||
|
||||
## Google Colab
|
||||
|
||||
Stable Diffusion AI Notebook: <a href="https://colab.research.google.com/github/lstein/stable-diffusion/blob/main/Stable_Diffusion_AI_Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <br>
|
||||
Open and follow instructions to use an isolated environment running Dream.<br>
|
||||
|
||||
Output example:
|
||||
![Colab Notebook](static/colab_notebook.png)
|
||||
|
||||
## Barebones Web Server
|
||||
|
||||
As of version 1.10, this distribution comes with a bare bones web
|
||||
server (see screenshot). To use it, run the _dream.py_ script by
|
||||
adding the **--web** option.
|
||||
To run in full-precision mode, start `dream.py` with the `--full_precision` flag:
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --web
|
||||
(ldm) ~/stable-diffusion$ python scripts/dream.py --full_precision
|
||||
```
|
||||
|
||||
You can then connect to the server by pointing your web browser at
|
||||
http://localhost:9090, or to the network name or IP address of the server.
|
||||
# Features
|
||||
|
||||
Kudos to [Tesseract Cat](https://github.com/TesseractCat) for
|
||||
contributing this code, and to [dagf2101](https://github.com/dagf2101)
|
||||
for refining it.
|
||||
## **Major Features**
|
||||
|
||||
![Dream Web Server](static/dream_web_server.png)
|
||||
- ## [Interactive Command Line Interface](docs/features/CLI.md)
|
||||
|
||||
## Reading Prompts from a File
|
||||
- ## [Image To Image](docs/features/IMG2IMG.md)
|
||||
|
||||
You can automate dream.py by providing a text file with the prompts
|
||||
you want to run, one line per prompt. The text file must be composed
|
||||
with a text editor (e.g. Notepad) and not a word processor. Each line
|
||||
should look like what you would type at the dream> prompt:
|
||||
- ## [GFPGAN and Real-ESRGAN Support](docs/features/UPSCALE.md)
|
||||
|
||||
```
|
||||
a beautiful sunny day in the park, children playing -n4 -C10
|
||||
stormy weather on a mountain top, goats grazing -s100
|
||||
innovative packaging for a squid's dinner -S137038382
|
||||
```
|
||||
- ## [Seamless Tiling](docs/features/OTHER.md#seamless-tiling)
|
||||
|
||||
Then pass this file's name to dream.py when you invoke it:
|
||||
- ## [Google Colab](docs/features/OTHER.md#google-colab)
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --from_file "path/to/prompts.txt"
|
||||
```
|
||||
- ## [Web Server](docs/features/WEB.md)
|
||||
|
||||
You may read a series of prompts from standard input by providing a filename of "-":
|
||||
- ## [Reading Prompts From File](docs/features/OTHER.md#reading-prompts-from-a-file)
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ echo "a beautiful day" | python3 scripts/dream.py --from_file -
|
||||
```
|
||||
- ## [Shortcut: Reusing Seeds](docs/features/OTHER.md#shortcuts-reusing-seeds)
|
||||
|
||||
## Shortcut for reusing seeds from the previous command
|
||||
- ## [Weighted Prompts](docs/features/OTHER.md#weighted-prompts)
|
||||
|
||||
Since it is so common to reuse seeds while refining a prompt, there is
|
||||
now a shortcut as of version 1.11. Provide a **-S** (or **--seed**)
|
||||
switch of -1 to use the seed of the most recent image generated. If
|
||||
you produced multiple images with the **-n** switch, then you can go
|
||||
back further using -2, -3, etc. up to the first image generated by the
|
||||
previous command. Sorry, but you can't go back further than one
|
||||
command.
|
||||
- ## [Variations](docs/features/VARIATIONS.md)
|
||||
|
||||
Here's an example of using this to do a quick refinement. It also
|
||||
illustrates using the new **-G** switch to turn on
face enhancement (see previous section):
|
||||
- ## [Personalizing Text-to-Image Generation](docs/features/TEXTUAL_INVERSION.md)
|
||||
|
||||
```
|
||||
dream> a cute child playing hopscotch -G0.5
|
||||
[...]
|
||||
outputs/img-samples/000039.3498014304.png: "a cute child playing hopscotch" -s50 -W512 -H512 -C7.5 -mk_lms -S3498014304
|
||||
- ## [Simplified API for text to image generation](docs/features/OTHER.md#simplified-api)
|
||||
|
||||
# I wonder what it will look like if I bump up the steps and set facial enhancement to full strength?
|
||||
dream> a cute child playing hopscotch -G1.0 -s100 -S -1
|
||||
reusing previous seed 3498014304
|
||||
[...]
|
||||
outputs/img-samples/000040.3498014304.png: "a cute child playing hopscotch" -G1.0 -s100 -W512 -H512 -C7.5 -mk_lms -S3498014304
|
||||
```
|
||||
## **Other Features**
|
||||
|
||||
## Weighted Prompts
|
||||
- ### [Creating Transparent Regions for Inpainting](docs/features/INPAINTING.md#creating-transparent-regions-for-inpainting)
|
||||
|
||||
You may weight different sections of the prompt to tell the sampler to attach different levels of
|
||||
priority to them, by adding :(number) to the end of the section you wish to up- or downweight.
|
||||
For example, consider this prompt:
|
||||
|
||||
```
|
||||
tabby cat:0.25 white duck:0.75 hybrid
|
||||
```
|
||||
|
||||
This will tell the sampler to invest 25% of its effort on the tabby
|
||||
cat aspect of the image and 75% on the white duck aspect
|
||||
(surprisingly, this example actually works). The prompt weights can
|
||||
use any combination of integers and floating point numbers, and they
|
||||
do not need to add up to 1.
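
Since the weights do not need to add up to 1, integer weights work too; assuming they are interpreted relative to their sum, this sketch is equivalent to the 0.25/0.75 example above:

```
dream> "tabby cat:1 white duck:3 hybrid" -s50
```
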
|
||||
|
||||
## Personalizing Text-to-Image Generation
|
||||
|
||||
You may personalize the generated images to provide your own styles or objects by training a new LDM checkpoint
|
||||
and introducing a new vocabulary to the fixed model.
|
||||
|
||||
To train, prepare a folder that contains images sized at 512x512 and execute the following:
|
||||
|
||||
WINDOWS: As the default backend is not available on Windows, if you're using that platform, set the environment variable `PL_TORCH_DISTRIBUTED_BACKEND=gloo`
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 ./main.py --base ./configs/stable-diffusion/v1-finetune.yaml \
|
||||
-t \
|
||||
--actual_resume ./models/ldm/stable-diffusion-v1/model.ckpt \
|
||||
-n my_cat \
|
||||
--gpus 0, \
|
||||
--data_root D:/textual-inversion/my_cat \
|
||||
--init_word 'cat'
|
||||
```
|
||||
|
||||
During the training process, files will be created in /logs/[project][time][project]/
where you can monitor the progress.
|
||||
|
||||
- conditioning\* contains the training prompts
- inputs and reconstruction contain the input images for the training epoch
- samples and samples scaled contain a sample generated from the prompt and one generated with the init word provided
|
||||
|
||||
On an RTX 3090, the process for SD will take ~1h @ 1.6 iterations/sec.
|
||||
|
||||
Note: According to the associated paper, the optimal number of images
|
||||
is 3-5. Your model may not converge if you use more images than that.
|
||||
|
||||
Training will run indefinitely, but you may wish to stop it before the
heat death of the universe, when you find a low-loss epoch or around
~5000 iterations.
|
||||
|
||||
Once the model is trained, specify the trained .pt file when starting
|
||||
dream using
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py --embedding_path /path/to/embedding.pt --full_precision
|
||||
```
|
||||
|
||||
Then, to utilize your subject at the dream prompt
|
||||
|
||||
```
|
||||
dream> "a photo of *"
|
||||
```
|
||||
|
||||
This also works with img2img:
|
||||
|
||||
```
|
||||
dream> "waterfall and rainbow in the style of *" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
|
||||
```
|
||||
|
||||
It's also possible to train multiple tokens (modify the placeholder string in configs/stable-diffusion/v1-finetune.yaml) and combine LDM checkpoints using:
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 ./scripts/merge_embeddings.py \
|
||||
--manager_ckpts /path/to/first/embedding.pt /path/to/second/embedding.pt [...] \
|
||||
--output_path /path/to/output/embedding.pt
|
||||
```
|
||||
|
||||
Credit goes to @rinongal and the repository located at
|
||||
https://github.com/rinongal/textual_inversion. Please see the
|
||||
repository and associated paper for details and limitations.
|
||||
- ### [Preload Models](docs/features/OTHER.md#preload-models)
|
||||
|
||||
# Latest Changes
|
||||
|
||||
@@ -454,7 +106,7 @@ repository and associated paper for details and limitations.
|
||||
|
||||
- v1.13 (3 September 2022)
|
||||
|
||||
- Support image variations (see [VARIATIONS](VARIATIONS.md)) ([Kevin Gibbons](https://github.com/bakkot) and many contributors and reviewers)
|
||||
- Support image variations (see [VARIATIONS](docs/features/VARIATIONS.md)) ([Kevin Gibbons](https://github.com/bakkot) and many contributors and reviewers)
|
||||
- Supports a Google Colab notebook for a standalone server running on Google hardware [Arturo Mendivil](https://github.com/artmen1516)
|
||||
- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling [Kevin Gibbons](https://github.com/bakkot)
|
||||
- WebUI supports incremental display of in-progress images during generation [Kevin Gibbons](https://github.com/bakkot)
|
||||
@@ -465,418 +117,22 @@ repository and associated paper for details and limitations.
|
||||
- Works on M1 Apple hardware.
|
||||
- Multiple bug fixes.
|
||||
|
||||
For older changelogs, please visit **[CHANGELOGS](CHANGELOG.md)**.
|
||||
|
||||
# Installation
|
||||
|
||||
There are separate installation walkthroughs for [Linux](#linux), [Windows](#windows) and [Macintosh](#Macintosh)
|
||||
|
||||
## Linux
|
||||
|
||||
1. You will need to install the following prerequisites if they are not already available. Use your
|
||||
operating system's preferred installer
|
||||
|
||||
- Python (version 3.8.5 recommended; higher may work)
|
||||
- git
|
||||
|
||||
2. Install the Python Anaconda environment manager.
|
||||
|
||||
```
|
||||
~$ wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
|
||||
~$ chmod +x Anaconda3-2022.05-Linux-x86_64.sh
|
||||
~$ ./Anaconda3-2022.05-Linux-x86_64.sh
|
||||
```
|
||||
|
||||
After installing anaconda, you should log out of your system and log back in. If the installation
|
||||
worked, your command prompt will be prefixed by the name of the current anaconda environment, "(base)".
|
||||
|
||||
3. Copy the stable-diffusion source code from GitHub:
|
||||
|
||||
```
|
||||
(base) ~$ git clone https://github.com/lstein/stable-diffusion.git
|
||||
```
|
||||
|
||||
This will create a stable-diffusion folder where you will follow the rest of the steps.
|
||||
|
||||
4. Enter the newly-created stable-diffusion folder. From this step forward make sure that you are working in the stable-diffusion directory!
|
||||
|
||||
```
|
||||
(base) ~$ cd stable-diffusion
|
||||
(base) ~/stable-diffusion$
|
||||
```
|
||||
|
||||
5. Use anaconda to install the necessary python packages, create a new python environment named "ldm",
and activate the environment.
|
||||
|
||||
```
|
||||
(base) ~/stable-diffusion$ conda env create -f environment.yaml
|
||||
(base) ~/stable-diffusion$ conda activate ldm
|
||||
(ldm) ~/stable-diffusion$
|
||||
```
|
||||
|
||||
After these steps, your command prompt will be prefixed by "(ldm)" as shown above.
|
||||
|
||||
Note: If necessary, you can update the environment via this command:
|
||||
|
||||
```sh
|
||||
(ldm) ~/stable-diffusion$ conda env update --file environment.yaml
|
||||
```
|
||||
|
||||
6. Load a couple of small machine-learning models required by stable diffusion:
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/preload_models.py
|
||||
```
|
||||
|
||||
Note that this step is necessary because I modified the original
|
||||
just-in-time model loading scheme to allow the script to work on GPU
|
||||
machines that are not internet connected. See [Workaround for machines with limited internet connectivity](#workaround-for-machines-with-limited-internet-connectivity)
|
||||
|
||||
7. Now you need to install the weights for the stable diffusion model.
|
||||
|
||||
For running with the released weights, you will first need to set up an account with Hugging Face (https://huggingface.co).
|
||||
Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original.
|
||||
You may be asked to sign a license agreement at this point.
|
||||
|
||||
Click on "Files and versions" near the top of the page, and then click on the file named "sd-v1-4.ckpt". You'll be taken
|
||||
to a page that prompts you to click the "download" link. Save the file somewhere safe on your local machine.
|
||||
|
||||
Now run the following commands from within the stable-diffusion directory. This will create a symbolic
|
||||
link from the stable-diffusion model.ckpt file, to the true location of the sd-v1-4.ckpt file.
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ mkdir -p models/ldm/stable-diffusion-v1
|
||||
(ldm) ~/stable-diffusion$ ln -sf /path/to/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
|
||||
```
|
||||
|
||||
8. Start generating images!
|
||||
|
||||
```
|
||||
# for the pre-release weights use the -l or --laion400m switch
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/dream.py -l
|
||||
|
||||
# for the post-release weights do not use the switch
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/dream.py
|
||||
|
||||
# for additional configuration switches and arguments, use -h or --help
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/dream.py -h
|
||||
```
|
||||
|
||||
9. Subsequently, to relaunch the script, be sure to run "conda activate ldm" (step 5, second command), enter the "stable-diffusion"
|
||||
directory, and then launch the dream script (step 8). If you forget to activate the ldm environment, the script will fail with multiple ModuleNotFound errors.
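
Put together, a typical relaunch looks like this:

```
(base) ~$ conda activate ldm
(ldm) ~$ cd stable-diffusion
(ldm) ~/stable-diffusion$ python3 scripts/dream.py
```
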
|
||||
|
||||
### Updating to newer versions of the script
|
||||
|
||||
This distribution is changing rapidly. If you used the "git clone" method (step 3) to download the stable-diffusion directory, then to update to the latest and greatest version, activate the ldm environment, enter the stable-diffusion directory, and type:
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ git pull
|
||||
```
|
||||
|
||||
This will bring your local copy into sync with the remote one.
|
||||
|
||||
## Windows
|
||||
|
||||
### Notebook install (semi-automated)
|
||||
|
||||
We have a
|
||||
[Jupyter notebook](https://github.com/lstein/stable-diffusion/blob/main/Stable-Diffusion-local-Windows.ipynb)
|
||||
with cell-by-cell installation steps. It will download the code in this repo as
|
||||
one of the steps, so instead of cloning this repo, simply download the notebook
|
||||
from the link above and load it up in VSCode (with the
|
||||
appropriate extensions installed)/Jupyter/JupyterLab and start running the cells one-by-one.
|
||||
|
||||
Note that you will need NVIDIA drivers, Python 3.10, and Git installed
|
||||
beforehand - simplified
|
||||
[step-by-step instructions](https://github.com/lstein/stable-diffusion/wiki/Easy-peasy-Windows-install)
|
||||
are available in the wiki (you'll only need steps 1, 2, & 3 ).
|
||||
|
||||
### Manual installs
|
||||
|
||||
#### pip
|
||||
|
||||
See
|
||||
[Easy-peasy Windows install](https://github.com/lstein/stable-diffusion/wiki/Easy-peasy-Windows-install)
|
||||
in the wiki
|
||||
|
||||
#### Conda
|
||||
|
||||
1. Install Anaconda3 (miniconda3 version) from here: https://docs.anaconda.com/anaconda/install/windows/
|
||||
|
||||
2. Install Git from here: https://git-scm.com/download/win
|
||||
|
||||
3. Launch Anaconda from the Windows Start menu. This will bring up a command window. Type all the remaining commands in this window.
|
||||
|
||||
4. Run the command:
|
||||
|
||||
```
|
||||
git clone https://github.com/lstein/stable-diffusion.git
|
||||
```
|
||||
|
||||
This will create a stable-diffusion folder where you will follow the rest of the steps.
|
||||
|
||||
5. Enter the newly-created stable-diffusion folder. From this step forward make sure that you are working in the stable-diffusion directory!
|
||||
|
||||
```
|
||||
cd stable-diffusion
|
||||
```
|
||||
|
||||
6. Run the following two commands:
|
||||
|
||||
```
|
||||
conda env create -f environment.yaml (step 6a)
|
||||
conda activate ldm (step 6b)
|
||||
```
|
||||
|
||||
This will install all python requirements and activate the "ldm" environment which sets PATH and other environment variables properly.
|
||||
|
||||
7. Run the command:
|
||||
|
||||
```
|
||||
python scripts\preload_models.py
|
||||
```
|
||||
|
||||
This installs several machine learning models that stable diffusion
|
||||
requires. (Note that this step is required. I created it because some people
|
||||
are using GPU systems that are behind a firewall and the models can't be
|
||||
downloaded just-in-time)
|
||||
|
||||
8. Now you need to install the weights for the big stable diffusion model.
|
||||
|
||||
For running with the released weights, you will first need to set up
|
||||
an account with Hugging Face (https://huggingface.co). Use your
|
||||
credentials to log in, and then point your browser at
|
||||
https://huggingface.co/CompVis/stable-diffusion-v-1-4-original. You
|
||||
may be asked to sign a license agreement at this point.
|
||||
|
||||
Click on "Files and versions" near the top of the page, and then click
|
||||
on the file named "sd-v1-4.ckpt". You'll be taken to a page that
|
||||
prompts you to click the "download" link. Now save the file somewhere
|
||||
safe on your local machine. The weight file is >4 GB in size, so
|
||||
downloading may take a while.
|
||||
|
||||
Now run the following commands from **within the stable-diffusion
|
||||
directory** to copy the weights file to the right place:
|
||||
|
||||
```
|
||||
mkdir -p models\ldm\stable-diffusion-v1
|
||||
copy C:\path\to\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt
|
||||
```
|
||||
|
||||
Please replace "C:\path\to\sd-v1-4.ckpt" with the correct path to wherever
|
||||
you stashed this file. If you prefer not to copy or move the .ckpt file,
|
||||
you may instead create a shortcut to it from within
|
||||
"models\ldm\stable-diffusion-v1\".
|
||||
|
||||
9. Start generating images!
|
||||
|
||||
```
|
||||
# for the pre-release weights
|
||||
python scripts\dream.py -l
|
||||
|
||||
# for the post-release weights
|
||||
python scripts\dream.py
|
||||
```
|
||||
|
||||
10. Subsequently, to relaunch the script, first activate the Anaconda
|
||||
command window (step 3), enter the stable-diffusion directory (step 5,
|
||||
"cd \path\to\stable-diffusion"), run "conda activate ldm" (step 6b),
|
||||
and then launch the dream script (step 9).
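
In other words, a typical relaunch from a fresh Anaconda window looks like this:

```
cd \path\to\stable-diffusion
conda activate ldm
python scripts\dream.py
```
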
|
||||
|
||||
**Note:** Tildebyte has written an alternative ["Easy peasy Windows
|
||||
install"](https://github.com/lstein/stable-diffusion/wiki/Easy-peasy-Windows-install)
|
||||
which uses the Windows Powershell and pew. If you are having trouble
|
||||
with Anaconda on Windows, give this a try (or try it first!)
|
||||
|
||||
### Updating to newer versions of the script
|
||||
|
||||
This distribution is changing rapidly. If you used the "git clone"
|
||||
method (step 4) to download the stable-diffusion directory, then to
|
||||
update to the latest and greatest version, launch the Anaconda window,
|
||||
enter "stable-diffusion", and type:
|
||||
|
||||
```
|
||||
git pull
|
||||
```
|
||||
|
||||
This will bring your local copy into sync with the remote one.
|
||||
|
||||
## Macintosh
|
||||
|
||||
See [README-Mac-MPS](README-Mac-MPS.md) for instructions.
|
||||
|
||||
# Simplified API for text to image generation
|
||||
|
||||
For programmers who wish to incorporate stable-diffusion into other
|
||||
products, this repository includes a simplified API for text to image
|
||||
generation, which lets you create images from a prompt in just three
|
||||
lines of code:
|
||||
|
||||
```
|
||||
from ldm.simplet2i import T2I
|
||||
model = T2I()
|
||||
outputs = model.txt2img("a unicorn in manhattan")
|
||||
```
|
||||
|
||||
Outputs is a list of lists in the format [[filename1,seed1],[filename2,seed2],...].
|
||||
Please see ldm/simplet2i.py for more information. A set of example scripts is
|
||||
coming RSN.
|
||||
|
||||
# Workaround for machines with limited internet connectivity
|
||||
|
||||
My development machine is a GPU node in a high-performance compute
|
||||
cluster which has no connection to the internet. During model
|
||||
initialization, stable-diffusion tries to download the Bert tokenizer
|
||||
and a file needed by the kornia library. This obviously didn't work
|
||||
for me.
|
||||
|
||||
To work around this, I have modified ldm/modules/encoders/modules.py
|
||||
to look for locally cached Bert files rather than attempting to
|
||||
download them. For this to work, you must run
|
||||
"scripts/preload_models.py" once from an internet-connected machine
|
||||
prior to running the code on an isolated one. This assumes that both
|
||||
machines share a common network-mounted filesystem with a common
|
||||
.cache directory.
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 ./scripts/preload_models.py
|
||||
preloading bert tokenizer...
|
||||
Downloading: 100%|██████████████████████████████████| 28.0/28.0 [00:00<00:00, 49.3kB/s]
|
||||
Downloading: 100%|██████████████████████████████████| 226k/226k [00:00<00:00, 2.79MB/s]
|
||||
Downloading: 100%|██████████████████████████████████| 455k/455k [00:00<00:00, 4.36MB/s]
|
||||
Downloading: 100%|██████████████████████████████████| 570/570 [00:00<00:00, 477kB/s]
|
||||
...success
|
||||
preloading kornia requirements...
|
||||
Downloading: "https://github.com/DagnyT/hardnet/raw/master/pretrained/train_liberty_with_aug/checkpoint_liberty_with_aug.pth" to /u/lstein/.cache/torch/hub/checkpoints/checkpoint_liberty_with_aug.pth
|
||||
100%|███████████████████████████████████████████████| 5.10M/5.10M [00:00<00:00, 101MB/s]
|
||||
...success
|
||||
```
|
||||
For older changelogs, please visit **[CHANGELOGS](docs/CHANGELOG.md)**.
|
||||
|
||||
# Troubleshooting
|
||||
|
||||
Here are a few common installation problems and their solutions. Often
|
||||
these are caused by incomplete installations or crashes during the
|
||||
install process.
|
||||
|
||||
- PROBLEM: During "conda env create -f environment.yaml", conda
|
||||
hangs indefinitely.
|
||||
|
||||
- SOLUTION: Enter the stable-diffusion directory and completely
|
||||
remove the "src" directory and all its contents. The safest way
|
||||
to do this is to enter the stable-diffusion directory and
|
||||
give the command "git clean -f". If this still doesn't fix
|
||||
the problem, try "conda clean -all" and then restart at the
|
||||
"conda env create" step.
|
||||
|
||||
---
|
||||
|
||||
- PROBLEM: dream.py crashes with the complaint that it can't find
|
||||
ldm.simplet2i.py. Or it complains that a function is being passed
|
||||
incorrect parameters.
|
||||
|
||||
- SOLUTION: Reinstall the stable diffusion modules. Enter the
|
||||
stable-diffusion directory and give the command "pip install -e ."
|
||||
|
||||
---
|
||||
|
||||
- PROBLEM: dream.py dies, complaining of various missing modules, none
|
||||
of which starts with "ldm".
|
||||
|
||||
- SOLUTION: From within the stable-diffusion directory, run "conda env
|
||||
update -f environment.yaml" This is also frequently the solution to
|
||||
complaints about an unknown function in a module.
|
||||
|
||||
---
|
||||
|
||||
- PROBLEM: There's a feature or bugfix in the Stable Diffusion GitHub
|
||||
that you want to try out.
|
||||
|
||||
- SOLUTION: If the fix/feature is on the "main" branch, enter the stable-diffusion
|
||||
directory and do a "git pull". Usually this will be sufficient, but if
|
||||
you start to see errors about missing or incorrect modules, use the
|
||||
command "pip install -e ." and/or "conda env update -f environment.yaml"
|
||||
(These commands won't break anything.)
|
||||
|
||||
- If the feature/fix is on a branch (e.g. "foo-bugfix"), the recipe is similar, but
do a "git pull origin <name of branch>".
|
||||
|
||||
- If the feature/fix is in a pull request that has not yet been made
|
||||
part of the main branch or a feature/bugfix branch, then from the page
|
||||
for the desired pull request, look for the line at the top that reads
|
||||
"xxxx wants to merge xx commits into lstein:main from YYYYYY". Copy
|
||||
the URL in YYYYYY. It should have the format
|
||||
https://github.com/<name of contributor>/stable-diffusion/tree/<name
|
||||
of branch>
|
||||
|
||||
- Then **go to the directory above stable-diffusion**, and rename the
|
||||
directory to "stable-diffusion.lstein", "stable-diffusion.old", or
|
||||
whatever. You can then git clone the branch that contains the
|
||||
pull request:
|
||||
|
||||
```
|
||||
git clone -b <name of branch> https://github.com/<name of contributor>/stable-diffusion.git
|
||||
```
|
||||
|
||||
You will need to go through the install procedure again, but it should
|
||||
be fast because all the dependencies are already loaded.
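
As a sketch, and assuming the "ldm" environment from your previous install is still present, the abbreviated reinstall amounts to the familiar steps from the installation section:

```
(base) ~$ cd stable-diffusion
(base) ~/stable-diffusion$ conda env update -f environment.yaml
(base) ~/stable-diffusion$ conda activate ldm
(ldm) ~/stable-diffusion$ pip install -e .
(ldm) ~/stable-diffusion$ python3 scripts/preload_models.py
```
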
|
||||
|
||||
# Creating Transparent Regions for Inpainting
|
||||
|
||||
Inpainting is really cool. To do it, you start with an initial image
|
||||
and use a photoeditor to make one or more regions transparent
|
||||
(i.e. they have a "hole" in them). You then provide the path to this
|
||||
image at the dream> command line using the -I switch. Stable Diffusion
|
||||
will only paint within the transparent region.
|
||||
|
||||
There's a catch. In the current implementation, you have to prepare
|
||||
the initial image correctly so that the underlying colors are
|
||||
preserved under the transparent area. Many image editing
|
||||
applications will by default erase the color information under the
|
||||
transparent pixels and replace them with white or black, which will
|
||||
lead to suboptimal inpainting. You also must take care to export the
|
||||
PNG file in such a way that the color information is preserved.
|
||||
|
||||
If your photoeditor is erasing the underlying color information,
|
||||
dream.py will give you a big fat warning. If you can't find a way to
|
||||
coax your photoeditor to retain color values under transparent areas,
|
||||
then you can combine the -I and -M switches to provide both the
|
||||
original unedited image and the masked (partially transparent) image:
|
||||
|
||||
~~~~
|
||||
dream> man with cat on shoulder -I./images/man.png -M./images/man-transparent.png
|
||||
~~~~
|
||||
|
||||
We are hoping to get rid of the need for this workaround in an
|
||||
upcoming release.
|
||||
|
||||
## Recipe for GIMP
|
||||
|
||||
GIMP is a popular Linux photoediting tool.
|
||||
|
||||
1. Open image in GIMP.
2. Layer->Transparency->Add Alpha Channel
3. Use lasso tool to select region to mask
4. Choose Select -> Float to create a floating selection
5. Open the Layers toolbar (^L) and select "Floating Selection"
6. Set opacity to 0%
7. Export as PNG
8. In the export dialogue, make sure the "Save colour values from
transparent pixels" checkbox is selected.
|
||||
Please check out our **[Q&A](docs/help/TROUBLESHOOT.md)** to get solutions for common installation problems and other issues.
|
||||
|
||||
# Contributing
|
||||
|
||||
Anyone who wishes to contribute to this project, whether
|
||||
documentation, features, bug fixes, code cleanup, testing, or code
|
||||
reviews, is very much encouraged to do so. If you are unfamiliar with
|
||||
how to contribute to GitHub projects, here is a [Getting Started
|
||||
Guide](https://opensource.com/article/19/7/create-pull-request-github).
|
||||
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with
|
||||
how to contribute to GitHub projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
|
||||
|
||||
A full set of contribution guidelines, along with templates, are in
|
||||
progress, but for now the most important thing is to **make your pull
|
||||
request against the "development" branch**, and not against
|
||||
"main". This will help keep public breakage to a minimum and will
|
||||
allow you to propose more radical changes.
|
||||
A full set of contribution guidelines, along with templates, are in progress, but for now the most important thing is to **make your pull request against the "development" branch**, and not against "main". This will help keep public breakage to a minimum and will allow you to propose more radical changes.
|
||||
|
||||
## **Contributors**
|
||||
|
||||
This fork is a combined effort of various people from across the world. [Check out the list of all these amazing people](docs/CONTRIBUTORS.md). We thank them for their time, hard work and effort.
|
||||
|
||||
# Support
|
||||
|
||||
@@ -884,22 +140,9 @@ For support,
|
||||
please use this repository's GitHub Issues tracking service. Feel free
|
||||
to send me an email if you use and like the script.
|
||||
|
||||
_Original Author:_ Lincoln D. Stein <lincoln.stein@gmail.com>
|
||||
|
||||
_Contributions by:_
|
||||
[Peter Kowalczyk](https://github.com/slix), [Henry Harrison](https://github.com/hwharrison),
|
||||
[xraxra](https://github.com/xraxra), [bmaltais](https://github.com/bmaltais), [Sean McLellan](https://github.com/Oceanswave),
|
||||
[nicolai256](https://github.com/nicolai256), [Benjamin Warner](https://github.com/warner-benjamin),
|
||||
[tildebyte](https://github.com/tildebyte), [yunsaki](https://github.com/yunsaki), [James Reynolds](https://github.com/magnusviri),
|
||||
[Tesseract Cat](https://github.com/TesseractCat), and many more!
|
||||
|
||||
(If you have contributed and don't see your name on the list of
|
||||
contributors, please let lstein know about the omission, or make a
|
||||
pull request)
|
||||
|
||||
Original portions of the software are Copyright (c) 2020 Lincoln D. Stein (https://github.com/lstein)
|
||||
|
||||
# Further Reading
|
||||
|
||||
Please see the original README for more information on this software
|
||||
and underlying algorithm, located in the file [README-CompViz.md](README-CompViz.md).
|
||||
and underlying algorithm, located in the file [README-CompViz.md](docs/README-CompViz.md).
|
||||
|
@@ -134,4 +134,4 @@
|
||||
|
||||
## Links
|
||||
|
||||
- **[Read Me](readme.md)**
|
||||
- **[Read Me](../readme.md)**
|
docs/CONTRIBUTORS.md
@@ -0,0 +1,61 @@
|
||||
# Contributors
|
||||
|
||||
The list of all the amazing people who have contributed to the various features that you get to experience in this fork.
|
||||
|
||||
We thank them for all of their time and hard work.
|
||||
|
||||
_Original Author:_
|
||||
|
||||
- Lincoln D. Stein <lincoln.stein@gmail.com>
|
||||
|
||||
_Contributions by:_
|
||||
|
||||
- [Sean McLellan](https://github.com/Oceanswave)
|
||||
- [Kevin Gibbons](https://github.com/bakkot)
|
||||
- [Tesseract Cat](https://github.com/TesseractCat)
|
||||
- [blessedcoolant](https://github.com/blessedcoolant)
|
||||
- [David Ford](https://github.com/david-ford)
|
||||
- [yunsaki](https://github.com/yunsaki)
|
||||
- [James Reynolds](https://github.com/magnusviri)
|
||||
- [David Wager](https://github.com/maddavid123)
|
||||
- [Jason Toffaletti](https://github.com/toffaletti)
|
||||
- [tildebyte](https://github.com/tildebyte)
|
||||
- [Cragin Godley](https://github.com/cgodley)
|
||||
- [BlueAmulet](https://github.com/BlueAmulet)
|
||||
- [Benjamin Warner](https://github.com/warner-benjamin)
|
||||
- [Cora Johnson-Roberson](https://github.com/corajr)
|
||||
- [veprogames](https://github.com/veprogames)
|
||||
- [JigenD](https://github.com/JigenD)
|
||||
- [Niek van der Maas](https://github.com/Niek)
|
||||
- [Henry van Megen](https://github.com/hvanmegen)
|
||||
- [Håvard Gulldahl](https://github.com/havardgulldahl)
|
||||
- [greentext2](https://github.com/greentext2)
|
||||
- [Simon Vans-Colina](https://github.com/simonvc)
|
||||
- [Gabriel Rotbart](https://github.com/gabrielrotbart)
|
||||
- [Eric Khun](https://github.com/erickhun)
|
||||
- [Brent Ozar](https://github.com/BrentOzar)
|
||||
- [nderscore](https://github.com/nderscore)
|
||||
- [Mikhail Tishin](https://github.com/tishin)
|
||||
- [Tom Elovi Spruce](https://github.com/ilovecomputers)
|
||||
- [spezialspezial](https://github.com/spezialspezial)
|
||||
- [Yosuke Shinya](https://github.com/shinya7y)
|
||||
- [Andy Pilate](https://github.com/Cubox)
|
||||
- [Muhammad Usama](https://github.com/SMUsamaShah)
|
||||
- [Arturo Mendivil](https://github.com/artmen1516)
|
||||
- [Paul Sajna](https://github.com/sajattack)
|
||||
- [Samuel Husso](https://github.com/shusso)
|
||||
- [nicolai256](https://github.com/nicolai256)
|
||||
|
||||
_Original CompVis Authors:_
|
||||
|
||||
- [Robin Rombach](https://github.com/rromb)
|
||||
- [Patrick von Platen](https://github.com/patrickvonplaten)
|
||||
- [ablattmann](https://github.com/ablattmann)
|
||||
- [Patrick Esser](https://github.com/pesser)
|
||||
- [owenvincent](https://github.com/owenvincent)
|
||||
- [apolinario](https://github.com/apolinario)
|
||||
- [Charles Packer](https://github.com/cpacker)
|
||||
|
||||
---
|
||||
|
||||
_If you have contributed and don't see your name on the list of contributors, please let one of the collaborators know about the omission, or feel free to make a pull request._
|
@@ -1,5 +1,6 @@
|
||||
# Original README from CompVis/stable-diffusion
|
||||
*Stable Diffusion was made possible thanks to a collaboration with [Stability AI](https://stability.ai/) and [Runway](https://runwayml.com/) and builds upon our previous work:*
|
||||
|
||||
_Stable Diffusion was made possible thanks to a collaboration with [Stability AI](https://stability.ai/) and [Runway](https://runwayml.com/) and builds upon our previous work:_
|
||||
|
||||
[**High-Resolution Image Synthesis with Latent Diffusion Models**](https://ommer-lab.com/research/latent-diffusion-models/)<br/>
|
||||
[Robin Rombach](https://github.com/rromb)\*,
|
||||
@@ -12,16 +13,15 @@
|
||||
|
||||
which is available on [GitHub](https://github.com/CompVis/latent-diffusion). PDF at [arXiv](https://arxiv.org/abs/2112.10752). Please also visit our [Project page](https://ommer-lab.com/research/latent-diffusion-models/).
|
||||
|
||||
![txt2img-stable2](assets/stable-samples/txt2img/merged-0006.png)
|
||||
![txt2img-stable2](../assets/stable-samples/txt2img/merged-0006.png)
|
||||
[Stable Diffusion](#stable-diffusion-v1) is a latent text-to-image diffusion
|
||||
model.
|
||||
Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database.
|
||||
Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487),
|
||||
Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database.
|
||||
Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487),
|
||||
this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts.
|
||||
With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM.
|
||||
See [this section](#stable-diffusion-v1) below and the [model card](https://huggingface.co/CompVis/stable-diffusion).
|
||||
|
||||
|
||||
## Requirements
|
||||
|
||||
A suitable [conda](https://conda.io/) environment named `ldm` can be created
|
||||
@@ -44,16 +44,16 @@ pip install -e .
|
||||
|
||||
Stable Diffusion v1 refers to a specific configuration of the model
|
||||
architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet
|
||||
and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and
|
||||
and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and
|
||||
then finetuned on 512x512 images.
|
||||
|
||||
*Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions that are present
|
||||
in its training data.
|
||||
\*Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions that are present
|
||||
in its training data.
|
||||
Details on the training procedure and data, as well as the intended use of the model can be found in the corresponding [model card](https://huggingface.co/CompVis/stable-diffusion).
|
||||
Research into the safe deployment of general text-to-image models is an ongoing effort. To prevent misuse and harm, we currently provide access to the checkpoints only for [academic research purposes upon request](https://stability.ai/academia-access-form).
|
||||
**This is an experiment in safe and community-driven publication of a capable and general text-to-image model. We are working on a public release with a more permissive license that also incorporates ethical considerations.***
|
||||
**This is an experiment in safe and community-driven publication of a capable and general text-to-image model. We are working on a public release with a more permissive license that also incorporates ethical considerations.\***
|
||||
|
||||
[Request access to Stable Diffusion v1 checkpoints for academic research](https://stability.ai/academia-access-form)
|
||||
[Request access to Stable Diffusion v1 checkpoints for academic research](https://stability.ai/academia-access-form)
|
||||
|
||||
### Weights
|
||||
|
||||
@@ -64,36 +64,37 @@ which were trained as follows,
|
||||
194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
|
||||
- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`.
|
||||
515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
|
||||
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
|
||||
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
|
||||
- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
|
||||
|
||||
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
|
||||
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
|
||||
steps show the relative improvements of the checkpoints:
|
||||
![sd evaluation results](assets/v1-variants-scores.jpg)
|
||||
|
||||
|
||||
![sd evaluation results](../assets/v1-variants-scores.jpg)
|
||||
|
||||
### Text-to-Image with Stable Diffusion
|
||||
![txt2img-stable2](assets/stable-samples/txt2img/merged-0005.png)
|
||||
![txt2img-stable2](assets/stable-samples/txt2img/merged-0007.png)
|
||||
|
||||
![txt2img-stable2](../assets/stable-samples/txt2img/merged-0005.png)
|
||||
![txt2img-stable2](../assets/stable-samples/txt2img/merged-0007.png)
|
||||
|
||||
Stable Diffusion is a latent diffusion model conditioned on the (non-pooled) text embeddings of a CLIP ViT-L/14 text encoder.
|
||||
|
||||
|
||||
#### Sampling Script
|
||||
|
||||
After [obtaining the weights](#weights), link them
|
||||
|
||||
```
|
||||
mkdir -p models/ldm/stable-diffusion-v1/
|
||||
ln -s <path/to/model.ckpt> models/ldm/stable-diffusion-v1/model.ckpt
|
||||
```
|
||||
and sample with
|
||||
```
|
||||
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
|
||||
ln -s <path/to/model.ckpt> models/ldm/stable-diffusion-v1/model.ckpt
|
||||
```
|
||||
|
||||
By default, this uses a guidance scale of `--scale 7.5`, [Katherine Crowson's implementation](https://github.com/CompVis/latent-diffusion/pull/51) of the [PLMS](https://arxiv.org/abs/2202.09778) sampler,
|
||||
and sample with
|
||||
|
||||
```
|
||||
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
|
||||
```
|
||||
|
||||
By default, this uses a guidance scale of `--scale 7.5`, [Katherine Crowson's implementation](https://github.com/CompVis/latent-diffusion/pull/51) of the [PLMS](https://arxiv.org/abs/2202.09778) sampler,
|
||||
and renders images of size 512x512 (which it was trained on) in 50 steps. All supported arguments are listed below (type `python scripts/txt2img.py --help`).
|
||||
|
||||
```commandline
|
||||
@@ -131,73 +132,72 @@ optional arguments:
|
||||
evaluate at this precision
|
||||
|
||||
```
|
||||
Note: The inference config for all v1 versions is designed to be used with EMA-only checkpoints.
|
||||
|
||||
Note: The inference config for all v1 versions is designed to be used with EMA-only checkpoints.
|
||||
For this reason `use_ema=False` is set in the configuration, otherwise the code will try to switch from
|
||||
non-EMA to EMA weights. If you want to examine the effect of EMA vs no EMA, we provide "full" checkpoints
|
||||
which contain both types of weights. For these, `use_ema=False` will load and use the non-EMA weights.
|
||||
|
||||
|
||||
#### Diffusers Integration

Another way to download and sample Stable Diffusion is by using the [diffusers library](https://github.com/huggingface/diffusers/tree/main#new--stable-diffusion-is-now-fully-compatible-with-diffusers)

```py
# make sure you're logged in with `huggingface-cli login`
from torch import autocast
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-3-diffusers",
    use_auth_token=True
)

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt)["sample"][0]

image.save("astronaut_rides_horse.png")
```
|
||||
|
||||
|
||||
|
||||
### Image Modification with Stable Diffusion

By using a diffusion-denoising mechanism as first proposed by [SDEdit](https://arxiv.org/abs/2108.01073), the model can be used for different
tasks such as text-guided image-to-image translation and upscaling. Similar to the txt2img sampling script,
we provide a script to perform image modification with Stable Diffusion.

The following describes an example where a rough sketch made in [Pinta](https://www.pinta-project.com/) is converted into a detailed artwork.
```
python scripts/img2img.py --prompt "A fantasy landscape, trending on artstation" --init-img <path-to-img.jpg> --strength 0.8
```
Here, strength is a value between 0.0 and 1.0 that controls the amount of noise that is added to the input image.
Values that approach 1.0 allow for lots of variations but will also produce images that are not semantically consistent with the input. See the following example.
|
||||
|
||||
**Input**
|
||||
|
||||
![sketch-in](assets/stable-samples/img2img/sketch-mountains-input.jpg)
|
||||
![sketch-in](../assets/stable-samples/img2img/sketch-mountains-input.jpg)
|
||||
|
||||
**Outputs**
|
||||
|
||||
![out3](assets/stable-samples/img2img/mountains-3.png)
|
||||
![out2](assets/stable-samples/img2img/mountains-2.png)
|
||||
![out3](../assets/stable-samples/img2img/mountains-3.png)
|
||||
![out2](../assets/stable-samples/img2img/mountains-2.png)
|
||||
|
||||
This procedure can, for example, also be used to upscale samples from the base model.
|
||||
|
||||
|
||||
## Comments

- Our codebase for the diffusion models builds heavily on [OpenAI's ADM codebase](https://github.com/openai/guided-diffusion)
  and [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch).
  Thanks for open-sourcing!

- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories).
|
||||
|
||||
## BibTeX
|
||||
|
||||
```
|
||||
@misc{rombach2021highresolution,
title={High-Resolution Image Synthesis with Latent Diffusion Models},
author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},
|
||||
year={2021},
|
||||
eprint={2112.10752},
|
||||
@ -206,5 +206,3 @@ Thanks for open-sourcing!
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
|
50
docs/features/CLI.md
Normal file
@ -0,0 +1,50 @@
|
||||
# **Interactive Command-Line Interface**
|
||||
|
||||
The `dream.py` script, located in `scripts/dream.py`, provides an interactive interface to image generation similar to the "dream mothership" bot that Stable AI provided on its Discord server.
|
||||
|
||||
Unlike the txt2img.py and img2img.py scripts provided in the original CompViz/stable-diffusion source code repository, the time-consuming initialization of the AI model only happens once. After that, image generation
|
||||
from the command-line interface is very fast.
|
||||
|
||||
The script uses the readline library to allow for in-line editing, command history (up and down arrows), autocompletion, and more. To help keep track of which prompts generated which images, the script writes a log file of image names and prompts to the selected output directory.
|
||||
|
||||
In addition, as of version 1.02, it also writes the prompt into the PNG file's metadata, where it can be retrieved using `scripts/images2prompt.py`.
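If you want to inspect that metadata yourself, PIL exposes a PNG's text chunks through `Image.info`. A minimal sketch (the file path is taken from the example below; the exact key name the script uses is not assumed here):

```
from PIL import Image

# Illustrative only: dump whatever text metadata was stored in a generated PNG.
img = Image.open("outputs/img-samples/00009.png")
for key, value in img.info.items():
    print(f"{key}: {value}")
```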
|
||||
|
||||
The script is confirmed to work on Linux, Windows and Mac systems.
|
||||
|
||||
_Note:_ This script runs from the command-line or can be used as a Web application. The Web GUI is currently rudimentary, but a much better replacement is on its way.
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py
|
||||
* Initializing, be patient...
|
||||
Loading model from models/ldm/text2img-large/model.ckpt
|
||||
(...more initialization messages...)
|
||||
|
||||
* Initialization done! Awaiting your command...
|
||||
dream> ashley judd riding a camel -n2 -s150
|
||||
Outputs:
|
||||
outputs/img-samples/00009.png: "ashley judd riding a camel" -n2 -s150 -S 416354203
|
||||
outputs/img-samples/00010.png: "ashley judd riding a camel" -n2 -s150 -S 1362479620
|
||||
|
||||
dream> "there's a fly in my soup" -n6 -g
|
||||
outputs/img-samples/00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
|
||||
seeds for individual rows: [2685670268, 1216708065, 2335773498, 822223658, 714542046, 3395302430]
|
||||
dream> q
|
||||
|
||||
# this shows how to retrieve the prompt stored in the saved image's metadata
|
||||
(ldm) ~/stable-diffusion$ python ./scripts/images2prompt.py outputs/img_samples/*.png
|
||||
00009.png: "ashley judd riding a camel" -s150 -S 416354203
|
||||
00010.png: "ashley judd riding a camel" -s150 -S 1362479620
|
||||
00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
|
||||
```
|
||||
|
||||
<p align='center'>
|
||||
<img src="../assets/dream-py-demo.png"/>
|
||||
</p>
|
||||
|
||||
The `dream>` prompt's arguments are pretty much identical to those used in the Discord bot, except you don't need to type "!dream" (it doesn't
|
||||
hurt if you do). A significant change is that creation of individual images is now the default unless --grid (-g) is given.
|
||||
|
||||
For backward compatibility, the -i switch is recognized. For command-line help type -h (or --help) at the dream> prompt.
|
||||
|
||||
The script itself also recognizes a series of command-line switches that will change important global defaults, such as the directory for
|
||||
image outputs and the location of the model weight files.
|
15
docs/features/IMG2IMG.md
Normal file
@ -0,0 +1,15 @@
|
||||
# **Image-to-Image**
|
||||
|
||||
This script also provides an img2img feature that lets you seed your creations with an initial drawing or photo. This is a really cool feature that tells stable diffusion to build the prompt on top of the image you provide, preserving the original's basic shape and layout. To use it, provide the `--init_img` option as shown here:
|
||||
|
||||
```
|
||||
dream> "waterfall and rainbow" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
|
||||
```
|
||||
|
||||
The `--init_img (-I)` option gives the path to the seed picture. `--strength (-f)` controls how much the original will be modified, ranging from `0.0` (keep the original intact) to `1.0` (ignore the original completely). The default is `0.75`, and values in the range `0.25`-`0.75` generally give interesting results.
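Under the hood, img2img pipelines derived from the CompVis scripts typically use the strength to decide how far the init image is pushed back toward noise before being denoised again. A minimal sketch of that relationship (illustrative, not the fork's exact code):

```
def img2img_steps(strength: float, steps: int = 50) -> int:
    """Illustrative: with strength f and N sampler steps, the init image is
    re-noised to roughly timestep f*N and denoised from there, so higher
    strength discards more of the original."""
    assert 0.0 <= strength <= 1.0
    return int(strength * steps)

print(img2img_steps(0.75))  # 37 of 50 steps are re-run on top of the init image
```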
|
||||
|
||||
You may also pass a `-v<count>` option to generate count variants on the original image. This is done by passing the first generated image back into img2img the requested number of times. It generates interesting variants.
|
||||
|
||||
If the initial image contains transparent regions, then Stable Diffusion will only draw within the transparent regions, a process called "inpainting". However, for this to work correctly, the color information underneath the transparent areas needs to be preserved, not erased. See [Creating Transparent Images For Inpainting](./INPAINTING.md#creating-transparent-regions-for-inpainting) for details.
|
27
docs/features/INPAINTING.md
Normal file
@ -0,0 +1,27 @@
|
||||
# **Creating Transparent Regions for Inpainting**
|
||||
|
||||
Inpainting is really cool. To do it, you start with an initial image and use a photoeditor to make one or more regions transparent (i.e. they have a "hole" in them). You then provide the path to this image at the dream> command line using the `-I` switch. Stable Diffusion will only paint within the transparent region.
|
||||
|
||||
There's a catch. In the current implementation, you have to prepare the initial image correctly so that the underlying colors are preserved under the transparent area. Many image editing applications will by default erase the color information under the transparent pixels and replace it with white or black, which will lead to suboptimal inpainting. You also must take care to export the PNG file in such a way that the color information is preserved.
|
||||
|
||||
If your photoeditor is erasing the underlying color information, `dream.py` will give you a big fat warning. If you can't find a way to coax your photoeditor to retain color values under transparent areas, then you can combine the `-I` and `-M` switches to provide both the original unedited image and the masked (partially transparent) image:
|
||||
|
||||
```
|
||||
dream> man with cat on shoulder -I./images/man.png -M./images/man-transparent.png
|
||||
```
|
||||
|
||||
We are hoping to get rid of the need for this workaround in an upcoming release.
|
||||
|
||||
## Recipe for GIMP
|
||||
|
||||
[GIMP](https://www.gimp.org/) is a popular Linux photoediting tool.
|
||||
|
||||
1. Open image in GIMP.
|
||||
2. Layer->Transparency->Add Alpha Channel
|
||||
3. Use the lasso tool to select the region to mask
|
||||
4. Choose Select -> Float to create a floating selection
|
||||
5. Open the Layers toolbar (^L) and select "Floating Selection"
|
||||
6. Set opacity to 0%
|
||||
7. Export as PNG
|
||||
8. In the export dialogue, make sure the "Save colour values from transparent pixels" checkbox is selected.
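If you want to check an exported file before handing it to `dream.py`, a small PIL snippet like this (illustrative; the filename is hypothetical) will show whether usable color survives under the transparent region:

```
from PIL import Image

# Illustrative sanity check: preview the RGB data that sits under the transparent
# pixels. It should look like the original photo, not flat white or black.
img = Image.open("man-transparent.png").convert("RGBA")
r, g, b, a = img.split()
hole = a.point(lambda v: 255 if v == 0 else 0)           # the region to be inpainted
under_hole = Image.composite(Image.merge("RGB", (r, g, b)),
                             Image.new("RGB", img.size), hole)
under_hole.show()
```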
|
116
docs/features/OTHER.md
Normal file
@ -0,0 +1,116 @@
|
||||
## **Google Colab**
|
||||
|
||||
Stable Diffusion AI Notebook: <a href="https://colab.research.google.com/github/lstein/stable-diffusion/blob/main/Stable_Diffusion_AI_Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <br>
|
||||
Open and follow instructions to use an isolated environment running Dream.<br>
|
||||
|
||||
Output Example:
|
||||
![Colab Notebook](../assets/colab_notebook.png)
|
||||
|
||||
---
|
||||
|
||||
## **Seamless Tiling**
|
||||
|
||||
The seamless tiling mode causes generated images to seamlessly tile with themselves. To use it, add the `--seamless` option when starting the script, which will cause all generated images to tile, or add it to an individual `dream>` prompt as shown here:
|
||||
|
||||
```
|
||||
dream> "pond garden with lotus by claude monet" --seamless -s100 -n4
|
||||
```
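Seamless tiling is commonly implemented by switching the model's convolution layers to circular padding, so activations wrap around the image edges. A rough sketch of the idea in PyTorch (an illustration of the technique, not necessarily this fork's code):

```
import torch.nn as nn

def make_seamless(model: nn.Module) -> None:
    # Illustrative: circular padding makes the left/right and top/bottom
    # edges of generated images line up when tiled.
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            module.padding_mode = "circular"
```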
|
||||
|
||||
---
|
||||
|
||||
## **Reading Prompts from a File**
|
||||
|
||||
You can automate `dream.py` by providing a text file with the prompts you want to run, one line per prompt. The text file must be composed with a text editor (e.g. Notepad) and not a word processor. Each line should look like what you would type at the dream> prompt:
|
||||
|
||||
```
|
||||
a beautiful sunny day in the park, children playing -n4 -C10
|
||||
stormy weather on a mountain top, goats grazing -s100
|
||||
innovative packaging for a squid's dinner -S137038382
|
||||
```
|
||||
|
||||
Then pass this file's name to `dream.py` when you invoke it:
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --from_file "path/to/prompts.txt"
|
||||
```
|
||||
|
||||
You may read a series of prompts from standard input by providing a filename of `-`:
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ echo "a beautiful day" | python3 scripts/dream.py --from_file -
|
||||
```
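A sketch of how this kind of `--from_file` handling can be implemented (illustrative; the function name is hypothetical):

```
import sys

def read_prompts(path: str):
    # Illustrative: "-" means read prompts from standard input, otherwise
    # read them from the given file, skipping blank lines.
    stream = sys.stdin if path == "-" else open(path, "r", encoding="utf-8")
    return [line.strip() for line in stream if line.strip()]
```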
|
||||
|
||||
---
|
||||
|
||||
## **Shortcuts: Reusing Seeds**
|
||||
|
||||
Since it is so common to reuse seeds while refining a prompt, there is now a shortcut as of version 1.11. Provide a `-S` (or `--seed`) switch of `-1` to use the seed of the most recent image generated. If you produced multiple images with the `-n` switch, then you can go back further using `-2`, `-3`, etc., up to the first image generated by the previous command. Sorry, but you can't go back further than one command.
|
||||
|
||||
Here's an example of using this to do a quick refinement. It also illustrates using the new `-G` switch to turn on upscaling and face enhancement (see previous section):
|
||||
|
||||
```
|
||||
dream> a cute child playing hopscotch -G0.5
|
||||
[...]
|
||||
outputs/img-samples/000039.3498014304.png: "a cute child playing hopscotch" -s50 -W512 -H512 -C7.5 -mk_lms -S3498014304
|
||||
|
||||
# I wonder what it will look like if I bump up the steps and set facial enhancement to full strength?
|
||||
dream> a cute child playing hopscotch -G1.0 -s100 -S -1
|
||||
reusing previous seed 3498014304
|
||||
[...]
|
||||
outputs/img-samples/000040.3498014304.png: "a cute child playing hopscotch" -G1.0 -s100 -W512 -H512 -C7.5 -mk_lms -S3498014304
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## **Weighted Prompts**
|
||||
|
||||
You may weight different sections of the prompt to tell the sampler to attach different levels of
|
||||
priority to them, by adding `:(number)` to the end of the section you wish to up- or downweight.
|
||||
For example consider this prompt:
|
||||
|
||||
```
|
||||
tabby cat:0.25 white duck:0.75 hybrid
|
||||
```
|
||||
|
||||
This will tell the sampler to invest 25% of its effort on the tabby cat aspect of the image and 75% on the white duck aspect (surprisingly, this example actually works). The prompt weights can
|
||||
use any combination of integers and floating point numbers, and they do not need to add up to 1.
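A sketch of how weighted subprompts like this can be split out and normalized before conditioning (illustrative; the fork's actual parser may differ):

```
import re

def parse_weighted_subprompts(prompt: str):
    # Split "tabby cat:0.25 white duck:0.75 hybrid" into (text, weight) pairs,
    # defaulting to weight 1.0 and normalizing so the weights sum to 1.
    pieces = []
    for text, weight in re.findall(r"(.*?)(?::([\d.]+)|$)\s*", prompt):
        if text.strip():
            pieces.append((text.strip(), float(weight) if weight else 1.0))
    total = sum(w for _, w in pieces)
    return [(t, w / total) for t, w in pieces]

print(parse_weighted_subprompts("tabby cat:0.25 white duck:0.75 hybrid"))
# [('tabby cat', 0.125), ('white duck', 0.375), ('hybrid', 0.5)]
```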
|
||||
|
||||
---
|
||||
|
||||
## **Simplified API**
|
||||
|
||||
For programmers who wish to incorporate stable-diffusion into other products, this repository includes a simplified API for text to image generation, which lets you create images from a prompt in just three lines of code:
|
||||
|
||||
```
|
||||
from ldm.simplet2i import T2I
|
||||
model = T2I()
|
||||
outputs = model.txt2img("a unicorn in manhattan")
|
||||
```
|
||||
|
||||
Outputs is a list of lists in the format [[filename1, seed1], [filename2, seed2], ...].
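For example, the returned pairs can be unpacked directly (a sketch; the prompt and output names are illustrative):

```
from ldm.simplet2i import T2I

model = T2I()
for filename, seed in model.txt2img("a unicorn in manhattan"):
    print(f"{filename} was generated with seed {seed}")
```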
|
||||
|
||||
Please see ldm/simplet2i.py for more information. A set of example scripts is coming RSN.
|
||||
|
||||
---
|
||||
|
||||
## **Preload Models**
|
||||
|
||||
In situations where you have limited internet connectivity or are blocked behind a firewall, you can use the preload script to preload the required files for Stable Diffusion to run.
|
||||
|
||||
The preload script `scripts/preload_models.py` needs to be run at least once while connected to the internet. On subsequent runs, it will load the cached versions of the required files from the `.cache` directory of the system.
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 ./scripts/preload_models.py
|
||||
preloading bert tokenizer...
|
||||
Downloading: 100%|██████████████████████████████████| 28.0/28.0 [00:00<00:00, 49.3kB/s]
|
||||
Downloading: 100%|██████████████████████████████████| 226k/226k [00:00<00:00, 2.79MB/s]
|
||||
Downloading: 100%|██████████████████████████████████| 455k/455k [00:00<00:00, 4.36MB/s]
|
||||
Downloading: 100%|██████████████████████████████████| 570/570 [00:00<00:00, 477kB/s]
|
||||
...success
|
||||
preloading kornia requirements...
|
||||
Downloading: "https://github.com/DagnyT/hardnet/raw/master/pretrained/train_liberty_with_aug/checkpoint_liberty_with_aug.pth" to /u/lstein/.cache/torch/hub/checkpoints/checkpoint_liberty_with_aug.pth
|
||||
100%|███████████████████████████████████████████████| 5.10M/5.10M [00:00<00:00, 101MB/s]
|
||||
...success
|
||||
```
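The caching itself comes from the underlying libraries; for example, a Hugging Face tokenizer downloaded once while online is reused from the cache on later runs (a sketch of the behaviour, not the preload script itself):

```
from transformers import BertTokenizerFast

# Illustrative: the first call downloads to the local cache; subsequent calls
# load from ~/.cache without needing network access.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
```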
|
57
docs/features/TEXTUAL_INVERSION.md
Normal file
@ -0,0 +1,57 @@
|
||||
# **Personalizing Text-to-Image Generation**
|
||||
|
||||
You may personalize the generated images to provide your own styles or objects by training a new LDM checkpoint and introducing a new vocabulary to the fixed model.
|
||||
|
||||
To train, prepare a folder that contains images sized at 512x512 and execute the following:
|
||||
|
||||
**WINDOWS**: As the default backend is not available on Windows, if you're using that platform, set the environment variable `PL_TORCH_DISTRIBUTED_BACKEND=gloo`
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 ./main.py --base ./configs/stable-diffusion/v1-finetune.yaml \
|
||||
-t \
|
||||
--actual_resume ./models/ldm/stable-diffusion-v1/model.ckpt \
|
||||
-n my_cat \
|
||||
--gpus 0, \
|
||||
--data_root D:/textual-inversion/my_cat \
|
||||
--init_word 'cat'
|
||||
```
|
||||
|
||||
During the training process, files will be created in `/logs/[project][time][project]/` where you can monitor progress.

`conditioning` contains the training prompt inputs, `reconstruction` the input images for the training epoch, and `samples`/`samples_scaled` a sample generated from the prompt and one generated with the provided init word.
|
||||
|
||||
On an RTX 3090, the process for SD will take ~1h @ 1.6 iterations/sec.
|
||||
|
||||
_Note_: According to the associated paper, the optimal number of images is 3-5. Your model may not converge if you use more images than that.
|
||||
|
||||
Training will run indefinitely, but you may wish to stop it before the heat death of the universe, when you find a low-loss epoch or at around ~5000 iterations.
|
||||
|
||||
Once the model is trained, specify the trained .pt file when starting dream using
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py --embedding_path /path/to/embedding.pt --full_precision
|
||||
```
|
||||
|
||||
Then, to utilize your subject at the dream prompt
|
||||
|
||||
```
|
||||
dream> "a photo of *"
|
||||
```
|
||||
|
||||
This also works with image2image
|
||||
|
||||
```
|
||||
dream> "waterfall and rainbow in the style of *" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
|
||||
```
|
||||
|
||||
It's also possible to train multiple tokens (modify the placeholder string in `configs/stable-diffusion/v1-finetune.yaml`) and combine LDM checkpoints using:
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 ./scripts/merge_embeddings.py \
|
||||
--manager_ckpts /path/to/first/embedding.pt /path/to/second/embedding.pt [...] \
|
||||
--output_path /path/to/output/embedding.pt
|
||||
```
|
||||
|
||||
Credit goes to rinongal and the repository located at https://github.com/rinongal/textual_inversion. Please see the repository and associated paper for details and limitations.
|
98
docs/features/UPSCALE.md
Normal file
@ -0,0 +1,98 @@
|
||||
# **GFPGAN and Real-ESRGAN Support**
|
||||
|
||||
The script also provides the ability to do face restoration and
|
||||
upscaling with the help of GFPGAN and Real-ESRGAN respectively.
|
||||
|
||||
To use the ability, clone the **[GFPGAN
|
||||
repository](https://github.com/TencentARC/GFPGAN)** and follow their
|
||||
installation instructions. By default, we expect GFPGAN to be
|
||||
installed in a 'GFPGAN' sibling directory. Be sure that the `"ldm"`
|
||||
conda environment is active as you install GFPGAN.
|
||||
|
||||
You can use the `--gfpgan_dir` argument with `dream.py` to set a
|
||||
custom path to your GFPGAN directory. _There are other GFPGAN related
|
||||
boot arguments if you wish to customize further._
|
||||
|
||||
You can install **Real-ESRGAN** by typing the following command.
|
||||
|
||||
```
|
||||
pip install realesrgan
|
||||
```
|
||||
|
||||
**Note: Internet connection needed:**
|
||||
Users whose GPU machines are isolated from the Internet (e.g. on a
|
||||
University cluster) should be aware that the first time you run
|
||||
dream.py with GFPGAN and Real-ESRGAN turned on, it will try to
|
||||
download model files from the Internet. To rectify this, you may run
|
||||
`python3 scripts/preload_models.py` after you have installed GFPGAN
|
||||
and all its dependencies.
|
||||
|
||||
**Usage**
|
||||
|
||||
You will now have access to two new prompt arguments.
|
||||
|
||||
**Upscaling**
|
||||
|
||||
`-U : <upscaling_factor> <upscaling_strength>`
|
||||
|
||||
The upscaling prompt argument takes two values. The first value is a
|
||||
scaling factor and should be set to either `2` or `4` only. This will
|
||||
either scale the image 2x or 4x respectively using different models.
|
||||
|
||||
You can set the scaling strength between `0` and `1.0` to control the intensity of the scaling. This is handy because AI upscalers generally tend to smooth out texture details. If you wish to retain some of those for natural looking results, we recommend using values between `0.5` and `0.8`.
|
||||
|
||||
If you do not explicitly specify an upscaling_strength, it will
|
||||
default to 0.75.
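One straightforward way a strength value like this can be applied is to alpha-blend the AI-upscaled output with a plainly resized copy of the original; the sketch below illustrates the idea with PIL and is an assumption, not the fork's actual implementation:

```
from PIL import Image

def blend_upscale(original: Image.Image, upscaled: Image.Image, strength: float = 0.75) -> Image.Image:
    # Illustrative: keep (1 - strength) of a plain resize to preserve texture,
    # and take strength of the AI-upscaled result.
    plain = original.resize(upscaled.size, Image.LANCZOS)
    return Image.blend(plain, upscaled, strength)
```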
|
||||
|
||||
**Face Restoration**
|
||||
|
||||
`-G : <gfpgan_strength>`
|
||||
|
||||
This prompt argument controls the strength of the face restoration that is being applied. Similar to upscaling, values between `0.5` and `0.8` are recommended.
|
||||
|
||||
You can use either one or both without any conflicts. In cases where
|
||||
you use both, the image will be first upscaled and then the face
|
||||
restoration process will be executed to ensure you get the highest
|
||||
quality facial features.
|
||||
|
||||
`--save_orig`
|
||||
|
||||
When you use either `-U` or `-G`, the final result you get is upscaled or face modified. If you want to save the original Stable Diffusion generation, you can use the `--save_orig` prompt argument to save the original unaffected version too.
|
||||
|
||||
**Example Usage**
|
||||
|
||||
```
|
||||
dream > superman dancing with a panda bear -U 2 0.6 -G 0.4
|
||||
```
|
||||
|
||||
This also works with img2img:
|
||||
|
||||
```
|
||||
dream> a man wearing a pineapple hat -I path/to/your/file.png -U 2 0.5 -G 0.6
|
||||
```
|
||||
|
||||
**Note**
|
||||
|
||||
GFPGAN and Real-ESRGAN are both memory intensive. In order to avoid
|
||||
crashes and memory overloads during the Stable Diffusion process,
|
||||
these effects are applied after Stable Diffusion has completed its
|
||||
work.
|
||||
|
||||
In single image generations, you will see the output right away but
|
||||
when you are using multiple iterations, the images will first be
|
||||
generated and then upscaled and face restored after that process is
|
||||
complete. While the image generation is taking place, you will still
|
||||
be able to preview the base images.
|
||||
|
||||
If you wish to stop during the image generation but want to upscale or
|
||||
face restore a particular generated image, pass it again with the same
|
||||
prompt and generated seed along with the `-U` and `-G` prompt
|
||||
arguments to perform those actions.
|
@ -1,28 +1,28 @@
|
||||
# Cheat Sheat for Generating Variations
|
||||
# **Variations**
|
||||
|
||||
Release 1.13 of SD-Dream adds support for image variations. There are two things that you can do:
|
||||
Release 1.13 of SD-Dream adds support for image variations.
|
||||
|
||||
1. Generate a series of systematic variations of an image, given a
|
||||
prompt. The amount of variation from one image to the next can be
|
||||
controlled.
|
||||
You are able to do the following:
|
||||
|
||||
2. Given two or more variations that you like, you can combine them in
|
||||
a weighted fashion
|
||||
1. Generate a series of systematic variations of an image, given a prompt. The amount of variation from one image to the next can be controlled.
|
||||
|
||||
This cheat sheet provides a quick guide for how this works in
|
||||
practice, using variations to create the desired image of Xena,
|
||||
Warrior Princess.
|
||||
2. Given two or more variations that you like, you can combine them in a weighted fashion.
|
||||
|
||||
## Step 1 -- find a base image that you like
|
||||
---
|
||||
|
||||
The prompt we will use throughout is "lucy lawless as xena, warrior
|
||||
princess, character portrait, high resolution." This will be indicated
|
||||
as "prompt" in the examples below.
|
||||
This cheat sheet provides a quick guide for how this works in practice, using variations to create the desired image of Xena, Warrior Princess.
|
||||
|
||||
First we let SD create a series of images in the usual way, in this case
|
||||
requesting six iterations:
|
||||
---
|
||||
|
||||
~~~
|
||||
## Step 1 -- Find a base image that you like
|
||||
|
||||
The prompt we will use throughout is `lucy lawless as xena, warrior princess, character portrait, high resolution.`
|
||||
|
||||
This will be indicated as `prompt` in the examples below.
|
||||
|
||||
First we let SD create a series of images in the usual way, in this case requesting six iterations:
|
||||
|
||||
```
|
||||
dream> lucy lawless as xena, warrior princess, character portrait, high resolution -n6
|
||||
...
|
||||
Outputs:
|
||||
@ -32,19 +32,21 @@ Outputs:
|
||||
./outputs/Xena/000001.2224800325.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S2224800325
|
||||
./outputs/Xena/000001.465250761.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S465250761
|
||||
./outputs/Xena/000001.3357757885.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S3357757885
|
||||
~~~
|
||||
```
|
||||
|
||||
The one with seed 3357757885 looks nice:
|
||||
|
||||
<img src="static/variation_walkthru/000001.3357757885.png"/>
|
||||
<img src="assets/variation_walkthru/000001.3357757885.png"/>
|
||||
|
||||
Let's try to generate some variations. Using the same seed, we pass
|
||||
the argument -v0.1 (or --variant_amount), which generates a series of
|
||||
variations each differing by a variation amount of 0.2. This number
|
||||
ranges from 0 to 1.0, with higher numbers being larger amounts of
|
||||
---
|
||||
|
||||
## Step 2 - Generating Variations
|
||||
|
||||
Let's try to generate some variations. Using the same seed, we pass the argument `-v0.2` (or `--variant_amount`), which generates a series of variations each differing by a variation amount of 0.2. This number ranges from `0` to `1.0`, with higher numbers being larger amounts of
|
||||
variation.
|
||||
|
||||
~~~
|
||||
```
|
||||
dream> "prompt" -n6 -S3357757885 -v0.2
|
||||
...
|
||||
Outputs:
|
||||
@ -54,45 +56,36 @@ Outputs:
|
||||
./outputs/Xena/000002.4116285959.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 4116285959:0.2 -S3357757885
|
||||
./outputs/Xena/000002.1614299449.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 1614299449:0.2 -S3357757885
|
||||
./outputs/Xena/000002.1335553075.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 1335553075:0.2 -S3357757885
|
||||
~~~
|
||||
```
|
||||
|
||||
Note that the output for each image has a -V option giving the
|
||||
"variant subseed" for that image, consisting of a seed followed by the
|
||||
### **Variation Sub Seeding**
|
||||
|
||||
Note that the output for each image has a `-V` option giving the "variant subseed" for that image, consisting of a seed followed by the
|
||||
variation amount used to generate it.
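Conceptually, each variant subseed seeds its own initial noise tensor, which is mixed into the base seed's noise in proportion to its weight; the same mechanism is reused later in this walkthrough when several subseeds are combined with `-V`. A rough sketch of the idea (an illustration only, not the fork's sampler code):

```
import torch

def mixed_initial_noise(base_seed, subseeds_and_weights, shape=(1, 4, 64, 64)):
    # Illustrative: blend per-subseed noise into the base seed's noise.
    def noise(seed):
        g = torch.Generator().manual_seed(seed)
        return torch.randn(shape, generator=g)
    total = sum(w for _, w in subseeds_and_weights)
    mixed = (1.0 - total) * noise(base_seed)
    for seed, weight in subseeds_and_weights:
        mixed = mixed + weight * noise(seed)
    return mixed

# e.g. the combination used later: -S3357757885 -V3647897225,0.1;1614299449,0.1
latent = mixed_initial_noise(3357757885, [(3647897225, 0.1), (1614299449, 0.1)])
```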
|
||||
|
||||
This gives us a series of closely-related variations, including the
|
||||
two shown here.
|
||||
This gives us a series of closely-related variations, including the two shown here.
|
||||
|
||||
<img src="static/variation_walkthru/000002.3647897225.png">
|
||||
<img src="static/variation_walkthru/000002.1614299449.png">
|
||||
<img src="assets/variation_walkthru/000002.3647897225.png">
|
||||
<img src="assets/variation_walkthru/000002.1614299449.png">
|
||||
|
||||
I like the expression on Xena's face in the first one (subseed 3647897225), and the armor on her shoulder in the second one (subseed 1614299449). Can we combine them to get the best of both worlds?
|
||||
|
||||
I like the expression on Xena's face in the first one (subseed
|
||||
3647897225), and the armor on her shoulder in the second one (subseed
|
||||
1614299449). Can we combine them to get the best of both worlds?
|
||||
|
||||
We combine the two variations using -V (--with_variations). Again, we
|
||||
must provide the seed for the originally-chosen image in order for
|
||||
We combine the two variations using `-V` (--with_variations). Again, we must provide the seed for the originally-chosen image in order for
|
||||
this to work.
|
||||
|
||||
~~~
|
||||
```
|
||||
dream> "prompt" -S3357757885 -V3647897225,0.1;1614299449,0.1
|
||||
Outputs:
|
||||
./outputs/Xena/000003.1614299449.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1 -S3357757885
|
||||
~~~
|
||||
```
|
||||
|
||||
Here we are providing equal weights (0.1 and 0.1) for both the
|
||||
subseeds. The resulting image is close, but not exactly what I
|
||||
wanted:
|
||||
Here we are providing equal weights (0.1 and 0.1) for both the subseeds. The resulting image is close, but not exactly what I wanted:
|
||||
|
||||
<img src="static/variation_walkthru/000003.1614299449.png">
|
||||
<img src="assets/variation_walkthru/000003.1614299449.png">
|
||||
|
||||
We could either try combining the images with different weights, or we
|
||||
can generate more variations around the almost-but-not-quite image. We
|
||||
do the latter, using both the -V (combining) and -v (variation
|
||||
strength) options. Note that we use -n6 to generate 6 variations:
|
||||
We could either try combining the images with different weights, or we can generate more variations around the almost-but-not-quite image. We do the latter, using both the `-V` (combining) and `-v` (variation strength) options. Note that we use `-n6` to generate 6 variations:
|
||||
|
||||
~~~~
|
||||
```
|
||||
dream> "prompt" -S3357757885 -V3647897225,0.1;1614299449,0.1 -v0.05 -n6
|
||||
Outputs:
|
||||
./outputs/Xena/000004.3279757577.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,3279757577:0.05 -S3357757885
|
||||
@ -101,13 +94,11 @@ Outputs:
|
||||
./outputs/Xena/000004.2664260391.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2664260391:0.05 -S3357757885
|
||||
./outputs/Xena/000004.1642517170.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,1642517170:0.05 -S3357757885
|
||||
./outputs/Xena/000004.2183375608.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2183375608:0.05 -S3357757885
|
||||
~~~~
|
||||
```
|
||||
|
||||
This produces six images, all slight variations on the combination of
|
||||
the chosen two images. Here's the one I like best:
|
||||
This produces six images, all slight variations on the combination of the chosen two images. Here's the one I like best:
|
||||
|
||||
<img src="static/variation_walkthru/000004.3747154981.png">
|
||||
<img src="assets/variation_walkthru/000004.3747154981.png">
|
||||
|
||||
As you can see, this is a very powerful tool, which when combined with
|
||||
subprompt weighting, gives you great control over the content and
|
||||
As you can see, this is a very powerful tool, which when combined with subprompt weighting, gives you great control over the content and
|
||||
quality of your generated images.
|
13
docs/features/WEB.md
Normal file
@ -0,0 +1,13 @@
|
||||
# Barebones Web Server
|
||||
|
||||
As of version 1.10, this distribution comes with a bare-bones web server (see screenshot). To use it, run the `dream.py` script with the `--web` option.
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --web
|
||||
```
|
||||
|
||||
You can then connect to the server by pointing your web browser at http://localhost:9090, or to the network name or IP address of the server.
|
||||
|
||||
Kudos to [Tesseract Cat](https://github.com/TesseractCat) for contributing this code, and to [dagf2101](https://github.com/dagf2101) for refining it.
|
||||
|
||||
![Dream Web Server](../assets/dream_web_server.png)
|
68
docs/help/TROUBLESHOOT.md
Normal file
@ -0,0 +1,68 @@
|
||||
# **Frequently Asked Questions**
|
||||
|
||||
Here are a few common installation problems and their solutions. Often these are caused by incomplete installations or crashes during the
|
||||
install process.
|
||||
|
||||
---
|
||||
|
||||
**QUESTION**
|
||||
|
||||
During `conda env create -f environment.yaml`, conda hangs indefinitely.
|
||||
|
||||
**SOLUTION**
|
||||
|
||||
Enter the stable-diffusion directory and completely remove the `src` directory and all its contents. The safest way to do this is to enter the stable-diffusion directory and give the command `git clean -f`. If this still doesn't fix the problem, try `conda clean --all` and then restart at the `conda env create` step.
|
||||
|
||||
---
|
||||
|
||||
**QUESTION**
|
||||
|
||||
`dream.py` crashes with the complaint that it can't find `ldm.simplet2i.py`. Or it complains that function is being passed incorrect parameters.
|
||||
|
||||
**SOLUTION**
|
||||
|
||||
Reinstall the stable diffusion modules. Enter the `stable-diffusion` directory and give the command `pip install -e .`
|
||||
|
||||
---
|
||||
|
||||
**QUESTION**
|
||||
|
||||
`dream.py` dies, complaining of various missing modules, none of which starts with `ldm`.
|
||||
|
||||
**SOLUTION**
|
||||
|
||||
From within the `stable-diffusion` directory, run `conda env update -f environment.yaml`. This is also frequently the solution to
|
||||
complaints about an unknown function in a module.
|
||||
|
||||
---
|
||||
|
||||
**QUESTION**
|
||||
|
||||
There's a feature or bugfix in the Stable Diffusion GitHub that you want to try out.
|
||||
|
||||
**SOLUTION**
|
||||
|
||||
**Main Branch**
|
||||
|
||||
If the fix/feature is on the `main` branch, enter the stable-diffusion directory and do a `git pull`.
|
||||
|
||||
Usually this will be sufficient, but if you start to see errors about missing or incorrect modules, use the command `pip install -e .` and/or `conda env update -f environment.yaml` (These commands won't break anything.)
|
||||
|
||||
**Sub Branch**
|
||||
|
||||
If the feature/fix is on a branch (e.g. "_foo-bugfix_"), the recipe is similar, but do a `git pull <name of branch>`.
|
||||
|
||||
**Not Committed**
|
||||
|
||||
If the feature/fix is in a pull request that has not yet been made part of the main branch or a feature/bugfix branch, then from the page for the desired pull request, look for the line at the top that reads "_xxxx wants to merge xx commits into lstein:main from YYYYYY_". Copy the URL in YYYYYY. It should have the format `https://github.com/<name of contributor>/stable-diffusion/tree/<name of branch>`
|
||||
|
||||
Then **go to the directory above stable-diffusion** and rename the directory to "_stable-diffusion.lstein_", "_stable-diffusion.old_", or anything else. You can then git clone the branch that contains the pull request:
|
||||
|
||||
```
git clone -b <name of branch> https://github.com/<name of contributor>/stable-diffusion.git
```
|
||||
|
||||
You will need to go through the install procedure again, but it should be fast because all the dependencies are already loaded.
|
||||
|
||||
---
|
89
docs/installation/INSTALL_LINUX.md
Normal file
@ -0,0 +1,89 @@
|
||||
# **Linux Installation**
|
||||
|
||||
1. You will need to install the following prerequisites if they are not already available. Use your operating system's preferred installer
|
||||
|
||||
- Python (version 3.8.5 recommended; higher may work)
|
||||
- git
|
||||
|
||||
2. Install the Python Anaconda environment manager.
|
||||
|
||||
```
|
||||
~$ wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh
|
||||
~$ chmod +x Anaconda3-2022.05-Linux-x86_64.sh
|
||||
~$ ./Anaconda3-2022.05-Linux-x86_64.sh
|
||||
```
|
||||
|
||||
After installing anaconda, you should log out of your system and log back in. If the installation
|
||||
worked, your command prompt will be prefixed by the name of the current anaconda environment - `(base)`.
|
||||
|
||||
3. Copy the stable-diffusion source code from GitHub:
|
||||
|
||||
```
|
||||
(base) ~$ git clone https://github.com/lstein/stable-diffusion.git
|
||||
```
|
||||
|
||||
This will create stable-diffusion folder where you will follow the rest of the steps.
|
||||
|
||||
4. Enter the newly-created stable-diffusion folder. From this step forward make sure that you are working in the stable-diffusion directory!
|
||||
|
||||
```
|
||||
(base) ~$ cd stable-diffusion
|
||||
(base) ~/stable-diffusion$
|
||||
```
|
||||
|
||||
5. Use anaconda to copy necessary python packages, create a new python environment named `ldm` and activate the environment.
|
||||
|
||||
```
|
||||
(base) ~/stable-diffusion$ conda env create -f environment.yaml
|
||||
(base) ~/stable-diffusion$ conda activate ldm
|
||||
(ldm) ~/stable-diffusion$
|
||||
```
|
||||
|
||||
After these steps, your command prompt will be prefixed by `(ldm)` as shown above.
|
||||
|
||||
6. Load a couple of small machine-learning models required by stable diffusion:
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/preload_models.py
|
||||
```
|
||||
|
||||
Note that this step is necessary because I modified the original just-in-time model loading scheme to allow the script to work on GPU machines that are not internet connected. See [Preload Models](../features/OTHER.md#preload-models)
|
||||
|
||||
7. Now you need to install the weights for the stable diffusion model.
|
||||
|
||||
- For running with the released weights, you will first need to set up an account with Hugging Face (https://huggingface.co).
|
||||
- Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original.
|
||||
- You may be asked to sign a license agreement at this point.
|
||||
- Click on "Files and versions" near the top of the page, and then click on the file named "sd-v1-4.ckpt". You'll be taken to a page that prompts you to click the "download" link. Save the file somewhere safe on your local machine.
|
||||
|
||||
Now run the following commands from within the stable-diffusion directory. This will create a symbolic link from the stable-diffusion model.ckpt file, to the true location of the sd-v1-4.ckpt file.
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ mkdir -p models/ldm/stable-diffusion-v1
|
||||
(ldm) ~/stable-diffusion$ ln -sf /path/to/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
|
||||
```
|
||||
|
||||
8. Start generating images!
|
||||
|
||||
```
|
||||
# for the pre-release weights use the -l or --laion400m switch
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/dream.py -l
|
||||
|
||||
# for the post-release weights do not use the switch
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/dream.py
|
||||
|
||||
# for additional configuration switches and arguments, use -h or --help
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/dream.py -h
|
||||
```
|
||||
|
||||
9. Subsequently, to relaunch the script, be sure to run "conda activate ldm" (step 5, second command), enter the `stable-diffusion` directory, and then launch the dream script (step 8). If you forget to activate the ldm environment, the script will fail with multiple `ModuleNotFound` errors.
|
||||
|
||||
### Updating to newer versions of the script
|
||||
|
||||
This distribution is changing rapidly. If you used the `git clone` method (step 3) to download the stable-diffusion directory, then to update to the latest and greatest version, launch the Anaconda window, enter `stable-diffusion`, and type:
|
||||
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ git pull
|
||||
```
|
||||
|
||||
This will bring your local copy into sync with the remote one.
|
@ -1,21 +1,17 @@
|
||||
# macOS Instructions
|
||||
# **macOS Instructions**
|
||||
|
||||
Requirements
|
||||
|
||||
- macOS 12.3 Monterey or later
|
||||
- Python
|
||||
- Patience
|
||||
- Apple Silicon*
|
||||
- Apple Silicon\*
|
||||
|
||||
*I haven't tested any of this on Intel Macs but I have read that one person got
|
||||
it to work, so Apple Silicon might not be requried.
|
||||
\*I haven't tested any of this on Intel Macs but I have read that one person got it to work, so Apple Silicon might not be required.
|
||||
|
||||
Things have moved really fast and so these instructions change often and are
|
||||
often out-of-date. One of the problems is that there are so many different ways to
|
||||
run this.
|
||||
Things have moved really fast and so these instructions change often and are often out-of-date. One of the problems is that there are so many different ways to run this.
|
||||
|
||||
We are trying to build a testing setup so that when we make changes it doesn't
|
||||
always break.
|
||||
We are trying to build a testing setup so that when we make changes it doesn't always break.
|
||||
|
||||
How to (this hasn't been 100% tested yet):
|
||||
|
||||
@ -23,7 +19,7 @@ First get the weights checkpoint download started - it's big:
|
||||
|
||||
1. Sign up at https://huggingface.co
|
||||
2. Go to the [Stable diffusion diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
|
||||
3. Accept the terms and click Access Repository:
|
||||
3. Accept the terms and click Access Repository:
|
||||
4. Download [sd-v1-4.ckpt (4.27 GB)](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt) and note where you have saved it (probably the Downloads folder)
|
||||
|
||||
While that is downloading, open Terminal and run the following commands one at a time.
|
||||
@ -38,16 +34,16 @@ While that is downloading, open Terminal and run the following commands one at a
|
||||
# 2. No pyenv
|
||||
#
|
||||
# If you don't know what we are talking about, choose 2.
|
||||
#
|
||||
#
|
||||
# NOW EITHER DO
|
||||
# 1. Installing alongside pyenv
|
||||
# 1. Installing alongside pyenv
|
||||
|
||||
brew install pyenv-virtualenv # you might have this from before, no problem
|
||||
pyenv install anaconda3-latest
|
||||
pyenv virtualenv anaconda3-latest lstein-stable-diffusion
|
||||
pyenv activate lstein-stable-diffusion
|
||||
|
||||
# OR,
|
||||
# OR,
|
||||
# 2. Installing standalone
|
||||
# install python 3, git, cmake, protobuf:
|
||||
brew install cmake protobuf rust
|
||||
@ -92,42 +88,35 @@ The original scripts should work as well.
|
||||
python scripts/orig_scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
|
||||
```
|
||||
|
||||
Note, `export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env create -f environment-mac.yaml`
|
||||
never finishing in some situations. So it isn't required but wont hurt.
|
||||
Note, `export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env create -f environment-mac.yaml` never finishing in some situations. So it isn't required but won't hurt.
|
||||
|
||||
After you follow all the instructions and run dream.py you might get several
|
||||
errors. Here's the errors I've seen and found solutions for.
|
||||
After you follow all the instructions and run dream.py you might get several errors. Here are the errors I've seen and found solutions for.
|
||||
|
||||
### Is it slow?
|
||||
|
||||
Be sure to specify 1 sample and 1 iteration.
|
||||
|
||||
python ./scripts/orig_scripts/txt2img.py --prompt "ocean" --ddim_steps 5 --n_samples 1 --n_iter 1
|
||||
python ./scripts/orig_scripts/txt2img.py --prompt "ocean" --ddim_steps 5 --n_samples 1 --n_iter 1
|
||||
|
||||
### Doesn't work anymore?
|
||||
|
||||
PyTorch nightly includes support for MPS. Because of this, this setup is
|
||||
inherently unstable. One morning I woke up and it no longer worked no matter
|
||||
what I did until I switched to miniforge. However, I have another Mac that works
|
||||
just fine with Anaconda. If you can't get it to work, please search a little
|
||||
first because many of the errors will get posted and solved. If you can't find
|
||||
a solution please [create an issue](https://github.com/lstein/stable-diffusion/issues).
|
||||
PyTorch nightly includes support for MPS. Because of this, this setup is inherently unstable. One morning I woke up and it no longer worked no matter what I did until I switched to miniforge. However, I have another Mac that works just fine with Anaconda. If you can't get it to work, please search a little first because many of the errors will get posted and solved. If you can't find a solution please [create an issue](https://github.com/lstein/stable-diffusion/issues).
|
||||
|
||||
One debugging step is to update to the latest version of PyTorch nightly.
|
||||
|
||||
conda install pytorch torchvision torchaudio -c pytorch-nightly
|
||||
conda install pytorch torchvision torchaudio -c pytorch-nightly
|
||||
|
||||
If `conda env create -f environment-mac.yaml` takes forever run this.
|
||||
|
||||
git clean -f
|
||||
git clean -f
|
||||
|
||||
And run this.
|
||||
|
||||
conda clean --yes --all
|
||||
conda clean --yes --all
|
||||
|
||||
Or you could reset Anaconda.
|
||||
|
||||
conda update --force-reinstall -y -n base -c defaults conda
|
||||
conda update --force-reinstall -y -n base -c defaults conda
|
||||
|
||||
### "No module named cv2", torch, 'ldm', 'transformers', 'taming', etc.
|
||||
|
||||
@ -144,17 +133,14 @@ The cause of this error is long so it's below.
|
||||
Third, if it says you're missing taming you need to rebuild your virtual
|
||||
environment.
|
||||
|
||||
conda env remove -n ldm
|
||||
conda env create -f environment-mac.yaml
|
||||
conda env remove -n ldm
|
||||
conda env create -f environment-mac.yaml
|
||||
|
||||
Fourth, If you have activated the ldm virtual environment and tried rebuilding
|
||||
it, maybe the problem could be that I have something installed that
|
||||
you don't and you'll just need to manually install it. Make sure you
|
||||
activate the virtual environment so it installs there instead of
|
||||
Fourth, If you have activated the ldm virtual environment and tried rebuilding it, maybe the problem could be that I have something installed that you don't and you'll just need to manually install it. Make sure you activate the virtual environment so it installs there instead of
|
||||
globally.
|
||||
|
||||
conda activate ldm
|
||||
pip install *name*
|
||||
conda activate ldm
|
||||
pip install *name*
|
||||
|
||||
You might also need to install Rust (I mention this again below).
|
||||
|
||||
@ -166,8 +152,8 @@ picking the wrong one. More specifically, preload_models.py and dream.py says to
|
||||
find the first `python3` in the path environment variable. You can see which one
|
||||
it is picking with `which python3`. These are the most likely paths you'll see.
|
||||
|
||||
% which python3
|
||||
/usr/bin/python3
|
||||
% which python3
|
||||
/usr/bin/python3
|
||||
|
||||
The above path is part of the OS. However, that path is a stub that asks you if
|
||||
you want to install Xcode. If you have Xcode installed already,
|
||||
@ -175,14 +161,14 @@ you want to install Xcode. If you have Xcode installed already,
|
||||
/Applications/Xcode.app/Contents/Developer/usr/bin/python3 (depending on which
|
||||
Xcode you've selected with `xcode-select`).
|
||||
|
||||
% which python3
|
||||
/opt/homebrew/bin/python3
|
||||
% which python3
|
||||
/opt/homebrew/bin/python3
|
||||
|
||||
If you installed python3 with Homebrew and you've modified your path to search
|
||||
for Homebrew binaries before system ones, you'll see the above path.
|
||||
|
||||
% which python
|
||||
/opt/anaconda3/bin/python
|
||||
% which python
|
||||
/opt/anaconda3/bin/python
|
||||
|
||||
If you drop the "3" you get an entirely different python. Note: starting in
|
||||
macOS 12.3, /usr/bin/python no longer exists (it was python 2 anyway).
|
||||
@ -190,8 +176,8 @@ macOS 12.3, /usr/bin/python no longer exists (it was python 2 anyway).
|
||||
If you have Anaconda installed, this is what you'll see. There is a
|
||||
/opt/anaconda3/bin/python3 also.
|
||||
|
||||
(ldm) % which python
|
||||
/Users/name/miniforge3/envs/ldm/bin/python
|
||||
(ldm) % which python
|
||||
/Users/name/miniforge3/envs/ldm/bin/python
|
||||
|
||||
This is what you'll see if you have miniforge and you've correctly activated
|
||||
the ldm environment. This is the goal.
|
||||
@ -215,11 +201,11 @@ Tired of waiting for your renders to finish before you can see if it
|
||||
works? Reduce the steps! The image quality will be horrible but at least you'll
|
||||
get quick feedback.
|
||||
|
||||
python ./scripts/txt2img.py --prompt "ocean" --ddim_steps 5 --n_samples 1 --n_iter 1
|
||||
python ./scripts/txt2img.py --prompt "ocean" --ddim_steps 5 --n_samples 1 --n_iter 1
|
||||
|
||||
### OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'...
|
||||
|
||||
python scripts/preload_models.py
|
||||
python scripts/preload_models.py
|
||||
|
||||
### "The operator [name] is not current implemented for the MPS device." (sic)
|
||||
|
||||
@ -236,16 +222,16 @@ The lstein branch includes this fix in [environment-mac.yaml](https://github.com
|
||||
|
||||
I have not seen this error because I had Rust installed on my computer before I started playing with Stable Diffusion. The fix is to install Rust.
|
||||
|
||||
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
|
||||
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
|
||||
|
||||
### How come `--seed` doesn't work?
|
||||
|
||||
First this:
|
||||
|
||||
> Completely reproducible results are not guaranteed across PyTorch
|
||||
releases, individual commits, or different platforms. Furthermore,
|
||||
results may not be reproducible between CPU and GPU executions, even
|
||||
when using identical seeds.
|
||||
> releases, individual commits, or different platforms. Furthermore,
|
||||
> results may not be reproducible between CPU and GPU executions, even
|
||||
> when using identical seeds.
|
||||
|
||||
[PyTorch docs](https://pytorch.org/docs/stable/notes/randomness.html)
|
||||
|
||||
@ -254,7 +240,7 @@ still working on it.
|
||||
|
||||
### libiomp5.dylib error?
|
||||
|
||||
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
|
||||
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
|
||||
|
||||
You are likely using an Intel package by mistake. Be sure to run conda with
|
||||
the environment variable `CONDA_SUBDIR=osx-arm64`, like so:
|
||||
@ -266,7 +252,7 @@ a dependency. [nomkl](https://stackoverflow.com/questions/66224879/what-is-the-n
|
||||
is a metapackage designed to prevent this, by making it impossible to install
|
||||
`mkl`, but if your environment is already broken it may not work.
|
||||
|
||||
Do *not* use `os.environ['KMP_DUPLICATE_LIB_OK']='True'` or equivalents as this
|
||||
Do _not_ use `os.environ['KMP_DUPLICATE_LIB_OK']='True'` or equivalents as this
|
||||
masks the underlying issue of using Intel packages.
|
||||
|
||||
### Not enough memory.
|
||||
@ -281,7 +267,7 @@ affect the quality of the images though.
|
||||
|
||||
See [this issue](https://github.com/CompVis/stable-diffusion/issues/71).
|
||||
|
||||
### "Error: product of dimension sizes > 2**31'"
|
||||
### "Error: product of dimension sizes > 2\*\*31'"
|
||||
|
||||
This error happens with img2img, which I haven't played with too much
|
||||
yet. But I know it's because your image is too big or the resolution
|
||||
@ -291,7 +277,7 @@ output size (which is the default). However, if you're using that size
|
||||
and you get the above error, try 256 x 256 or 512 x 256 or something
|
||||
as the source image.
|
||||
|
||||
BTW, 2**31-1 = [2,147,483,647](https://en.wikipedia.org/wiki/2,147,483,647#In_computing), which is also 32-bit signed [LONG_MAX](https://en.wikipedia.org/wiki/C_data_types) in C.
|
||||
BTW, 2\*\*31-1 = [2,147,483,647](https://en.wikipedia.org/wiki/2,147,483,647#In_computing), which is also 32-bit signed [LONG_MAX](https://en.wikipedia.org/wiki/C_data_types) in C.
|
||||
|
||||
### I just got Rickrolled! Do I have a virus?
|
||||
|
||||
@ -331,10 +317,10 @@ change instead. This is a 32-bit vs 16-bit problem.
|
||||
|
||||
What? Intel? On an Apple Silicon?
|
||||
|
||||
Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library.
|
||||
The processor must support the Intel(R) Supplemental Streaming SIMD Extensions 3 (Intel(R) SSSE3) instructions.
|
||||
The processor must support the Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) instructions.
|
||||
The processor must support the Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
|
||||
Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library.
|
||||
The processor must support the Intel(R) Supplemental Streaming SIMD Extensions 3 (Intel(R) SSSE3) instructions.
|
||||
The processor must support the Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) instructions.
|
||||
The processor must support the Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.
|
||||
|
||||
This is due to the Intel `mkl` package getting picked up when you try to install
|
||||
something that depends on it-- Rosetta can translate some Intel instructions but
|
||||
@ -342,7 +328,7 @@ not the specialized ones here. To avoid this, make sure to use the environment
|
||||
variable `CONDA_SUBDIR=osx-arm64`, which restricts the Conda environment to only
|
||||
use ARM packages, and use `nomkl` as described above.
|
||||
|
||||
### input types 'tensor<2x1280xf32>' and 'tensor<*xf16>' are not broadcast compatible
|
||||
### input types 'tensor<2x1280xf32>' and 'tensor<\*xf16>' are not broadcast compatible
|
||||
|
||||
May appear when just starting to generate, e.g.:
|
||||
|
||||
@ -355,6 +341,6 @@ LLVM ERROR: Failed to infer result type(s).
|
||||
Abort trap: 6
|
||||
/Users/[...]/opt/anaconda3/envs/ldm/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
|
||||
warnings.warn('resource_tracker: There appear to be %d '
|
||||
```
|
||||
```
|
||||
|
||||
Macs do not support autocast/mixed-precision. Supply `--full_precision` to use float32 everywhere.
|
||||
Macs do not support autocast/mixed-precision. Supply `--full_precision` to use float32 everywhere.
|
102
docs/installation/INSTALL_WINDOWS.md
Normal file
@ -0,0 +1,102 @@
|
||||
# **Windows Installation**
|
||||
|
||||
## **Notebook install (semi-automated)**
|
||||
|
||||
We have a [Jupyter notebook](https://github.com/lstein/stable-diffusion/blob/main/Stable-Diffusion-local-Windows.ipynb) with cell-by-cell installation steps. It will download the code in this repo as one of the steps, so instead of cloning this repo, simply download the notebook from the link above and load it up in VSCode (with the appropriate extensions installed)/Jupyter/JupyterLab and start running the cells one-by-one.
|
||||
|
||||
Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehand - simplified
|
||||
[step-by-step instructions](https://github.com/lstein/stable-diffusion/wiki/Easy-peasy-Windows-install) are available in the wiki (you'll only need steps 1, 2, & 3 ).
|
||||
|
||||
## **Manual Install**

### **pip**

See [Easy-peasy Windows install](https://github.com/lstein/stable-diffusion/wiki/Easy-peasy-Windows-install) in the wiki.

### **Conda**

1. Install Anaconda3 (miniconda3 version) from here: https://docs.anaconda.com/anaconda/install/windows/

2. Install Git from here: https://git-scm.com/download/win

3. Launch the Anaconda Prompt from the Windows Start menu. This will bring up a command window. Type all the remaining commands in this window.
4. Run the command:

```
git clone https://github.com/lstein/stable-diffusion.git
```

This will create a `stable-diffusion` folder in which you will follow the rest of the steps.

5. Enter the newly-created stable-diffusion folder. From this step forward, make sure that you are working in the stable-diffusion directory!

```
cd stable-diffusion
```
6. Run the following two commands:

```
conda env create -f environment.yaml    (step 6a)
conda activate ldm                      (step 6b)
```

This will install all Python requirements and activate the "ldm" environment, which sets PATH and other environment variables properly.
7. Run the command:

```
python scripts\preload_models.py
```

This installs several machine learning models that stable diffusion requires.

Note: This step is required. It was added because some users may be blocked by firewalls or have limited internet connectivity, which would prevent the models from being downloaded just in time.
8. Now you need to install the weights for the big stable diffusion model.

- For running with the released weights, you will first need to set up an account with Hugging Face (https://huggingface.co).
- Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original.
- You may be asked to sign a license agreement at this point.
- Click on "Files and versions" near the top of the page, and then click on the file named `sd-v1-4.ckpt`. You'll be taken to a page that prompts you to click the "download" link. Save the file somewhere safe on your local machine.
- The weight file is >4 GB in size, so downloading may take a while.

Now run the following commands from **within the stable-diffusion directory** to copy the weights file to the right place:

```
mkdir models\ldm\stable-diffusion-v1
copy C:\path\to\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt
```

Please replace `C:\path\to\sd-v1-4.ckpt` with the correct path to wherever you stashed this file. If you prefer not to copy or move the .ckpt file, you may instead create a shortcut or symbolic link to it from within `models\ldm\stable-diffusion-v1\`; one way to do this is sketched below.
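For example, an untested sketch of the link approach (`mklink` is a built-in of the Windows command prompt, usually requires an elevated prompt or Developer Mode enabled, and `C:\path\to\sd-v1-4.ckpt` is a placeholder for your actual path):

```
cd models\ldm\stable-diffusion-v1
mklink model.ckpt C:\path\to\sd-v1-4.ckpt
```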
9. Start generating images!

```
# for the pre-release weights
python scripts\dream.py -l

# for the post-release weights
python scripts\dream.py
```
10. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3), enter the stable-diffusion directory (step 5, `cd \path\to\stable-diffusion`), run `conda activate ldm` (step 6b), and then launch the dream script (step 9), as sketched below.
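A typical relaunch session therefore looks something like this (replace the placeholder path with your own installation directory):

```
cd \path\to\stable-diffusion
conda activate ldm
python scripts\dream.py
```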
**Note:** Tildebyte has written an alternative ["Easy peasy Windows install"](https://github.com/lstein/stable-diffusion/wiki/Easy-peasy-Windows-install) which uses the Windows PowerShell and pew. If you are having trouble with Anaconda on Windows, give this a try (or try it first!).
### Updating to newer versions of the script

This distribution is changing rapidly. If you used the `git clone` method (step 4) to download the stable-diffusion directory, then to update to the latest and greatest version, launch the Anaconda window, enter the stable-diffusion directory, and type:

```
git pull
```

This will bring your local copy into sync with the remote one.
@ -78,52 +78,50 @@ def run_gfpgan(image, strength, seed, upsampler_scale=4):

def _load_gfpgan_bg_upsampler(bg_upsampler, upsampler_scale, bg_tile=400):
    if bg_upsampler == 'realesrgan':
        if not torch.cuda.is_available():  # CPU or MPS on M1
            use_half_precision = False
        else:
            use_half_precision = True

        model_path = {
            2: 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth',
            4: 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth',
        }

        if upsampler_scale not in model_path:
            return None

        from basicsr.archs.rrdbnet_arch import RRDBNet
        from realesrgan import RealESRGANer

        if upsampler_scale == 4:
            model = RRDBNet(
                num_in_ch=3,
                num_out_ch=3,
                num_feat=64,
                num_block=23,
                num_grow_ch=32,
                scale=4,
            )
        if upsampler_scale == 2:
            model = RRDBNet(
                num_in_ch=3,
                num_out_ch=3,
                num_feat=64,
                num_block=23,
                num_grow_ch=32,
                scale=2,
            )

        bg_upsampler = RealESRGANer(
            scale=upsampler_scale,
            model_path=model_path[upsampler_scale],
            model=model,
            tile=bg_tile,
            tile_pad=10,
            pre_pad=0,
            half=use_half_precision,
        )
    else:
        bg_upsampler = None