docs: clean up and update lots of stuff

This commit is contained in:
psychedelicious 2024-09-22 23:53:33 +10:00 committed by Kent Keirsey
parent 0dcb4dbc54
commit f3f88dba47
18 changed files with 354 additions and 1209 deletions

View File

@ -1,815 +0,0 @@
---
title: Changelog
---
# :octicons-log-16: **Changelog**
## v2.3.5 <small>(22 May 2023)</small>
This release (along with the post1 and post2 follow-on releases) expands support for additional LoRA and LyCORIS models, upgrades diffusers versions, and fixes a few bugs.
### LoRA and LyCORIS Support Improvement
A number of LoRA/LyCORIS fine-tune files (those which alter the text encoder as well as the unet model) were not having the desired effect in InvokeAI. This bug has now been fixed. Full documentation of LoRA support is available at InvokeAI LoRA Support.
Previously, InvokeAI did not distinguish between LoRA/LyCORIS models based on Stable Diffusion v1.5 vs those based on v2.0 and 2.1, leading to a crash when an incompatible model was loaded. This has now been fixed. In addition, the web pulldown menus for LoRA and Textual Inversion selection have been enhanced to show only those files that are compatible with the currently-selected Stable Diffusion model.
Support for the newer LoKR LyCORIS files has been added.
### Library Updates and Speed/Reproducibility Advancements
The major enhancement in this version is that NVIDIA users no longer need to decide between speed and reproducibility. Previously, if you activated the Xformers library, you would see improvements in speed and memory usage, but multiple images generated with the same seed and other parameters would be slightly different from each other. This is no longer the case. Relative to 2.3.4 you will see improved performance when running without Xformers, and even better performance when Xformers is activated. In both cases, images generated with the same settings will be identical.
Here are the new library versions:

| Library   | Version |
| --------- | ------- |
| Torch     | 2.0.0   |
| Diffusers | 0.16.1  |
| Xformers  | 0.0.19  |
| Compel    | 1.1.5   |
### Performance Improvements
When a model is loaded for the first time, InvokeAI calculates its checksum for incorporation into the PNG metadata. This process could take up to a minute on network-mounted disks and WSL mounts. This release noticeably speeds up the process.
### Bug Fixes
The "import models from directory" and "import from URL" functionality in the console-based model installer has now been fixed.
When running the WebUI, we have reduced the number of times that InvokeAI reaches out to HuggingFace to fetch the list of embeddable Textual Inversion models. We have also caught and fixed a problem with the updater not correctly detecting when another instance of the updater is running.
## v2.3.4 <small>(7 April 2023)</small>
This feature release adds support for LoRA (Low-Rank Adaptation) and LyCORIS (Lora beYond Conventional) models, as well as some minor bug fixes.
### LoRA and LyCORIS Support
LoRA files contain fine-tuning weights that enable particular styles, subjects or concepts to be applied to generated images. LyCORIS files are an extended variant of LoRA. InvokeAI supports the most common LoRA/LyCORIS format, which ends in the suffix `.safetensors`. You will find numerous LoRA and LyCORIS models for download at Civitai, and a small but growing number at Hugging Face. Full documentation of LoRA support is available at InvokeAI LoRA Support. (Pre-release note: this page will only be available after release.)
To use LoRA/LyCORIS models in InvokeAI:
1. Download the `.safetensors` files of your choice and place them in `/path/to/invokeai/loras`. This directory was not present in earlier versions of InvokeAI but will be created for you the first time you run the command-line or web client. You can also create the directory manually.
2. Add `withLora(lora-file,weight)` to your prompts. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named `loras/sushi.safetensors` is present:
    - `family sitting at dinner table eating sushi withLora(sushi,0.9)`
    - `family sitting at dinner table eating sushi withLora(sushi, 0.75)`
    - `family sitting at dinner table eating sushi withLora(sushi)`

    Multiple `withLora()` prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA's influence stronger. Negative weights are also allowed, which can lead to some interesting effects.
3. Generate as you usually would! If you find that the image is too "crisp", try reducing the overall CFG value or reducing individual LoRA weights. As is the case with all fine-tunes, you'll get the best results when running the LoRA on top of a model similar to, or identical with, the one that was used during the LoRA's training. Don't try to load an SD 1.x-trained LoRA into an SD 2.x model, or vice versa; this will trigger a non-fatal error message and generation will not proceed.

You can change the location of the loras directory by passing the `--lora_directory` option to `invokeai`.
### New WebUI LoRA and Textual Inversion Buttons
This version adds two new web interface buttons for inserting LoRA and Textual Inversion triggers into the prompt as shown in the screenshot below.
Clicking on one or the other of the buttons will bring up a menu of available LoRA/LyCORIS or Textual Inversion trigger terms. Select a menu item to insert the properly-formatted withLora() or <textual-inversion> prompt fragment into the positive prompt. The number in parentheses indicates the number of trigger terms currently in the prompt. You may click the button again and deselect the LoRA or trigger to remove it from the prompt, or simply edit the prompt directly.
Currently terms are inserted into the positive prompt textbox only. However, some textual inversion embeddings are designed to be used with negative prompts. To move a textual inversion trigger into the negative prompt, simply cut and paste it.
By default the Textual Inversion menu only shows locally installed models found at startup time in /path/to/invokeai/embeddings. However, InvokeAI has the ability to dynamically download and install additional Textual Inversion embeddings from the HuggingFace Concepts Library. You may choose to display the most popular of these (with five or more likes) in the Textual Inversion menu by going to Settings and turning on "Show Textual Inversions from HF Concepts Library." When this option is activated, the locally-installed TI embeddings will be shown first, followed by uninstalled terms from Hugging Face. See The Hugging Face Concepts Library and Importing Textual Inversion files for more information.
### Minor features and fixes
This release changes model switching behavior so that the command-line and Web UIs save the last model used and restore it the next time they are launched. It also improves the behavior of the installer so that the pip utility is kept up to date.
### Known Bugs in 2.3.4
These are known bugs in the release.
The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
## v2.3.3 <small>(28 March 2023)</small>
This is a bugfix and minor feature release.
### Bugfixes
Since version 2.3.2 the following bugs have been fixed:
- When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
- Textual inversion will select an appropriate batch size based on whether xformers is active, and will default to xformers enabled if the library is detected.
- The batch script log file names have been fixed to be compatible with Windows.
- Occasional corruption of the .next_prefix file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
- Support for loading legacy config files that have no personalization (textual inversion) section.
- An infinite loop when opening the developer's console from within the invoke.sh script has been corrected.
- Documentation fixes, including a recipe for detecting and fixing problems with the AMD GPU ROCm driver.
### Enhancements
- It is now possible to load and run several community-contributed SD-2.0 based models, including the often-requested "Illuminati" model.
- The "NegativePrompts" embedding file, and others like it, can now be loaded by placing it in the InvokeAI embeddings directory.
- If no `--model` is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
- On Linux systems, the `invoke.sh` launcher now uses a prettier console-based interface. To take advantage of it, install the `dialog` package using your package manager (e.g. `sudo apt install dialog`).
- When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model, following this example:

```
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt # or my-favorite-model.vae.safetensors
```
### Known Bugs in 2.3.3
These are known bugs in the release.
The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
## v2.3.2 <small>(11 March 2023)</small>
This is a bugfix and minor feature release.
### Bugfixes
Since version 2.3.1 the following bugs have been fixed:
- Black images appearing for potential NSFW images when generating with legacy checkpoint models and both --no-nsfw_checker and --ckpt_convert turned on.
- Black images appearing when generating from models fine-tuned on Stable-Diffusion-2-1-base. When importing V2-derived models, you may be asked to select whether the model was derived from a "base" model (512 pixels) or the 768-pixel SD-2.1 model.
- The "Use All" button was not restoring the Hi-Res Fix setting on the WebUI.
- When using the model installer console app, models failed to import correctly when importing from directories with spaces in their names. A similar issue with the output directory was also fixed.
- Crashes that occurred during model merging.
- Restored previous naming of Stable Diffusion base and 768 models.
- Upgraded to the latest versions of the diffusers, transformers, safetensors and accelerate libraries upstream. We hope that this will fix the `assertion NDArray > 2**32` issue that MacOS users have had when generating images larger than 768x768 pixels. Please report back.
- As part of the upgrade to diffusers, the location of the diffusers-based models has changed from models/diffusers to models/hub. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your models/diffusers directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.
New "Invokeai-batch" script
### Invoke AI Batch
2.3.2 introduces a new command-line only script called invokeai-batch that can be used to generate hundreds of images from prompts and settings that vary systematically. This can be used to try the same prompt across multiple combinations of models, steps, CFG settings and so forth. It also allows you to template prompts and generate a combinatorial list like:
```
a shack in the mountains, photograph
a shack in the mountains, watercolor
a shack in the mountains, oil painting
a chalet in the mountains, photograph
a chalet in the mountains, watercolor
a chalet in the mountains, oil painting
a shack in the desert, photograph
...
```
If you have a system with multiple GPUs, or a single GPU with lots of VRAM, you can parallelize generation across the combinatorial set, reducing wait times and using your system's resources efficiently (make sure you have good GPU cooling).
To try invokeai-batch out, launch the "developer's console" using the invoke launcher script, or activate the invokeai virtual environment manually. From the console, give the command `invokeai-batch --help` to learn how the script works and create your first template file for dynamic prompt generation.
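Conceptually, the expansion is a cartesian product over lists of values. Here is a minimal sketch of the idea in plain Python (illustrative only; it is not invokeai-batch's actual template format or implementation):

```python
from itertools import product

# Hypothetical value lists matching the example expansion above.
subjects = ["a shack", "a chalet"]
places = ["in the mountains", "in the desert"]
styles = ["photograph", "watercolor", "oil painting"]

# Every combination of subject, place and style becomes one prompt.
for subject, place, style in product(subjects, places, styles):
    print(f"{subject} {place}, {style}")
```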
### Known Bugs in 2.3.2
These are known bugs in the release.
The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
Windows Defender will sometimes raise a Trojan alert for the codeformer.pth face restoration model. As far as we have been able to determine, this is a false positive and can be safely whitelisted.
## v2.3.1 <small>(22 February 2023)</small>
This is primarily a bugfix release, but it does provide several new features that will improve the user experience.
### Enhanced support for model management
InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.
There are three ways of accessing the model management features:
1. From the WebUI: click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.
2. Using the Model Installer App: choose option (5) *download and install models* from the invoke launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import. Command-line users can start this app using the command `invokeai-model-install`.
3. Using the Command Line Client (CLI): the `!install_model` and `!convert_model` commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs .ckpt and .safetensors files as-is; the second converts them into the faster diffusers format before installation.
Internally, InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do not need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into diffusers models the first time you use them.
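For example, a hypothetical CLI session (the URL and path here are placeholders, not real models):

```
invoke> !install_model https://example.com/models/some-model.safetensors
invoke> !convert_model /opt/sd-models/some-model.ckpt
```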
Please see INSTALLING MODELS for more information on model management.
### An Improved Installer Experience
The installer now launches a console-based UI for setting and changing commonly-used startup options:
After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features, and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching invoke.sh/invoke.bat and entering option (6) *change InvokeAI startup options*.
Command-line users can launch the new configure app using `invokeai-configure`.
This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch invoke.sh or invoke.bat and choose option (9) *update InvokeAI*. This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or to any released or unreleased version you choose by selecting the tag or branch of the desired version.
Command-line users can run this interface by typing `invokeai-update`.
### Image Symmetry Options
There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI by using the options `--h_symmetry_time_pct` and `--v_symmetry_time_pct` (these can be abbreviated to `--h_sym` and `--v_sym` like all other options).
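For instance, a hypothetical CLI invocation (assuming, as the option name suggests, that the value is the fraction of the generation steps after which mirroring begins):

```
invoke> "a symmetrical art deco facade, photograph" --h_symmetry_time_pct 0.5
```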
### A New Unified Canvas Look
This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select Use Canvas Beta Layout:
Refresh the screen and go to the Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:
### Model conversion and merging within the WebUI
The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh/invoke.bat scripts.
### An easier way to contribute translations to the WebUI
We have migrated our translation efforts to Weblate, a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief translation guide for more information on how to contribute.
### Bug Fixes
This release quashes multiple bugs and performance issues that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0 and using the compel library for prompt parsing. See the Detailed Change Log for a detailed list of bugs caught and squished.
Summary of InvokeAI command line scripts (all accessible via the launcher menu):

| Command | Description |
| --- | --- |
| `invokeai` | Command line interface |
| `invokeai --web` | Web interface |
| `invokeai-model-install` | Model installer with console forms-based front end |
| `invokeai-ti --gui` | Textual inversion, with a console forms-based front end |
| `invokeai-merge --gui` | Model merging, with a console forms-based front end |
| `invokeai-configure` | Startup configuration; can also be used to reinstall support models |
| `invokeai-update` | InvokeAI software updater |
### Known Bugs in 2.3.1
These are known bugs in the release.
MacOS users generating 768x768 pixel images or greater using diffusers models may experience a hard crash with `assertion NDArray > 2**32`. This appears to be an issue...
## v2.3.0 <small>(15 January 2023)</small>
**Transition to diffusers**
Version 2.3 provides support for both the traditional `.ckpt` weight
checkpoint files as well as the HuggingFace `diffusers` format. This
introduces several changes you should know about.
1. The models.yaml format has been updated. There are now two different types of configuration stanza. The traditional ckpt one will look like this, with a `format` of `ckpt` and a `weights` field that points to the absolute or ROOTDIR-relative location of the ckpt file.
```yaml
inpainting-1.5:
  description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
  repo_id: runwayml/stable-diffusion-inpainting
  format: ckpt
  width: 512
  height: 512
  weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
  config: configs/stable-diffusion/v1-inpainting-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
```
A configuration stanza for a diffusers model hosted at HuggingFace will look like this,
with a `format` of `diffusers` and a `repo_id` that points to the
repository ID of the model on HuggingFace:
```yaml
stable-diffusion-2.1:
  description: Stable Diffusion version 2.1 diffusers model (5.21 GB)
  repo_id: stabilityai/stable-diffusion-2-1
  format: diffusers
```
A configuration stanza for a diffusers model stored locally should look like this, with a `format` of `diffusers`, but a `path` field that points at the directory that contains `model_index.json`:
```yaml
waifu-diffusion:
  description: Latest waifu diffusion 1.4
  format: diffusers
  path: models/diffusers/hakurei-waifu-diffusion-1.4
```
2. In order of precedence, InvokeAI will now use `HF_HOME`, then `XDG_CACHE_HOME`, and finally default to `ROOTDIR/models` to store HuggingFace diffusers models.
Consequently, the format of the models directory has changed to mimic the HuggingFace cache directory. When `HF_HOME` and `XDG_CACHE_HOME` are not set, diffusers models are now automatically downloaded and retrieved from the directory `ROOTDIR/models/diffusers`, while other models are stored in the directory `ROOTDIR/models/hub`. This organization is the same as that used by HuggingFace for its cache management.
This allows you to share diffusers and ckpt model files easily with other machine learning applications that use the HuggingFace libraries. To do this, set the environment variable `HF_HOME` before starting up InvokeAI to tell it what directory to cache models in. To tell InvokeAI to use the standard HuggingFace cache directory, you would set `HF_HOME` like this (Linux/Mac):
`export HF_HOME=~/.cache/huggingface`
Both HuggingFace and InvokeAI will fall back to the `XDG_CACHE_HOME` environment variable if `HF_HOME` is not set; this path takes precedence over `ROOTDIR/models` to allow for the same sharing with other machine learning applications that use HuggingFace libraries.
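A minimal sketch of that precedence rule (illustrative only; this is not InvokeAI's actual code):

```python
import os
from pathlib import Path

def hf_cache_root(rootdir: Path) -> Path:
    """Resolve where HuggingFace models are cached, per the precedence above."""
    hf_home = os.environ.get("HF_HOME")
    if hf_home:  # HF_HOME wins when set
        return Path(hf_home)
    xdg_cache = os.environ.get("XDG_CACHE_HOME")
    if xdg_cache:  # next in line: XDG_CACHE_HOME
        return Path(xdg_cache) / "huggingface"
    return rootdir / "models"  # final fallback: ROOTDIR/models

print(hf_cache_root(Path.home() / "invokeai"))
```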
3. If you upgrade to InvokeAI 2.3.* from an earlier version, there
will be a one-time migration from the old models directory format
to the new one. You will see a message about this the first time
you start `invoke.py`.
4. Both the front and back ends of the model manager have been rewritten to accommodate diffusers. You can import models using their local file path, their URLs, or their HuggingFace repo_ids. On the command line, all these syntaxes work:
```
!import_model stabilityai/stable-diffusion-2-1-base
!import_model /opt/sd-models/sd-1.4.ckpt
!import_model https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/blob/main/PaperCut_v1.ckpt
```
**KNOWN BUGS (15 January 2023)**
1. On CUDA systems, the 768 pixel stable-diffusion-2.0 and stable-diffusion-2.1 models can only be run as `diffusers` models when the `xformers` library is installed and configured. Without `xformers`, InvokeAI returns black images.
2. Inpainting and outpainting have regressed in quality.
Both these issues are being actively worked on.
## v2.2.4 <small>(11 December 2022)</small>
**the `invokeai` directory**
Previously there were two directories to worry about, the directory that
contained the InvokeAI source code and the launcher scripts, and the `invokeai`
directory that contained the models files, embeddings, configuration and
outputs. With the 2.2.4 release, this dual system is done away with, and
everything, including the `invoke.bat` and `invoke.sh` launcher scripts, now
live in a directory named `invokeai`. By default this directory is located in
your home directory (e.g. `\Users\yourname` on Windows), but you can select
where it goes at install time.
After installation, you can delete the install directory (the one that the zip
file creates when it unpacks). Do **not** delete or move the `invokeai`
directory!
**Initialization file `invokeai/invokeai.init`**
You can place frequently-used startup options in this file, such as the default
number of steps or your preferred sampler. To keep everything in one place, this
file has now been moved into the `invokeai` directory and is named
`invokeai.init`.
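For example, a hypothetical `invokeai.init` (the option names are illustrative; the file simply collects the startup switches you would otherwise pass on the command line):

```
# invokeai.init -- frequently-used startup options (illustrative)
--steps=28
--sampler=k_lms
```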
**To update from Version 2.2.3**
The easiest route is to download and unpack one of the 2.2.4 installer files.
When it asks you for the location of the `invokeai` runtime directory, respond
with the path to the directory that contains your 2.2.3 `invokeai`. That is, if
`invokeai` lives at `C:\Users\fred\invokeai`, then answer with `C:\Users\fred`
and answer "Y" when asked if you want to reuse the directory.
The `update.sh` (`update.bat`) script that came with the 2.2.3 source installer
does not know about the new directory layout and won't be fully functional.
**To update to 2.2.5 (and beyond) there's now an update path**
As they become available, you can update to more recent versions of InvokeAI
using an `update.sh` (`update.bat`) script located in the `invokeai` directory.
Running it without any arguments will install the most recent version of
InvokeAI. Alternatively, you can install specific releases by running the `update.sh`
script with an argument in the command shell. This syntax accepts the path to
the desired release's zip file, which you can find by clicking on the green
"Code" button on this repository's home page.
**Other 2.2.4 Improvements**
- Fix InvokeAI GUI initialization by @addianto in #1687
- fix link in documentation by @lstein in #1728
- Fix broken link by @ShawnZhong in #1736
- Remove reference to binary installer by @lstein in #1731
- documentation fixes for 2.2.3 by @lstein in #1740
- Modify installer links to point closer to the source installer by @ebr in
#1745
- add documentation warning about 1650/60 cards by @lstein in #1753
- Fix Linux source URL in installation docs by @andybearman in #1756
- Make install instructions discoverable in readme by @damian0815 in #1752
- typo fix by @ofirkris in #1755
- Non-interactive model download (support HUGGINGFACE_TOKEN) by @ebr in #1578
- fix(srcinstall): shell installer - cp scripts instead of linking by @tildebyte
in #1765
- stability and usage improvements to binary & source installers by @lstein in
#1760
- fix off-by-one bug in cross-attention-control by @damian0815 in #1774
- Eventually update APP_VERSION to 2.2.3 by @spezialspezial in #1768
- invoke script cds to its location before running by @lstein in #1805
- Make PaperCut and VoxelArt models load again by @lstein in #1730
- Fix --embedding_directory / --embedding_path not working by @blessedcoolant in
#1817
- Clean up readme by @hipsterusername in #1820
- Optimized Docker build with support for external working directory by @ebr in
#1544
- disable pushing the cloud container by @mauwii in #1831
- Fix docker push github action and expand with additional metadata by @ebr in
#1837
- Fix Broken Link To Notebook by @VedantMadane in #1821
- Account for flat models by @spezialspezial in #1766
- Update invoke.bat.in isolate environment variables by @lynnewu in #1833
- Arch Linux Specific PatchMatch Instructions & fixing conda install on linux by
@SammCheese in #1848
- Make force free GPU memory work in img2img by @addianto in #1844
- New installer by @lstein
## v2.2.3 <small>(2 December 2022)</small>
!!! note

    This point release removes references to the binary installer from the installation guide. The binary installer is not stable at the current time. First time users are encouraged to use the "source" installer as described in [Installing InvokeAI with the Source Installer](installation/deprecated_documentation/INSTALL_SOURCE.md)
With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).
You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](features/UNIFIED_CANVAS.md). This new workflow is the
biggest enhancement added to the WebUI to date, and unlocks a stunning amount of
potential for users to create and iterate on their creations. The following
sections describe what's new for InvokeAI.
## v2.2.2 <small>(30 November 2022)</small>
!!! note

    The binary installer is not ready for prime time. First time users are recommended to install via the "source" installer accessible through the links at the bottom of this page.
With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).
You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](https://invoke-ai.github.io/InvokeAI/features/UNIFIED_CANVAS/).
This new workflow is the biggest enhancement added to the WebUI to date, and
unlocks a stunning amount of potential for users to create and iterate on their
creations. The following sections describe what's new for InvokeAI.
## v2.2.0 <small>(2 December 2022)</small>
With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).
You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](features/UNIFIED_CANVAS.md). This new workflow is the
biggest enhancement added to the WebUI to date, and unlocks a stunning amount of
potential for users to create and iterate on their creations. The following
sections describe what's new for InvokeAI.
## v2.1.3 <small>(13 November 2022)</small>
- A choice of installer scripts that automate installation and configuration.
See
[Installation](installation/INSTALLATION.md).
- A streamlined manual installation process that works for both Conda and
PIP-only installs. See
[Manual Installation](installation/020_INSTALL_MANUAL.md).
- The ability to save frequently-used startup options (model to load, steps,
sampler, etc) in a `.invokeai` file. See
[Client](deprecated/CLI.md)
- Support for AMD GPU cards (non-CUDA) on Linux machines.
- Multiple bugs and edge cases squashed.
## v2.1.0 <small>(2 November 2022)</small>
- update mac instructions to use invokeai for env name by @willwillems in #1030
- Update .gitignore by @blessedcoolant in #1040
- reintroduce fix for m1 from #579 missing after merge by @skurovec in #1056
- Update Stable_Diffusion_AI_Notebook.ipynb (Take 2) by @ChloeL19 in #1060
- Print out the device type which is used by @manzke in #1073
- Hires Addition by @hipsterusername in #1063
- fix for "1 leaked semaphore objects to clean up at shutdown" on M1 by
@skurovec in #1081
- Forward dream.py to invoke.py using the same interpreter, add deprecation
warning by @db3000 in #1077
- fix noisy images at high step counts by @lstein in #1086
- Generalize facetool strength argument by @db3000 in #1078
- Enable fast switching among models at the invoke> command line by @lstein in
#1066
- Fix Typo, committed changing ldm environment to invokeai by @jdries3 in #1095
- Update generate.py by @unreleased in #1109
- Update 'ldm' env to 'invokeai' in troubleshooting steps by @19wolf in #1125
- Fixed documentation typos and resolved merge conflicts by @rupeshs in #1123
- Fix broken doc links, fix malaprop in the project subtitle by @majick in #1131
- Only output facetool parameters if enhancing faces by @db3000 in #1119
- Update gitignore to ignore codeformer weights at new location by
@spezialspezial in #1136
- fix links to point to invoke-ai.github.io #1117 by @mauwii in #1143
- Rework-mkdocs by @mauwii in #1144
- add option to CLI and pngwriter that allows user to set PNG compression level
by @lstein in #1127
- Fix img2img DDIM index out of bound by @wfng92 in #1137
- Fix gh actions by @mauwii in #1128
- Add text prompt to inpaint mask support by @lstein in #1133
- Respect http[s] protocol when making socket.io middleware by @damian0815 in
#976
- WebUI: Adds Codeformer support by @psychedelicious in #1151
- Skips normalizing prompts for web UI metadata by @psychedelicious in #1165
- Add Asymmetric Tiling by @carson-katri in #1132
- Web UI: Increases max CFG Scale to 200 by @psychedelicious in #1172
- Corrects color channels in face restoration; Fixes #1167 by @psychedelicious
in #1175
- Flips channels using array slicing instead of using OpenCV by @psychedelicious
in #1178
- Fix typo in docs: s/Formally/Formerly by @noodlebox in #1176
- fix clipseg loading problems by @lstein in #1177
- Correct color channels in upscale using array slicing by @wfng92 in #1181
- Web UI: Filters existing images when adding new images; Fixes #1085 by
@psychedelicious in #1171
- fix a number of bugs in textual inversion by @lstein in #1190
- Improve !fetch, add !replay command by @ArDiouscuros in #882
- Fix generation of image with s>1000 by @holstvoogd in #951
- Web UI: Gallery improvements by @psychedelicious in #1198
- Update CLI.md by @krummrey in #1211
- outcropping improvements by @lstein in #1207
- add support for loading VAE autoencoders by @lstein in #1216
- remove duplicate fix_func for MPS by @wfng92 in #1210
- Metadata storage and retrieval fixes by @lstein in #1204
- nix: add shell.nix file by @Cloudef in #1170
- Web UI: Changes vite dist asset paths to relative by @psychedelicious in #1185
- Web UI: Removes isDisabled from PromptInput by @psychedelicious in #1187
- Allow user to generate images with initial noise as on M1 / mps system by
@ArDiouscuros in #981
- feat: adding filename format template by @plucked in #968
- Web UI: Fixes broken bundle by @psychedelicious in #1242
- Support runwayML custom inpainting model by @lstein in #1243
- Update IMG2IMG.md by @talitore in #1262
- New dockerfile - including a build- and a run- script as well as a GH-Action
by @mauwii in #1233
- cut over from karras to model noise schedule for higher steps by @lstein in
#1222
- Prompt tweaks by @lstein in #1268
- Outpainting implementation by @Kyle0654 in #1251
- fixing aspect ratio on hires by @tjennings in #1249
- Fix-build-container-action by @mauwii in #1274
- handle all unicode characters by @damian0815 in #1276
- adds models.user.yml to .gitignore by @JakeHL in #1281
- remove debug branch, set fail-fast to false by @mauwii in #1284
- Protect-secrets-on-pr by @mauwii in #1285
- Web UI: Adds initial inpainting implementation by @psychedelicious in #1225
- fix environment-mac.yml - tested on x64 and arm64 by @mauwii in #1289
- Use proper authentication to download model by @mauwii in #1287
- Prevent indexing error for mode RGB by @spezialspezial in #1294
- Integrate sd-v1-5 model into test matrix (easily expandable), remove
unecesarry caches by @mauwii in #1293
- add --no-interactive to configure_invokeai step by @mauwii in #1302
- 1-click installer and updater. Uses micromamba to install git and conda into a
contained environment (if necessary) before running the normal installation
script by @cmdr2 in #1253
- configure_invokeai.py script downloads the weight files by @lstein in #1290
## v2.0.1 <small>(13 October 2022)</small>
- fix noisy images at high step count when using k\* samplers
- dream.py script now calls invoke.py module directly rather than via a new
python process (which could break the environment)
## v2.0.0 <small>(9 October 2022)</small>
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains for
backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- img2img runs on all k\* samplers
- Support for
[negative prompts](features/PROMPTS.md#negative-and-unconditioned-prompts)
- Support for CodeFormer face reconstruction
- Support for Textual Inversion on Macintoshes
- Support in both WebGUI and CLI for
[post-processing of previously-generated images](features/POSTPROCESS.md)
using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E
infinite canvas), and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on `invoke>` line allows
[larger images to be created without duplicating elements](deprecated/CLI.md#this-is-an-example-of-txt2img),
at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control
variation during image generation (see
[Thresholding and Perlin Noise Initialization](features/OTHER.md#thresholding-and-perlin-noise-initialization-options))
- Extensive metadata now written into PNG files, allowing reliable regeneration
of images and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac
platforms.
- Improved [command-line completion behavior](deprecated/CLI.md) New commands
added:
- List command-line history with `!history`
- Search command-line history with `!search`
- Clear history with `!clear`
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will auto
configure. To switch away from auto use the new flag like
`--precision=float32`.
## v1.14 <small>(11 September 2022)</small>
- Memory optimizations for small-RAM cards. 512x512 now possible on 4 GB GPUs.
- Full support for Apple hardware with M1 or M2 chips.
- Add "seamless mode" for circular tiling of image. Generates beautiful effects.
([prixt](https://github.com/prixt)).
- Inpainting support.
- Improved web server GUI.
- Lots of code and documentation cleanups.
## v1.13 <small>(3 September 2022)</small>
- Support image variations (see [VARIATIONS](deprecated/VARIATIONS.md))
([Kevin Gibbons](https://github.com/bakkot) and many contributors and
reviewers)
- Supports a Google Colab notebook for a standalone server running on Google
hardware [Arturo Mendivil](https://github.com/artmen1516)
- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling
[Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation
[Kevin Gibbons](https://github.com/bakkot)
- A new configuration file scheme that allows new models (including upcoming
stable-diffusion-v1.5) to be added without altering the code.
([David Wager](https://github.com/maddavid12))
- Can specify --grid on invoke.py command line as the default.
- Miscellaneous internal bug and stability fixes.
- Works on M1 Apple hardware.
- Multiple bug fixes.
---
## v1.12 <small>(28 August 2022)</small>
- Improved file handling, including ability to read prompts from standard input.
(kudos to [Yunsaki](https://github.com/yunsaki))
- The web server is now integrated with the invoke.py script. Invoke by adding
--web to the invoke.py command arguments.
- Face restoration and upscaling via GFPGAN and Real-ESRGAN are now automatically
enabled if the GFPGAN directory is located as a sibling to Stable Diffusion.
VRAM requirements are modestly reduced. Thanks to both
[Blessedcoolant](https://github.com/blessedcoolant) and
[Oceanswave](https://github.com/oceanswave) for their work on this.
- You can now swap samplers on the invoke> command line.
[Blessedcoolant](https://github.com/blessedcoolant)
---
## v1.11 <small>(26 August 2022)</small>
- NEW FEATURE: Support upscaling and face enhancement using the GFPGAN module.
(kudos to [Oceanswave](https://github.com/Oceanswave))
- You can now specify a seed of -1 to use the previous image's seed, -2 to use
the seed for the image generated before that, etc. Seed memory only extends
back to the previous command, but will work on all images generated with the
-n# switch.
- Variant generation support temporarily disabled pending more general solution.
- Created a feature branch named **yunsaki-morphing-invoke** which adds
experimental support for iteratively modifying the prompt and its parameters.
Please see [Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86) for
a synopsis of how this works. Note that when this feature is eventually added
to the main branch, it may be modified significantly.
---
## v1.10 <small>(25 August 2022)</small>
- A barebones but fully functional interactive web server for online generation
of txt2img and img2img.
---
## v1.09 <small>(24 August 2022)</small>
- A new -v option allows you to generate multiple variants of an initial image
in img2img mode. (kudos to [Oceanswave](https://github.com/Oceanswave);
[see this discussion in the PR for examples and details on use](https://github.com/lstein/stable-diffusion/pull/71#issuecomment-1226700810))
- Added ability to personalize text to image generation (kudos to
[Oceanswave](https://github.com/Oceanswave) and
[nicolai256](https://github.com/nicolai256))
- Enabled all of the samplers from k_diffusion
---
## v1.08 <small>(24 August 2022)</small>
- Escape single quotes on the invoke> command before trying to parse. This
avoids parse errors.
- Removed instruction to get Python3.8 as first step in Windows install.
Anaconda3 does it for you.
- Added bounds checks for numeric arguments that could cause crashes.
- Cleaned up the copyright and license agreement files.
---
## v1.07 <small>(23 August 2022)</small>
- Image filenames will now never fill gaps in the sequence, but will be assigned
the next higher name in the chosen directory. This ensures that the alphabetic
and chronological sort orders are the same.
---
## v1.06 <small>(23 August 2022)</small>
- Added weighted prompt support contributed by
[xraxra](https://github.com/xraxra)
- Example of using weighted prompts to tweak a demonic figure contributed by
[bmaltais](https://github.com/bmaltais)
---
## v1.05 <small>(22 August 2022 - after the drop)</small>
- Filenames now use the following formats:

```
000010.95183149.png      -- Two files produced by the same command (e.g. -n2),
000010.26742632.png      -- distinguished by a different seed.

000011.455191342.01.png  -- Two files produced by the same command using
000011.455191342.02.png  -- a batch size >1 (e.g. -b2). They have the same seed.

000011.4160627868.grid#1-4.png -- a grid of four images (-g); the whole grid
                                  can be regenerated with the indicated key
```

- It should no longer be possible for one image to overwrite another.
- You can use the "cd" and "pwd" commands at the invoke> prompt to set and
retrieve the path of the output directory.
---
## v1.04 <small>(22 August 2022 - after the drop)</small>
- Updated README to reflect installation of the released weights.
- Suppressed very noisy and inconsequential warning when loading the frozen CLIP
tokenizer.
---
## v1.03 <small>(22 August 2022)</small>
- The original txt2img and img2img scripts from the CompViz repository have been
moved into a subfolder named "orig_scripts", to reduce confusion.
---
## v1.02 <small>(21 August 2022)</small>
- A copy of the prompt and all of its switches and options is now stored in the
corresponding image in a tEXt metadata field named "Dream". You can read the
prompt using scripts/images2prompt.py, or an image editor that allows you to
explore the full metadata. **Please run "conda env update" to load the k_lms
dependencies!!**
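As a sketch of what reading that metadata looks like with Pillow directly (the filename is illustrative):

```python
from PIL import Image

# PNG text chunks are exposed via the .text mapping on PNG images;
# InvokeAI stores the prompt and its switches under the "Dream" key.
image = Image.open("outputs/000010.95183149.png")
print(image.text.get("Dream"))
```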
---
## v1.01 <small>(21 August 2022)</small>
- added k_lms sampling. **Please run "conda env update" to load the k_lms
dependencies!!**
- use half precision arithmetic by default, resulting in faster execution and
  lower memory requirements. Pass argument `--full_precision` to invoke.py to get
  slower but more accurate image generation
---
## Links
- **[Read Me](index.md)**

View File

@ -1,21 +1,10 @@
# Local Development
If you want to contribute, you will need to set up a [local development environment](./dev-environment.md).
## Documentation
We use [mkdocs](https://www.mkdocs.org) for our documentation with the [material theme](https://squidfunk.github.io/mkdocs-material/). Documentation is written in markdown files under the `./docs` folder and then built into a static website for hosting with GitHub Pages at [invoke-ai.github.io/InvokeAI](https://invoke-ai.github.io/InvokeAI).
To contribute to the documentation you'll need to install the dependencies. Note
the use of `"`.
@ -50,6 +39,7 @@ and will be required for testing the changes you make to the code.
### Tests
See the [tests documentation](./TESTS.md) for information about running and writing tests.
### Reloading Changes
Experimenting with changes to the Python source code is a drag if you have to re-start the server —
@ -63,7 +53,6 @@ running server on the fly.
This will allow you to avoid restarting the server (and reloading models) in most cases, but there are some caveats; see
the [jurigged documentation](https://github.com/breuleux/jurigged#caveats) for details.
## Front End
<!--#TODO: get input from blessedcoolant here, for the moment inserted the frontend README via snippets extension.-->

View File

@ -1,47 +1,56 @@
# New Contributor Guide
If you're a new contributor to InvokeAI or Open Source Projects, this is the guide for you.
## New Contributor Checklist
- [x] Set up your local development environment & fork of InvokeAI by following [the steps outlined here](../dev-environment.md)
- [x] Set up your local tooling with [this guide](InvokeAI/contributing/LOCAL_DEVELOPMENT/#developing-invokeai-in-vscode). Feel free to skip this step if you already have tooling you're comfortable with.
- [x] Familiarize yourself with [Git](https://www.atlassian.com/git) & our project structure by reading through the [development documentation](development.md)
- [x] Join the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord
- [x] Choose an issue to work on! This can be achieved by asking in the #dev-chat channel, tackling a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) or finding an item on the [roadmap](https://github.com/orgs/invoke-ai/projects/7). If nothing in any of those places catches your eye, feel free to work on something of interest to you!
- [x] Make your first Pull Request with the guide below
- [x] Happy development! Don't be afraid to ask for help - we're happy to help you contribute!
## How do I make a contribution?
Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
Before starting these steps, ensure you have your local environment [configured for development](../LOCAL_DEVELOPMENT.md).
1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are set up for success.
2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
3. Clone the repository to your local machine using:

    ```bash
    git clone https://github.com/your-GitHub-username/InvokeAI.git
    ```

    If you're unfamiliar with using Git through the command line, [GitHub Desktop](https://desktop.github.com) is an easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface.

4. Create a new branch for your fix using:

    ```bash
    git checkout -b branch-name-here
    ```
5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:

    ```bash
    git add -A
    ```
7. Store the contents of the index with a descriptive message:

    ```bash
    git commit -m "Insert a short message of the changes made here"
    ```
8. Push the changes to the remote repository using:

    ```bash
    git push origin branch-name-here
    ```
9. Submit a pull request to the **main** branch of the InvokeAI repository. If you're not sure how to, [follow this guide](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)
10. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title a pull request like so: "Added more log outputting to resolve #1234".
11. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it!
@ -49,20 +58,20 @@ git push origin branch-name-here
13. Make changes to the pull request if the reviewer(s) recommend them.
14. Celebrate your success after your pull request is merged!
If you'd like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
## Best Practices
- Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
- Comments! Commenting your code helps reviewers easily understand your contribution
- Use Python and TypeScript's typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
- Make all communications public. This ensures knowledge is shared with the whole community
## **Where can I go for help?**
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
For frontend related work, **@psychedelicious** is the best person to reach out to.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@psychedelicious**.

View File

@ -0,0 +1,99 @@
# Dev Environment
To make changes to Invoke's backend, frontend, or documentation, you'll need to set up a dev environment.
If you just want to use Invoke, you should use the [installer][installer link].
!!! info "Why do I need the frontend toolchain?"

    The repo doesn't contain a build of the frontend. You'll be responsible for rebuilding it every time you pull in new changes, or run it in dev mode (which incurs a substantial performance penalty).
!!! warning

    Invoke uses a SQLite database. When you run the application as a dev install, you accept responsibility for your database. This means making regular backups (especially before pulling) and/or fixing it yourself in the event that a PR introduces a schema change.

    If you don't need to persist your db, you can use an ephemeral in-memory database by setting `use_memory_db: true` in your `invokeai.yaml` file. You'll also want to set `scan_models_on_startup: true` so that your models are registered on startup.
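As a concrete sketch, the two settings named in the warning above would go in `invokeai.yaml` like this (everything else in your config stays as-is):

```yaml
# invokeai.yaml -- ephemeral dev database, per the warning above
use_memory_db: true
scan_models_on_startup: true
```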
## Setup
1. Run through the [requirements][requirements link].
1. [Fork and clone][forking link] the [InvokeAI repo][repo link].
1. Create a directory for user data (images, models, db, etc). This is typically at `~/invokeai`, but if you already have a non-dev install, you may want to create a separate directory for the dev install.
1. Create a python virtual environment inside the directory you just created:
```sh
python3 -m venv .venv --prompt InvokeAI-Dev
```
1. Activate the venv (you'll need to do this every time you want to run the app):
```sh
source .venv/bin/activate
```
1. Install the repo as an [editable install][editable install link]:
```sh
pip install -e ".[dev,test,xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
Refer to the [manual installation][manual install link] instructions for help determining the correct install options. `xformers` is optional, but `dev` and `test` are not.
1. Install the frontend dev toolchain:
- [`nodejs`](https://nodejs.org/) (recommend v20 LTS)
- [`pnpm`](https://pnpm.io/installation#installing-a-specific-version) (must be v8 - not v9!)
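For example, one way to get the pinned `pnpm` version, assuming Node.js (and therefore `npm`) is already installed:
```sh
# Install pnpm v8 globally via npm - v9 is not supported, per the note above
npm install -g pnpm@8
```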
1. Do a production build of the frontend:
```sh
pnpm build
```
1. Start the application:
```sh
python scripts/invokeai-web.py
```
1. Access the UI at `localhost:9090`.
## Updating the UI
You'll need to run `pnpm build` every time you pull in new changes. Another option is to skip the build and instead run the app in dev mode:
```sh
pnpm dev
```
This starts a dev server at `localhost:5173`, which you will use instead of `localhost:9090`.
The dev mode is substantially slower than the production build but may be more convenient if you just need to test things out.
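A typical dev loop, assuming the setup above, looks something like this:
```sh
# Terminal 1: run the backend from the repo root (venv active)
python scripts/invokeai-web.py

# Terminal 2: run the frontend dev server from invokeai/frontend/web/
pnpm dev
```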
## Documentation
The documentation is built with `mkdocs`. To preview it locally, you need an additional set of packages installed.
```sh
# after activating the venv
pip install -e ".[docs]"
```
Then, you can start a live docs dev server, which will auto-refresh when you edit the docs:
```sh
mkdocs serve
```
On macOS and Linux, there is a `make` target for this:
```sh
make docs
```
[installer link]: ../installation/installer.md
[forking link]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo
[requirements link]: ../installation/requirements.md
[repo link]: https://github.com/invoke-ai/InvokeAI
[manual install link]: ../installation/manual.md
[editable install link]: https://pip.pypa.io/en/latest/cli/pip_install/#cmdoption-e

View File

@ -4,57 +4,42 @@ Invoke's UI is made possible by many contributors and open-source libraries. Tha
## Dev environment
### Setup
Follow the [dev environment](../dev-environment.md) guide to get set up. Run the UI using `pnpm dev`.
## Package scripts
- `dev`: run the frontend in dev mode, enabling hot reloading
- `build`: run all checks (dpdm, eslint, prettier, tsc, knip) and then build the frontend
- `lint:dpdm`: check circular dependencies
- `lint:eslint`: check code quality
- `lint:prettier`: check code formatting
- `lint:tsc`: check type issues
- `lint:knip`: check for unused exports or objects
- `lint`: run all checks concurrently
- `fix`: run `eslint` and `prettier`, fixing fixable issues
- `test:ui`: run `vitest` with the fancy web UI
## Type generation
We use [openapi-typescript] to generate types from the app's OpenAPI schema. The generated types are committed to the repo in [schema.ts].
If you make backend changes, it's important to regenerate the frontend types:
```sh
cd invokeai/frontend/web && python ../../../scripts/generate_openapi_schema.py | pnpm typegen
```
On macOS and Linux, you can run `make frontend-typegen` as a shortcut for the above snippet.
## Localization
We use [i18next] for localization, but translation to languages other than English happens on our [Weblate] project.
Only the English source strings (i.e. `en.json`) should be changed on this repo.
## VSCode
### Example debugger config
```jsonc
{
@ -71,7 +56,7 @@ Only the English source strings should be changed on this repo.
}
```
### Remote dev
We've noticed an intermittent timeout issue with the VSCode remote dev port forwarding.
@ -87,7 +72,7 @@ Thanks for your interest in contributing to the Invoke Web UI!
Please follow these guidelines when contributing.
## Check in before investing your time
Please check in before you invest your time on anything besides a trivial fix, in case it conflicts with ongoing work or isn't aligned with the vision for the app.
@ -95,7 +80,7 @@ If a feature request or issue doesn't already exist for the thing you want to wo
Ping `@psychedelicious` on [discord] in the `#frontend-dev` channel or in the feature request / issue you want to work on - we're happy to chat.
## Code conventions
- This is a fairly complex app with a deep component tree. Please use memoization (`useCallback`, `useMemo`, `memo`) with enthusiasm.
- If you need to add some global, ephemeral state, please use [nanostores] if possible.
@ -103,7 +88,7 @@ Ping `@psychedelicious` on [discord] in the `#frontend-dev` channel or in the fe
- Feel free to use `lodash` (via `lodash-es`) to make the intent of your code clear.
- Please add comments describing the "why", not the "how" (unless it is really arcane).
## Commit format
Please use the [conventional commits] spec for the web UI, with a scope of "ui":
@ -112,7 +97,7 @@ Please use the [conventional commits] spec for the web UI, with a scope of "ui":
- `feat(ui): add some cool new feature`
- `fix(ui): fix some bug`
## Tests
We don't do any UI testing at this time, but consider adding tests for sensitive logic.
@ -120,7 +105,7 @@ We use `vitest`, and tests should be next to the file they are testing. If the l
In some situations, we may want to test types. For example, if you use `zod` to create a schema that should match a generated type, it's best to add a test to confirm that the types match. Use `tsafe`'s assert for this.
## Submitting a PR
- Ensure your branch is tidy. Use an interactive rebase to clean up the commit history and reword the commit messages if they are not descriptive.
- Run `pnpm lint`. Some issues are auto-fixable with `pnpm fix`.
@ -133,12 +118,10 @@ In some situations, we may want to test types. For example, if you use `zod` to
- [Workflows - Design and Implementation]
- [State Management]
[discord]: https://discord.gg/ZmtBAhwWhy
[i18next]: https://github.com/i18next/react-i18next
[Weblate]: https://hosted.weblate.org/engage/invokeai/
[openapi-typescript]: https://github.com/openapi-ts/openapi-typescript
[schema.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/services/api/schema.ts
[conventional commits]: https://www.conventionalcommits.org/en/v1.0.0/
[Workflows - Design and Implementation]: ./workflows.md

View File

@ -2,7 +2,7 @@
!!! info "How to Reinstall"
Many issues can be resolved by re-installing the application. You won't lose any data by re-installing. We suggest downloading the [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest) and using it to re-install the application. Consult the [installer guide](./installation/installer.md) for more information.
When you run the installer, you'll have an option to select the version to install. If you aren't ready to upgrade, you can choose the current version to fix a broken install.
@ -92,7 +92,7 @@ Most example images with prompts that you'll find on the internet have been gene
During a zip file installation or an update, installation stops with an error like this:
![broken-dependency-screenshot](./assets/troubleshooting/broken-dependency.png){:width="800px"}
To resolve this, re-install the application as described above.
@ -164,7 +164,7 @@ If you encounter this error, the solution is to remove the package from the `pip
- Choose the developer console option
- Run this command: `pip cache remove controlnet_aux`
- Close the terminal window
- Download and run the [installer][latest release], selecting your current install location
## Out of Memory Issues
@ -196,7 +196,7 @@ tips to reduce the problem:
=== "12GB VRAM GPU"
This should be sufficient to generate larger images up to about 1280x1280.
## Checkpoint Models Load Slowly or Use Too Much RAM
The difference between diffusers models (a folder containing multiple
@ -251,10 +251,10 @@ To better understand how the `glibc` memory allocator works, see these reference
Note the differences between memory allocated as chunks in an arena vs. memory allocated with `mmap`. Under `glibc`'s default configuration, most model tensors get allocated as chunks in an arena making them vulnerable to the problem of fragmentation.
[model install docs]: ./installation/models.md
[system requirements]: ./installation/requirements.md
[latest release]: https://github.com/invoke-ai/InvokeAI/releases/latest
[create an issue]: https://github.com/invoke-ai/InvokeAI/issues
[discord]: https://discord.gg/ZmtBAhwWhy
[configuration docs]: ./configuration.md
[access token]: https://huggingface.co/docs/hub/security-tokens#how-to-manage-user-access-tokens

View File

@ -1,62 +0,0 @@
---
title: Overview
---
Here you can find the documentation for InvokeAI's various features.
## The [Getting Started Guide](../help/gettingStartedWithAI)
A getting started guide for those new to AI image generation.
## The Basics
### * The [Web User Interface](WEB.md)
Guide to the Web interface. Also see the [WebUI Hotkeys Reference Guide](WEBUIHOTKEYS.md)
### * The [Unified Canvas](UNIFIED_CANVAS.md)
Build complex scenes by combining and modifying multiple images in a stepwise
fashion. This feature combines img2img, inpainting and outpainting in
a single convenient digital artist-optimized user interface.
## Image Generation
### * [Prompt Engineering](PROMPTS.md)
Get the images you want with the InvokeAI prompt engineering language.
### * The [LoRA, LyCORIS, LCM-LoRA Models](CONCEPTS.md)
Add custom subjects and styles using a variety of fine-tuned models.
### * [ControlNet](CONTROLNET.md)
Learn how to install and use ControlNet models for fine control over
image output.
### * [Image-to-Image Guide](IMG2IMG.md)
Use a seed image to build new creations.
## Model Management
### * [Model Installation](../installation/050_INSTALLING_MODELS.md)
Learn how to import third-party models and switch among them. This
guide also covers optimizing models to load quickly.
### * [Merging Models](MODEL_MERGING.md)
Teach an old model new tricks. Merge 2-3 models together to create a
new model that combines characteristics of the originals.
### * [Textual Inversion](TEXTUAL_INVERSIONS.md)
Personalize models by adding your own style or subjects.
## Other Features
### * [The NSFW Checker](WATERMARK+NSFW.md)
Prevent InvokeAI from displaying unwanted racy images.
### * [Controlling Logging](LOGGING.md)
Control how InvokeAI logs status messages.
### * [Command-line Utilities](UTILITIES.md)
A list of the command-line utilities available with InvokeAI.
<!-- OUT OF DATE
### * [Miscellaneous](OTHER.md)
Run InvokeAI on Google Colab, generate images with repeating patterns,
batch process a file of prompts, increase the "creativity" of image
generation by adding initial noise, and more!
-->

View File

@ -1,37 +0,0 @@
# Developer Install
!!! warning
InvokeAI uses a SQLite database. By running on `main`, you accept responsibility for your database. This
means making regular backups (especially before pulling) and/or fixing it yourself in the event that a
PR introduces a schema change.
If you don't need persistent backend storage, you can use an ephemeral in-memory database by setting
`use_memory_db: true` in your `invokeai.yaml` file. You'll also want to set `scan_models_on_startup: true`
so that your models are registered on startup.
If this is untenable, you should run the application via the official installer or a manual install of the
python package from PyPI. These releases will not break your database.
If you have an interest in how InvokeAI works, or you would like to add features or bugfixes, you are encouraged to install the source code for InvokeAI.
!!! info "Why do I need the frontend toolchain?"
The repo doesn't contain a build of the frontend. You'll be responsible for rebuilding it (or running it in dev mode) to use the app, as described in the [frontend dev toolchain] docs.
<h2> Installation </h2>
1. [Fork and clone] the [InvokeAI repo].
1. Follow the [manual installation] docs to create a new virtual environment for the development install.
- Create a new folder outside the repo root for the installation and create the venv inside that folder.
- When installing the InvokeAI package, add `-e` to the command so you get an [editable install].
1. Install the [frontend dev toolchain] and do a production build of the UI as described.
1. You can now run the app as described in the [manual installation] docs.
As described in the [frontend dev toolchain] docs, you can run the UI using a dev server. If you do this, you won't need to continually rebuild the frontend. Instead, you run the dev server and use the app with the server URL it provides.
[Fork and clone]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo
[InvokeAI repo]: https://github.com/invoke-ai/InvokeAI
[frontend dev toolchain]: ../contributing/frontend/OVERVIEW.md
[manual installation]: ./020_INSTALL_MANUAL.md
[editable install]: https://pip.pypa.io/en/latest/cli/pip_install/#cmdoption-e

View File

@ -1,12 +1,10 @@
---
title: Docker
---
# :fontawesome-brands-docker: Docker
!!! warning "macOS users"
Docker can not access the GPU on macOS, so your generation speeds will be slow. Use the [installer](./installer.md) instead.
!!! tip "Linux and Windows Users"
@ -22,7 +20,7 @@ Ensure your Docker setup is able to use your GPU. Then:
docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
```
Once the container starts up, open <http://localhost:9090> in your browser, install some models, and start generating.
## Build-It-Yourself
@ -49,18 +47,18 @@ increase Swap and Disk image size too.
Set up your environment variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../configuration.md) for further detail.
At a minimum, you might want to set the `INVOKEAI_ROOT` environment variable
to point to the location where you wish to store your InvokeAI models, configuration, and outputs.
<figure markdown>
| Environment-Variable <img width="220" align="right"/> | Default value <img width="360" align="right"/> | Description |
| ----------------------------------------------------- | ---------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| `INVOKEAI_ROOT` | `~/invokeai` | **Required** - the location of your InvokeAI root directory. It will be created if it does not exist. |
| `HUGGING_FACE_HUB_TOKEN` | | InvokeAI will work without it, but some of the integrations with HuggingFace (like downloading from models from private repositories) may not work |
| `GPU_DRIVER` | `cuda` | Optionally change this to `rocm` to build the image for AMD GPUs. NOTE: Use the `build.sh` script to build the image for this to take effect. |
</figure>
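Putting the table together, an illustrative `docker/.env` might look like this (the values are examples, not requirements):
```sh
# docker/.env - example values only; adjust for your system
INVOKEAI_ROOT=~/invokeai
# HUGGING_FACE_HUB_TOKEN=<your token>   # optional; enables some HuggingFace integrations
# GPU_DRIVER=rocm                       # AMD GPUs only; requires building with build.sh
```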
@ -69,8 +67,8 @@ to point to the location where you wish to store your InvokeAI models, configura
Use the standard `docker compose build` command from within the `docker` directory.
If using an AMD GPU:
a: set the `GPU_DRIVER=rocm` environment variable in `docker-compose.yml` and continue using `docker compose build` as usual, or
b: set `GPU_DRIVER=rocm` in the `.env` file and use the `build.sh` script, provided for convenience
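For example, option b might look like this (a sketch, assuming you run it from the `docker` directory):
```sh
# Build the image for AMD GPUs using the convenience script
echo "GPU_DRIVER=rocm" >> .env
./build.sh
```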
#### Run the Container
@ -82,8 +80,8 @@ Once the container starts up (and configures the InvokeAI root directory if this
- Q: I am running on Windows under WSL2, and am seeing a "no such file or directory" error.
- A: Your `docker-entrypoint.sh` might have Windows (CRLF) line endings, depending on how you cloned the repository.
To solve this, change the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
(`Ctrl+P` and search for "line endings"), or by using the `dos2unix` utility in WSL.
Finally, you may delete `docker-entrypoint.sh` followed by `git pull; git checkout docker/docker-entrypoint.sh`
to reset the file to its most recent version.
For more information on this issue, see [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
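In practice, the fix amounts to something like the following (a sketch of the commands described above, run from the `docker` directory in WSL):
```sh
# Option 1: convert the file's line endings to LF
dos2unix docker-entrypoint.sh

# Option 2: reset the file to the repo's most recent version
rm docker-entrypoint.sh
git pull
git checkout docker/docker-entrypoint.sh
```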

View File

@ -1,48 +1,36 @@
# Installation and Updating Overview
Before installing, review the [installation requirements](./requirements.md) to ensure your system is set up properly.
See the [FAQ](../faq.md) for frequently-encountered installation issues.
If you need more help, join our [discord](https://discord.gg/ZmtBAhwWhy) or [create a GitHub issue](https://github.com/invoke-ai/InvokeAI/issues).
## Automated Installer & Updates
✅ The automated [installer](./installer.md) is the best way to install Invoke.
⬆️ The same installer is also the best way to update Invoke - simply rerun it for the same folder you installed to.
The installation process simply manages installation for the core libraries & application dependencies that run Invoke.
Models, images, or other assets in the Invoke root folder won't be affected by the installation process.
## Manual Install
If you are familiar with python and want more control over the packages that are installed, you can [install Invoke manually via PyPI](./manual.md).
Updates are managed by reinstalling the latest version through PyPI.
## Developer Install
If you want to contribute to InvokeAI, you'll need to set up a [dev environment](../contributing/dev-environment.md).
## Docker
Invoke publishes docker images. See the [docker installation guide](./docker.md) for details.
## Other Installation Guides
- [PyPatchMatch](./patchmatch.md)
- [Installing Models](./models.md)

View File

@ -74,12 +74,18 @@ Find the install location you selected earlier. Double-click the launcher script
Choose the first option to run the UI. After a series of startup messages, you'll see something like this:
```sh
Uvicorn running on http://127.0.0.1:9090 (Press CTRL+C to quit)
```
Copy the URL into your browser and you should see the UI.
## Improved Outpainting with PatchMatch
PatchMatch is an extra add-on that can improve outpainting. Windows users are in luck - it works out of the box.
On macOS and Linux, a few extra steps are needed to set it up. See the [PatchMatch installation guide](./patchmatch.md).
## First-time Setup
You will need to [install some models] before you can generate.
@ -100,10 +106,10 @@ Updating is exactly the same as installing - download the latest installer, choo
If you have installation issues, please review the [FAQ]. You can also [create an issue] or ask for help on [discord].
[installation requirements]: ./requirements.md
[FAQ]: ../faq.md
[install some models]: ./models.md
[configuration docs]: ../configuration.md
[latest release]: https://github.com/invoke-ai/InvokeAI/releases/latest
[create an issue]: https://github.com/invoke-ai/InvokeAI/issues
[discord]: https://discord.gg/ZmtBAhwWhy

View File

@ -1,40 +1,36 @@
# Manual Install
!!! warning "This is for Advanced Users"
!!! warning
**Python experience is mandatory.**
If you want to use Invoke locally, you should probably use the [installer](./installer.md).
If you want to contribute to Invoke, instead follow the [dev environment](../contributing/dev-environment.md) guide.
InvokeAI is distributed as a python package on PyPI, installable with `pip`. There are a few things that are handled by the installer and launcher that you'll need to manage manually, described in this guide.
## Requirements
Before you start, go through the [installation requirements](./requirements.md).
## Walkthrough
1. Create a directory to contain your InvokeAI library, configuration files, and models. This is known as the "runtime" or "root" directory, and typically lives in your home directory under the name `invokeai`.
=== "Linux/macOS"
```bash
mkdir ~/invokeai
```
=== "Windows (PowerShell)"
```bash
mkdir $Home/invokeai
```
1. Enter the root directory and create a virtual Python environment within it named `.venv`.
!!! warning "Virtual Environment Location"
@ -42,10 +38,19 @@ Before you start, go through the [installation requirements](./INSTALL_REQUIREME
If you choose a different location for the venv, then you _must_ set the `INVOKEAI_ROOT` environment variable or specify the root directory using the `--root` CLI arg.
=== "Linux/macOS"
```bash
cd ~/invokeai
python3 -m venv .venv --prompt InvokeAI
```
=== "Windows (PowerShell)"
```bash
cd $Home/invokeai
python3 -m venv .venv --prompt InvokeAI
```
1. Activate the new environment:
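For reference, the usual activation commands are as follows (a sketch; the Windows form is shown as a comment):
```sh
# Linux/macOS
source .venv/bin/activate

# Windows (PowerShell): .venv\Scripts\activate
```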
@ -63,13 +68,11 @@ Before you start, go through the [installation requirements](./INSTALL_REQUIREME
!!! info "Permissions Error (Windows)"
If you get a permissions error at this point, run this command and try again.
`Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser`
The command-line prompt should change to show `(InvokeAI)`, indicating the venv is active.
1. Make sure that pip is installed in your virtual environment and up to date:
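This is typically a single command (assuming a standard `pip` setup):
```sh
python -m pip install --upgrade pip
```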
@ -81,21 +84,17 @@ Before you start, go through the [installation requirements](./INSTALL_REQUIREME
- You may need to provide an [extra index URL](https://pip.pypa.io/en/stable/cli/pip_install/#cmdoption-extra-index-url). Select your platform configuration using [this tool on the PyTorch website](https://pytorch.org/get-started/locally/). Copy the `--extra-index-url` string from this and append it to your install command.
!!! example "Install with an extra index URL"
```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
- If you have a CUDA GPU and want to install with `xformers`, you need to add an option to the package name. Note that `xformers` is not necessary. PyTorch includes an implementation of the SDP attention algorithm with the same performance.
!!! example "Install with `xformers`"
```bash
pip install "InvokeAI[xformers]" --use-pep517
```
1. Deactivate and reactivate your venv so that the invokeai-specific commands become available in the environment:
=== "Linux/macOS"

View File

@ -1,4 +1,4 @@
# Models
## Checkpoint and Diffusers Models
@ -15,7 +15,7 @@ Today, there are thousands of models, fine tuned to excel at specific styles, ge
- `safetensors`: Single file, like `.ckpt` files. Prevents malware from lurking in a model.
- `diffusers`: Splits the model components into separate files, allowing very fast loading.
InvokeAI supports all three formats.
## Starter Models
@ -27,7 +27,7 @@ Some models carry license terms that limit their use in commercial applications
## Other Models
There are a few ways to install other models:
- **URL or Local Path**: Provide the path to a model on your computer, or a direct link to the model. Some sites require you to use an API token to download models, which you can [set up in the config file].
- **HuggingFace**: Paste a HF Repo ID to install it. If there are multiple models in the repo, you'll get a list to choose from. Repo IDs look like this: `XpucT/Deliberate`. There is a copy button on each repo to copy the ID.
@ -49,4 +49,4 @@ In this situation, you may need to provide some additional information to identi
Add `:v2` to the repo ID and use that when installing the model: `monster-labs/control_v1p_sd15_qrcode_monster:v2`
[set up in the config file]: ../configuration.md#model-marketplace-api-keys

View File

@ -2,14 +2,25 @@
title: Installing PyPatchMatch
---
# :material-image-size-select-large: Installing PyPatchMatch
PatchMatch is an algorithm used to infill images. It can greatly improve outpainting results. PyPatchMatch is a python wrapper around a C++ implementation of the algorithm.
## Why Use PatchMatch
In the context of image generation, "outpainting" refers to filling in a transparent area using AI-generated image data. But the AI can't generate without some initial data. We need to first fill in the transparent area with _something_.
The first step in "outpainting", then, is to fill in the transparent area with something. Generally, you get better results when that initial infill resembles the rest of the image.
Because PatchMatch generates image data so similar to the rest of the image, it works very well as the first step in outpainting, typically producing better results than other infill methods supported by Invoke (e.g. LaMA, cv2 infill, random tiles).
### Performance Caveat
PatchMatch is CPU-bound, and the amount of time it takes increases proportionally as the infill area increases. While the numbers certainly vary depending on system specs, you can expect a noticeable slowdown once you start infilling areas around 512x512 pixels. 1024x1024 pixels can take several seconds to infill.
## Installation
Unfortunately, installation can be somewhat challenging, as it requires some things that `pip` cannot install for you.
## Windows
@ -34,76 +45,76 @@ Prior to installing PyPatchMatch, you need to take the following steps:
1. Install the `build-essential` tools:
```sh
sudo apt update
sudo apt install build-essential
```
2. Install `opencv`:
```sh
sudo apt install python3-opencv libopencv-dev
```
3. Activate the environment you use for invokeai, either with `conda` or with a
virtual environment.
4. Install pypatchmatch:
```sh
pip install pypatchmatch
```
5. Confirm that pypatchmatch is installed. At the command-line prompt enter `python`, and then at the `>>>` line type `from patchmatch import patch_match`. It should look like the following:
```py
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from patchmatch import patch_match
Compiling and loading c extensions from "/home/lstein/Projects/InvokeAI/.invokeai-env/src/pypatchmatch/patchmatch".
rm -rf build/obj libpatchmatch.so
mkdir: created directory 'build/obj'
mkdir: created directory 'build/obj/csrc/'
[dep] csrc/masked_image.cpp ...
[dep] csrc/nnf.cpp ...
[dep] csrc/inpaint.cpp ...
[dep] csrc/pyinterface.cpp ...
[CC] csrc/pyinterface.cpp ...
[CC] csrc/inpaint.cpp ...
[CC] csrc/nnf.cpp ...
[CC] csrc/masked_image.cpp ...
[link] libpatchmatch.so ...
```
### Arch Based Distros
1. Install the `base-devel` package:
```sh
sudo pacman -Syu
sudo pacman -S --needed base-devel
```
2. Install `opencv` and `blas`:
```sh
sudo pacman -S opencv blas
```
or for CUDA support
```sh
sudo pacman -S opencv-cuda blas
```
3. Fix the naming of the `opencv` package configuration file:
```sh
cd /usr/lib/pkgconfig/
ln -sf opencv4.pc opencv.pc
```
[**Next, Follow Steps 4-6 from the Debian Section above**](#linux)

View File

@ -57,7 +57,7 @@ to footgun.
In v4.0.0, this object has been refactored to be much simpler.
See the [invocation API docs](./invocation-api.md) for full details of the API.
!!! warning ""

View File

@ -1,30 +1,33 @@
# Invocation API
Each invocation's `invoke` method is provided a single arg - the Invocation Context.
This object provides an API the invocation can use to interact with application services, for example:
!!! warning ""
This API may shift slightly until the release of v4.0.0 as we work through a few final updates to the Model Manager.
- Saving images
- Logging messages
- Loading models
```py
class MyInvocation(BaseInvocation):
...
def invoke(self, context: InvocationContext) -> ImageOutput:
# Load an image
image_pil = context.images.get_pil(self.image.image_name)
# Do something to the image
output_image = do_something_cool(image_pil)
# Save the image
image_dto = context.images.save(output_image)
# Log a message
context.logger.info(f"Did something cool, image saved!")
# Return the output
return ImageOutput.build(image_dto)
...
```
The full API is documented below.
## Mixins
Two important mixins are provided to facilitate working with metadata and gallery boards.

View File

@ -1,6 +1,7 @@
# Nodes
## What are Nodes?
A node is simply a single operation that takes in inputs and returns outputs. Multiple nodes can be linked together to create more complex functionality. All InvokeAI features are added through nodes.
@ -13,14 +14,10 @@ Individual nodes are made up of the following:
- Outputs: Edge points on the right side of the node window where you connect to inputs on other nodes.
- Options: Various options which are either manually configured, or overridden by connecting an output from another node to the input.
With nodes, you can easily extend the image generation capabilities of InvokeAI and build workflows that suit your needs.
To get started with nodes, take a look at some of our examples for [common workflows](../nodes/exampleWorkflows.md)
You can read more about nodes and the node editor [here](../nodes/NODES.md).
## Downloading New Nodes
To download a new node, visit our list of [Community Nodes](../nodes/communityNodes.md). These are nodes that have been created by the community, for the community.

View File

@ -84,12 +84,12 @@ plugins:
enable_creation_date: true
- redirects:
redirect_maps:
'installation/INSTALL_AUTOMATED.md': 'installation/installer.md'
'installation/INSTALL_MANUAL.md': 'installation/manual.md'
'installation/INSTALL_SOURCE.md': 'installation/manual.md'
'installation/INSTALL_DOCKER.md': 'installation/docker.md'
'installation/INSTALLING_MODELS.md': 'installation/models.md'
'installation/INSTALL_PATCHMATCH.md': 'installation/patchmatch.md'
- mkdocstrings:
handlers:
python:
@ -112,61 +112,41 @@ extra:
nav:
- Home: 'index.md'
- Installation:
- Overview: 'installation/index.md'
- Requirements: 'installation/requirements.md'
- Installer: 'installation/installer.md'
- Manual Install: 'installation/manual.md'
- Docker: 'installation/docker.md'
- PatchMatch: 'installation/patchmatch.md'
- Models: 'installation/models.md'
- Workflows & Nodes:
- Nodes Overview: 'nodes/overview.md'
- Workflow Editor Basics: 'nodes/NODES.md'
- List of Default Nodes: 'nodes/defaultNodes.md'
- Example Workflows: 'nodes/exampleWorkflows.md'
- ComfyUI to InvokeAI: 'nodes/comfyToInvoke.md'
- Facetool Node: 'nodes/detailedNodes/faceTools.md'
- Contributing Nodes: 'nodes/contributingNodes.md'
- Migrating from v3 to v4: 'nodes/NODES_MIGRATION_V3_V4.md'
- Community Nodes: 'nodes/communityNodes.md'
- Invocation API: 'nodes/invocation-api.md'
- Configuration: 'configuration.md'
- Features:
- Database: 'features/database.md'
- Contributing:
- Overview: 'contributing/index.md'
- Code of Conduct: 'CODE_OF_CONDUCT.md'
- Dev Environment: 'contributing/dev-environment.md'
- Development:
- Overview: 'contributing/contribution_guides/development.md'
- New Contributors: 'contributing/contribution_guides/newContributorChecklist.md'
- Model Manager v2: 'contributing/MODEL_MANAGER.md'
- Frontend Documentation: 'contributing/contribution_guides/contributingToFrontend.md'
- Testing: 'contributing/TESTS.md'
- Frontend:
- Overview: 'contributing/frontend/index.md'
- State Management: 'contributing/frontend/state-management.md'
- Workflows - Design and Implementation: 'contributing/frontend/workflows.md'
- Documentation: 'contributing/contribution_guides/documentation.md'
- Nodes: 'contributing/INVOCATIONS.md'
- Model Manager: 'contributing/MODEL_MANAGER.md'
@ -175,9 +155,6 @@ nav:
- Tutorials: 'contributing/contribution_guides/tutorials.md'
- Help:
- Getting Started: 'help/gettingStartedWithAI.md'
- Diffusion Overview: 'help/diffusion.md'
- Sampler Convergence: 'help/SAMPLER_CONVERGENCE.md'
- FAQ: 'faq.md'