Text generation web UI
A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.
Its goal is to become the AUTOMATIC1111/stable-diffusion-webui of text generation.
Features
- Switch between different models using a dropdown menu.
- Notebook mode that resembles OpenAI's playground.
- Chat mode for conversation and role playing.
- Generate nice HTML output for GPT-4chan.
- Generate Markdown output for GALACTICA, including LaTeX support.
- Support for Pygmalion and custom characters in JSON or TavernAI Character Card formats (FAQ); a sketch of the JSON format is shown after this list.
- Advanced chat features (send images, get audio responses with TTS).
- Stream the text output in real time very efficiently.
- Load parameter presets from text files.
- Load large models in 8-bit mode.
- Split large models across your GPU(s), CPU, and disk.
- CPU mode.
- FlexGen offload.
- DeepSpeed ZeRO-3 offload.
- Get responses via API, with or without streaming.
- LLaMA model, including 4-bit mode.
- RWKV model.
- Supports LoRAs.
- Supports softprompts.
- Supports extensions.
- Works on Google Colab.
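As an illustration of the custom character support mentioned above, here is a minimal sketch of a JSON character file. The field names (char_name, char_persona, char_greeting, world_scenario, example_dialogue) are assumptions based on Pygmalion-style character files; check the bundled example in the characters folder for the exact schema.

```json
{
  "char_name": "Assistant",
  "char_persona": "A concise, helpful assistant.",
  "char_greeting": "Hello! How can I help you today?",
  "world_scenario": "A simple question-and-answer chat.",
  "example_dialogue": "You: What is the capital of France?\nAssistant: The capital of France is Paris."
}
```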
Installation
The recommended installation methods are the following:
- Linux and MacOS: using conda natively.
- Windows: using conda on WSL (WSL installation guide).
Conda can be downloaded here: https://docs.conda.io/en/latest/miniconda.html
On Linux or WSL, it can be automatically installed with these two commands:
curl -sL "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" > "Miniconda3.sh"
bash Miniconda3.sh
Source: https://educe-ubc.github.io/conda.html
1. Create a new conda environment
conda create -n textgen python=3.10.9
conda activate textgen
2. Install PyTorch
System | GPU | Command |
---|---|---|
Linux/WSL | NVIDIA | conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia |
Linux | AMD | pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2 |
MacOS + MPS (untested) | Any | conda install pytorch torchvision torchaudio -c pytorch |
The up-to-date commands can be found here: https://pytorch.org/get-started/locally/.
MacOS users, refer to the comments here: https://github.com/oobabooga/text-generation-webui/pull/393
3. Install the web UI
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
If you experience bitsandbytes issues on WSL while trying to use --load-in-8bit, see this thread: https://github.com/microsoft/WSL/issues/5548#issuecomment-1292858815
Alternative: native Windows installation
As an alternative to the recommended WSL method, you can install the web UI natively on Windows by following this guide: Installation instructions for human beings. The process is more involved, and performance may be slower.
Alternative: one-click installers
Just download the zip above, extract it, and double click on "install". The web UI and all its dependencies will be installed in the same folder.
- To download a model, double click on "download-model"
- To start the web UI, double click on "start-webui"
Source code: https://github.com/oobabooga/one-click-installers
This method lags behind the newest developments and does not support 8-bit mode on Windows without additional setup: https://github.com/oobabooga/text-generation-webui/issues/147#issuecomment-1456040134, https://github.com/oobabooga/text-generation-webui/issues/20#issuecomment-1411650652
Alternative: Docker
https://github.com/oobabooga/text-generation-webui/issues/174, https://github.com/oobabooga/text-generation-webui/issues/87
Downloading models
Models should be placed inside the models folder.
Hugging Face is the main place to download models.
You can automatically download a model from HF using the script download-model.py:
python download-model.py organization/model
For instance:
python download-model.py facebook/opt-1.3b
If you want to download a model manually, note that all you need are the json, txt, and pytorch*.bin (or model*.safetensors) files. The remaining files are not necessary.
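For example, a manual download of facebook/opt-1.3b could look roughly like the following sketch, which uses Hugging Face's standard resolve URLs. The folder name and the exact list of files are assumptions; the files present vary from model to model, so check the model's "Files" page.

```
mkdir -p models/opt-1.3b
cd models/opt-1.3b
# only the json, txt, and pytorch*.bin (or model*.safetensors) files are needed
wget https://huggingface.co/facebook/opt-1.3b/resolve/main/config.json
wget https://huggingface.co/facebook/opt-1.3b/resolve/main/tokenizer_config.json
wget https://huggingface.co/facebook/opt-1.3b/resolve/main/special_tokens_map.json
wget https://huggingface.co/facebook/opt-1.3b/resolve/main/vocab.json
wget https://huggingface.co/facebook/opt-1.3b/resolve/main/merges.txt
wget https://huggingface.co/facebook/opt-1.3b/resolve/main/pytorch_model.bin
```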
GPT-4chan
GPT-4chan has been removed from Hugging Face, so you need to download it elsewhere. You have two options:
The 32-bit version is only relevant if you intend to run the model in CPU mode. Otherwise, you should use the 16-bit version.
After downloading the model, follow these steps:
- Place the files under models/gpt4chan_model_float16 or models/gpt4chan_model.
- Place GPT-J 6B's config.json file in that same folder: config.json.
- Download GPT-J 6B's tokenizer files (they will be automatically detected when you attempt to load GPT-4chan):
python download-model.py EleutherAI/gpt-j-6B --text-only
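Put together, the steps above could look roughly like this sketch. It assumes you have already downloaded the 16-bit GPT-4chan files, and that GPT-J 6B's config.json is fetched from the standard Hugging Face raw-file path.

```
mkdir -p models/gpt4chan_model_float16
# copy the downloaded GPT-4chan files into models/gpt4chan_model_float16, then:
wget -O models/gpt4chan_model_float16/config.json https://huggingface.co/EleutherAI/gpt-j-6B/raw/main/config.json
python download-model.py EleutherAI/gpt-j-6B --text-only
```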
Starting the web UI
conda activate textgen
cd text-generation-webui
python server.py
Then browse to
http://localhost:7860/?__theme=dark
Optionally, you can use the following command-line flags:
Flag | Description |
---|---|
-h, --help | Show this help message and exit. |
--model MODEL | Name of the model to load by default. |
--lora LORA | Name of the LoRA to apply to the model by default. |
--notebook | Launch the web UI in notebook mode, where the output is written to the same text box as the input. |
--chat | Launch the web UI in chat mode. |
--cai-chat | Launch the web UI in chat mode with a style similar to Character.AI's. If the file img_bot.png or img_bot.jpg exists in the same folder as server.py, this image will be used as the bot's profile picture. Similarly, img_me.png or img_me.jpg will be used as your profile picture. |
--cpu | Use the CPU to generate text. |
--load-in-8bit | Load the model with 8-bit precision. |
--load-in-4bit | DEPRECATED: use --gptq-bits 4 instead. |
--gptq-bits GPTQ_BITS | Load a pre-quantized model with the specified precision in bits. 2, 3, 4 and 8 are supported. Currently only works with LLaMA and OPT. |
--gptq-model-type MODEL_TYPE | Model type of the pre-quantized model. Currently only LLaMA and OPT are supported. |
--bf16 | Load the model with bfloat16 precision. Requires an NVIDIA Ampere GPU. |
--auto-devices | Automatically split the model across the available GPU(s) and CPU. |
--disk | If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk. |
--disk-cache-dir DISK_CACHE_DIR | Directory to save the disk cache to. Defaults to cache/. |
--gpu-memory GPU_MEMORY [GPU_MEMORY ...] | Maximum GPU memory in GiB to be allocated per GPU. Example: --gpu-memory 10 for a single GPU, --gpu-memory 10 5 for two GPUs. |
--cpu-memory CPU_MEMORY | Maximum CPU memory in GiB to allocate for offloaded weights. Must be an integer. Defaults to 99. |
--flexgen | Enable the use of FlexGen offloading. |
--percent PERCENT [PERCENT ...] | FlexGen: allocation percentages. Must be 6 numbers separated by spaces (default: 0, 100, 100, 0, 100, 0). |
--compress-weight | FlexGen: whether to compress the weights (default: False). |
--pin-weight [PIN_WEIGHT] | FlexGen: whether to pin weights (setting this to False reduces CPU memory by 20%). |
--deepspeed | Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration. |
--nvme-offload-dir NVME_OFFLOAD_DIR | DeepSpeed: Directory to use for ZeRO-3 NVMe offloading. |
--local_rank LOCAL_RANK | DeepSpeed: Optional argument for distributed setups. |
--rwkv-strategy RWKV_STRATEGY | RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8". |
--rwkv-cuda-on | RWKV: Compile the CUDA kernel for better performance. |
--no-stream | Don't stream the text output in real time. |
--settings SETTINGS_FILE | Load the default interface settings from this json file. See settings-template.json for an example. If you create a file called settings.json, it will be loaded by default without the need to use the --settings flag. |
--extensions EXTENSIONS [EXTENSIONS ...] | The list of extensions to load. If you want to load more than one extension, write the names separated by spaces. |
--listen | Make the web UI reachable from your local network. |
--listen-port LISTEN_PORT | The listening port that the server will use. |
--share | Create a public URL. This is useful for running the web UI on Google Colab or similar. |
--auto-launch | Open the web UI in the default browser upon launch. |
--verbose | Print the prompts to the terminal. |
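For example, to start the UI in chat mode with the model downloaded earlier, splitting it automatically across your GPU and CPU, you might run something like the sketch below. The folder name opt-1.3b is an assumption; use whatever folder name the model actually has under models/.

```
conda activate textgen
cd text-generation-webui
python server.py --chat --model opt-1.3b --auto-devices --gpu-memory 10
```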
Out of memory errors? Check this guide.
Presets
Inference settings presets can be created under presets/ as text files. These files are detected automatically at startup.
By default, 10 presets by NovelAI and KoboldAI are included. These were selected out of a sample of 43 presets after applying a K-Means clustering algorithm and selecting the elements closest to the average of each cluster.
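To add your own preset, drop a new text file into presets/. As a purely hypothetical illustration of what such a file might contain (the parameter names below are assumptions; open one of the bundled preset files to see the exact format the UI expects):

```
do_sample=True
temperature=0.7
top_p=0.9
top_k=40
repetition_penalty=1.15
```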
System requirements
Check the wiki for some examples of VRAM and RAM usage in both GPU and CPU mode.
Contributing
Pull requests, suggestions, and issue reports are welcome.
Before reporting a bug, make sure that you have:
- Created a conda environment and installed the dependencies exactly as in the Installation section above.
- Searched to see if an issue already exists for the problem you encountered.
Credits
- Gradio dropdown menu refresh button, code for reloading the interface: https://github.com/AUTOMATIC1111/stable-diffusion-webui
- Verbose preset: Anonymous 4chan user.
- NovelAI and KoboldAI presets: https://github.com/KoboldAI/KoboldAI-Client/wiki/Settings-Presets
- Pygmalion preset, code for early stopping in chat mode, code for some of the sliders, --chat mode colors: https://github.com/PygmalionAI/gradio-ui/