# Text generation web UI
A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) of text generation.
|![Image1](https://github.com/oobabooga/screenshots/raw/main/qa.png) | ![Image2](https://github.com/oobabooga/screenshots/raw/main/cai3.png) |
|:---:|:---:|
|![Image3](https://github.com/oobabooga/screenshots/raw/main/gpt4chan.png) | ![Image4](https://github.com/oobabooga/screenshots/raw/main/galactica.png) |
## Features
* 3 interface modes: default, notebook, and chat
* Multiple model backends: transformers, llama.cpp, AutoGPTQ, GPTQ-for-LLaMa, ExLlama, RWKV, FlexGen
* Dropdown menu for quickly switching between different models
* LoRA: load and unload LoRAs on the fly, load multiple LoRAs at the same time, train a new LoRA
* Precise instruction templates for chat mode, including Alpaca, Vicuna, Open Assistant, Dolly, Koala, ChatGLM, MOSS, RWKV-Raven, Galactica, StableLM, WizardLM, Baize, Ziya, Chinese-Vicuna, MPT, INCITE, Wizard Mega, KoAlpaca, Vigogne, Bactrian, h2o, and OpenBuddy
* [Multimodal pipelines, including LLaVA and MiniGPT-4](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimodal)
* 8-bit and 4-bit inference through bitsandbytes
* CPU mode for transformers models
* [DeepSpeed ZeRO-3 inference](docs/DeepSpeed.md)
* [Extensions](docs/Extensions.md)
* [Custom chat characters](docs/Chat-mode.md)
* Very efficient text streaming
* Markdown output with LaTeX rendering, to use for instance with [GALACTICA](https://github.com/paperswithcode/galai)
* Nice HTML output for GPT-4chan
* API, including endpoints for websocket streaming ([see the examples](https://github.com/oobabooga/text-generation-webui/blob/main/api-examples))

To learn how to use the various features, check out the Documentation: https://github.com/oobabooga/text-generation-webui/tree/main/docs
## Installation
### One-click installers
| Windows | Linux | macOS | WSL |
|--------|--------|--------|--------|
| [oobabooga-windows.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_windows.zip) | [oobabooga-linux.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_linux.zip) |[oobabooga-macos.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_macos.zip) | [oobabooga-wsl.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_wsl.zip) |
Just download the zip above, extract it, and double-click on "start". The web UI and all its dependencies will be installed in the same folder.
* The source code is here: https://github.com/oobabooga/one-click-installers
* There is no need to run the installers as admin.
* AMD GPUs are not supported on Windows.
* Huge thanks to [@jllllll](https://github.com/jllllll), [@ClayShoaf](https://github.com/ClayShoaf), and [@xNul](https://github.com/xNul) for their contributions to these installers.
### Manual installation using Conda
Recommended if you have some experience with the command line.
#### 0. Install Conda
https://docs.conda.io/en/latest/miniconda.html
On Linux or WSL, it can be automatically installed with these two commands:
```
curl -sL "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" > "Miniconda3.sh"
bash Miniconda3.sh
```
Source: https://educe-ubc.github.io/conda.html
#### 1. Create a new conda environment
```
conda create -n textgen python=3.10.9
conda activate textgen
```
#### 2. Install PyTorch
| System | GPU | Command |
|--------|---------|---------|
| Linux/WSL | NVIDIA | `pip3 install torch torchvision torchaudio` |
| Linux | AMD | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2` |
| MacOS + MPS (untested) | Any | `pip3 install torch torchvision torchaudio` |
| Windows | NVIDIA | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117` |
The up-to-date commands can be found here: https://pytorch.org/get-started/locally/.
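After installing, you can optionally run a quick sanity check to confirm that PyTorch sees your GPU (a minimal check, not part of the official instructions):

```
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

If the second value is `False` on an NVIDIA system, a CPU-only build of PyTorch was most likely installed.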
#### 2.1 Special instructions
* MacOS users: https://github.com/oobabooga/text-generation-webui/pull/393
* AMD users: https://rentry.org/eq3hg
#### 3. Install the web UI
```
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
```
#### llama.cpp with GPU acceleration
Requires the additional compilation step described here: [GPU acceleration](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md#gpu-acceleration).
#### bitsandbytes
bitsandbytes >= 0.39 may not work on older NVIDIA GPUs. In that case, to use `--load-in-8bit`, you may have to downgrade like this:
* Linux: `pip install bitsandbytes==0.38.1`
* Windows: `pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl`
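To confirm which bitsandbytes version actually ended up installed (for example after a downgrade), you can check with pip:

```
pip show bitsandbytes
```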
### Alternative: Docker
```
ln -s docker/{Dockerfile,docker-compose.yml,.dockerignore} .
cp docker/.env.example .env
# Edit .env and set TORCH_CUDA_ARCH_LIST based on your GPU model
docker compose up --build
```
* You need to have docker compose v2.17 or higher installed. See [this guide](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Docker.md) for instructions.
* For additional docker files, check out [this repository](https://github.com/Atinoda/text-generation-webui-docker).
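As an illustration, the line to edit in `.env` might look like this (the value `8.6` assumes an RTX 30-series GPU; use your own card's compute capability, and see `docker/.env.example` for the remaining variables):

```
# Compute capability of the target GPU (8.6 = RTX 30-series; adjust for your hardware)
TORCH_CUDA_ARCH_LIST=8.6
```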
### Updating the requirements
From time to time, the `requirements.txt` changes. To update, use this command:
```
conda activate textgen
cd text-generation-webui
pip install -r requirements.txt --upgrade
```
## Downloading models
Models should be placed inside the `models/` folder.
[Hugging Face](https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads) is the main place to download models. These are some examples:
* [Pythia](https://huggingface.co/models?sort=downloads&search=eleutherai%2Fpythia+deduped)
* [OPT](https://huggingface.co/models?search=facebook/opt)
* [GALACTICA](https://huggingface.co/models?search=facebook/galactica)
* [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)
You can automatically download a model from HF using the script `download-model.py`:
```
python download-model.py organization/model
```
For example:
```
python download-model.py facebook/opt-1.3b
```
To download a protected model, set env vars `HF_USER` and `HF_PASS` to your Hugging Face username and password (or [User Access Token](https://huggingface.co/settings/tokens)). The model's terms must first be accepted on the HF website.
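For example (a sketch only: the username, token, and model name below are placeholders):

```
export HF_USER=your-username
export HF_PASS=your-access-token   # a User Access Token also works here
python download-model.py organization/gated-model
```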
#### GGML models
You can drop these directly into the `models/` folder, making sure that the file name contains `ggml` somewhere and ends in `.bin`.
#### GPT-4chan
<details>
<summary>
Instructions
</summary>
[GPT-4chan](https://huggingface.co/ykilcher/gpt-4chan) has been shut down from Hugging Face, so you need to download it elsewhere. You have two options:
* Torrent: [16-bit](https://archive.org/details/gpt4chan_model_float16) / [32-bit](https://archive.org/details/gpt4chan_model)
* Direct download: [16-bit](https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model_float16/) / [32-bit](https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model/)
The 32-bit version is only relevant if you intend to run the model in CPU mode. Otherwise, you should use the 16-bit version.
After downloading the model, follow these steps:
1. Place the files under `models/gpt4chan_model_float16` or `models/gpt4chan_model`.
2. Place GPT-J 6B's config.json file in that same folder: [config.json](https://huggingface.co/EleutherAI/gpt-j-6B/raw/main/config.json).
3. Download GPT-J 6B's tokenizer files (they will be automatically detected when you attempt to load GPT-4chan):
```
python download-model.py EleutherAI/gpt-j-6B --text-only
```
When you load this model in default or notebook modes, the "HTML" tab will show the generated text in 4chan format.
</details>
## Starting the web UI
```
conda activate textgen
cd text-generation-webui
python server.py
```
Then browse to
`http://localhost:7860/?__theme=dark`
Optionally, you can use the following command-line flags:
#### Basic settings
| Flag | Description |
|--------------------------------------------|-------------|
| `-h`, `--help` | Show this help message and exit. |
| `--notebook` | Launch the web UI in notebook mode, where the output is written to the same text box as the input. |
| `--chat` | Launch the web UI in chat mode. |
| `--character CHARACTER` | The name of the character to load in chat mode by default. |
| `--model MODEL` | Name of the model to load by default. |
| `--lora LORA [LORA ...]` | The list of LoRAs to load. If you want to load more than one LoRA, write the names separated by spaces. |
| `--model-dir MODEL_DIR` | Path to directory with all the models. |
| `--lora-dir LORA_DIR` | Path to directory with all the loras. |
| `--model-menu` | Show a model menu in the terminal when the web UI is first launched. |
| `--no-stream` | Don't stream the text output in real time. |
| `--settings SETTINGS_FILE` | Load the default interface settings from this yaml file. See `settings-template.yaml` for an example. If you create a file called `settings.yaml`, this file will be loaded by default without the need to use the `--settings` flag. |
| `--extensions EXTENSIONS [EXTENSIONS ...]` | The list of extensions to load. If you want to load more than one extension, write the names separated by spaces. |
| `--verbose` | Print the prompts to the terminal. |
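As an illustration, a chat-mode launch combining a few of these flags might look like this (`my-model` is a placeholder for a folder name inside `models/`):

```
python server.py --chat --model my-model --verbose
```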
#### Model loader
| Flag | Description |
|--------------------------------------------|-------------|
| `--loader LOADER` | Choose the model loader manually; otherwise, it will be autodetected. Valid options: transformers, autogptq, gptq-for-llama, exllama, exllama_hf, llamacpp, rwkv, flexgen |
#### Accelerate/transformers
| Flag | Description |
|---------------------------------------------|-------------|
| `--cpu` | Use the CPU to generate text. Warning: Training on CPU is extremely slow.|
| `--auto-devices` | Automatically split the model across the available GPU(s) and CPU. |
| `--gpu-memory GPU_MEMORY [GPU_MEMORY ...]` | Maximum GPU memory in GiB to be allocated per GPU. Example: `--gpu-memory 10` for a single GPU, `--gpu-memory 10 5` for two GPUs. You can also set values in MiB like `--gpu-memory 3500MiB`. |
| `--cpu-memory CPU_MEMORY` | Maximum CPU memory in GiB to allocate for offloaded weights. Same as above.|
| `--disk` | If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk. |
| `--disk-cache-dir DISK_CACHE_DIR` | Directory to save the disk cache to. Defaults to `cache/`. |
| `--load-in-8bit` | Load the model with 8-bit precision (using bitsandbytes).|
| `--bf16` | Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU. |
| `--no-cache` | Set `use_cache` to False while generating text. This reduces the VRAM usage a bit with a performance cost. |
| `--xformers` | Use xformers' memory-efficient attention. This should increase your tokens/s. |
| `--sdp-attention` | Use torch 2.0's sdp attention. |
| `--trust-remote-code` | Set trust_remote_code=True while loading a model. Necessary for ChatGLM and Falcon. |
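As a rough sketch of how these flags compose (the model name is a placeholder and the memory limits are arbitrary), a model that does not fit on a single GPU could be split between GPU and CPU RAM like this:

```
python server.py --model my-large-model --auto-devices --gpu-memory 10 --cpu-memory 32
```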
#### Accelerate 4-bit
⚠️ Requires a minimum compute capability of 7.0 on Windows at the moment.
| Flag | Description |
|---------------------------------------------|-------------|
| `--load-in-4bit` | Load the model with 4-bit precision (using bitsandbytes). |
| `--compute_dtype COMPUTE_DTYPE` | compute dtype for 4-bit. Valid options: bfloat16, float16, float32. |
| `--quant_type QUANT_TYPE` | quant_type for 4-bit. Valid options: nf4, fp4. |
| `--use_double_quant` | use_double_quant for 4-bit. |
#### llama.cpp
| Flag | Description |
|-------------|-------------|
| `--threads` | Number of threads to use. |
| `--n_batch` | Maximum number of prompt tokens to batch together when calling llama_eval. |
| `--no-mmap` | Prevent mmap from being used. |
| `--mlock` | Force the system to keep the model in RAM. |
| `--cache-capacity CACHE_CAPACITY` | Maximum cache capacity. Examples: 2000MiB, 2GiB. When provided without units, bytes will be assumed. |
| `--n-gpu-layers N_GPU_LAYERS` | Number of layers to offload to the GPU. Only works if llama-cpp-python was compiled with BLAS. Set this to 1000000000 to offload all layers to the GPU. |
| `--n_ctx N_CTX` | Size of the prompt context. |
| `--llama_cpp_seed SEED` | Seed for llama-cpp models. Default 0 (random). |
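A hypothetical llama.cpp launch, assuming a GGML file has been dropped into `models/` (the file name and numbers below are placeholders):

```
python server.py --model llama-7b.ggmlv3.q4_0.bin --threads 8 --n-gpu-layers 32 --n_ctx 2048
```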
#### AutoGPTQ
| Flag | Description |
|------------------|-------------|
| `--triton` | Use triton. |
| `--no_inject_fused_attention` | Disable the use of fused attention, which will use less VRAM at the cost of slower inference. |
| `--no_inject_fused_mlp` | Triton mode only: disable the use of fused MLP, which will use less VRAM at the cost of slower inference. |
| `--desc_act` | For models that don't have a quantize_config.json, this parameter is used to define whether to set desc_act or not in BaseQuantizeConfig. |
#### ExLlama
| Flag | Description |
|------------------|-------------|
|`--gpu-split` | Comma-separated list of VRAM (in GB) to use per GPU device for model layers, e.g. `20,7,7` |
#### GPTQ-for-LLaMa
| Flag | Description |
|---------------------------|-------------|
| `--wbits WBITS` | Load a pre-quantized model with specified precision in bits. 2, 3, 4 and 8 are supported. |
| `--model_type MODEL_TYPE` | Model type of pre-quantized model. Currently LLaMA, OPT, and GPT-J are supported. |
| `--groupsize GROUPSIZE` | Group size. |
| `--pre_layer PRE_LAYER [PRE_LAYER ...]` | The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. For multi-gpu, write the numbers separated by spaces, eg `--pre_layer 30 60`. |
| `--checkpoint CHECKPOINT` | The path to the quantized checkpoint file. If not specified, it will be automatically detected. |
| `--monkey-patch` | Apply the monkey patch for using LoRAs with quantized models. |
| `--quant_attn` | (triton) Enable quant attention. |
| `--warmup_autotune` | (triton) Enable warmup autotune. |
| `--fused_mlp` | (triton) Enable fused mlp. |
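An illustrative GPTQ-for-LLaMa launch for a 4-bit LLaMA quantization with group size 128 (the model folder name is a placeholder):

```
python server.py --model llama-13b-4bit-128g --loader gptq-for-llama --wbits 4 --groupsize 128 --model_type llama
```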
#### FlexGen
| Flag | Description |
|------------------|-------------|
| `--percent PERCENT [PERCENT ...]` | FlexGen: allocation percentages. Must be 6 numbers separated by spaces (default: 0, 100, 100, 0, 100, 0). |
| `--compress-weight` | FlexGen: Whether to compress weight (default: False).|
| `--pin-weight [PIN_WEIGHT]` | FlexGen: whether to pin weights (setting this to False reduces CPU memory by 20%). |
#### DeepSpeed
| Flag | Description |
|---------------------------------------|-------------|
| `--deepspeed` | Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration. |
| `--nvme-offload-dir NVME_OFFLOAD_DIR` | DeepSpeed: Directory to use for ZeRO-3 NVME offloading. |
| `--local_rank LOCAL_RANK` | DeepSpeed: Optional argument for distributed setups. |
#### RWKV
| Flag | Description |
|---------------------------------|-------------|
| `--rwkv-strategy RWKV_STRATEGY` | RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8". |
| `--rwkv-cuda-on` | RWKV: Compile the CUDA kernel for better performance. |
#### Gradio
| Flag | Description |
|---------------------------------------|-------------|
| `--listen` | Make the web UI reachable from your local network. |
| `--listen-host LISTEN_HOST` | The hostname that the server will use. |
| `--listen-port LISTEN_PORT` | The listening port that the server will use. |
| `--share` | Create a public URL. This is useful for running the web UI on Google Colab or similar. |
| `--auto-launch` | Open the web UI in the default browser upon launch. |
| `--gradio-auth USER:PWD` | Set Gradio authentication in the form "username:password"; comma-delimit multiple credentials like "u1:p1,u2:p2,u3:p3". |
| `--gradio-auth-path GRADIO_AUTH_PATH` | Set the gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3" |
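For example, to expose the UI on your local network on a specific port with basic authentication (the credentials and port are placeholders):

```
python server.py --listen --listen-port 7861 --gradio-auth "user:password"
```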
#### API
| Flag | Description |
|---------------------------------------|-------------|
| `--api` | Enable the API extension. |
| `--public-api` | Create a public URL for the API using Cloudflare. |
| `--api-blocking-port BLOCKING_PORT` | The listening port for the blocking API. |
| `--api-streaming-port STREAMING_PORT` | The listening port for the streaming API. |
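An illustrative launch with the API extension enabled on custom ports (the port numbers are arbitrary; see the api-examples linked above for client-side usage):

```
python server.py --api --api-blocking-port 5000 --api-streaming-port 5005
```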
#### Multimodal
| Flag | Description |
|---------------------------------------|-------------|
| `--multimodal-pipeline PIPELINE` | The multimodal pipeline to use. Examples: `llava-7b`, `llava-13b`. |
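A hypothetical multimodal launch, combining the multimodal extension with one of the pipelines above (a compatible base model also needs to be loaded; see the multimodal extension page linked in the Features section for the exact model pairings):

```
python server.py --chat --extensions multimodal --multimodal-pipeline llava-7b
```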
Out of memory errors? [Check the low VRAM guide](docs/Low-VRAM-guide.md).
## Presets
Inference settings presets can be created under `presets/` as yaml files. These files are detected automatically at startup.
The presets that are included by default are the result of a contest that received 7215 votes. More details can be found [here](https://github.com/oobabooga/oobabooga.github.io/blob/main/arena/results.md).
## Contributing
* Pull requests, suggestions, and issue reports are welcome.
* Make sure to carefully [search](https://github.com/oobabooga/text-generation-webui/issues) existing issues before starting a new one.
* If you have some experience with git, testing an open pull request and leaving a comment on whether it works as expected or not is immensely helpful.
* A simple way to contribute, even if you are not a programmer, is to leave a 👍 on an issue or pull request that you find relevant.
## Credits
- Gradio dropdown menu refresh button, code for reloading the interface: https://github.com/AUTOMATIC1111/stable-diffusion-webui
- Godlike preset: https://github.com/KoboldAI/KoboldAI-Client/wiki/Settings-Presets
- Code for early stopping in chat mode, code for some of the sliders: https://github.com/PygmalionAI/gradio-ui/