From 2c0018d946a56a29604f827c1ea840d5434da93c Mon Sep 17 00:00:00 2001
From: MarlinMr <51798819+MarlinMr@users.noreply.github.com>
Date: Fri, 7 Apr 2023 15:47:10 +0200
Subject: [PATCH] Cosmetic change of README.md (#878)

---
 README.md | 104 +++++++++++++++++++++++++++---------------------------
 1 file changed, 52 insertions(+), 52 deletions(-)

diff --git a/README.md b/README.md
index caae995c..9f017170 100644
--- a/README.md
+++ b/README.md
@@ -191,82 +191,82 @@ Optionally, you can use the following command-line flags:
 
 #### Basic settings
 
-| Flag | Description |
-|------------------|-------------|
-| `-h`, `--help` | show this help message and exit |
-| `--notebook` | Launch the web UI in notebook mode, where the output is written to the same text box as the input. |
-| `--chat` | Launch the web UI in chat mode.|
-| `--model MODEL` | Name of the model to load by default. |
-| `--lora LORA` | Name of the LoRA to apply to the model by default. |
-| `--model-dir MODEL_DIR` | Path to directory with all the models |
-| `--lora-dir LORA_DIR` | Path to directory with all the loras |
-| `--no-stream` | Don't stream the text output in real time. |
-| `--settings SETTINGS_FILE` | Load the default interface settings from this json file. See `settings-template.json` for an example. If you create a file called `settings.json`, this file will be loaded by default without the need to use the `--settings` flag.|
-| `--extensions EXTENSIONS [EXTENSIONS ...]` | The list of extensions to load. If you want to load more than one extension, write the names separated by spaces. |
-| `--verbose` | Print the prompts to the terminal. |
+| Flag                                       | Description |
+|--------------------------------------------|-------------|
+| `-h`, `--help`                             | Show this help message and exit. |
+| `--notebook`                               | Launch the web UI in notebook mode, where the output is written to the same text box as the input. |
+| `--chat`                                   | Launch the web UI in chat mode. |
+| `--model MODEL`                            | Name of the model to load by default. |
+| `--lora LORA`                              | Name of the LoRA to apply to the model by default. |
+| `--model-dir MODEL_DIR`                    | Path to directory with all the models. |
+| `--lora-dir LORA_DIR`                      | Path to directory with all the loras. |
+| `--no-stream`                              | Don't stream the text output in real time. |
+| `--settings SETTINGS_FILE`                 | Load the default interface settings from this json file. See `settings-template.json` for an example. If you create a file called `settings.json`, this file will be loaded by default without the need to use the `--settings` flag. |
+| `--extensions EXTENSIONS [EXTENSIONS ...]` | The list of extensions to load. If you want to load more than one extension, write the names separated by spaces. |
+| `--verbose`                                | Print the prompts to the terminal. |
 
 #### Accelerate/transformers
 
-| Flag | Description |
-|------------------|-------------|
-| `--cpu` | Use the CPU to generate text.|
-| `--auto-devices` | Automatically split the model across the available GPU(s) and CPU.|
-| `--gpu-memory GPU_MEMORY [GPU_MEMORY ...]` | Maxmimum GPU memory in GiB to be allocated per GPU. Example: `--gpu-memory 10` for a single GPU, `--gpu-memory 10 5` for two GPUs. You can also set values in MiB like `--gpu-memory 3500MiB`. |
-| `--cpu-memory CPU_MEMORY` | Maximum CPU memory in GiB to allocate for offloaded weights. Same as above.|
-| `--disk` | If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk. |
-| `--disk-cache-dir DISK_CACHE_DIR` | Directory to save the disk cache to. Defaults to `cache/`. |
-| `--load-in-8bit` | Load the model with 8-bit precision.|
-| `--bf16` | Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU. |
-| `--no-cache` | Set `use_cache` to False while generating text. This reduces the VRAM usage a bit with a performance cost. |
+| Flag                                       | Description |
+|--------------------------------------------|-------------|
+| `--cpu`                                    | Use the CPU to generate text. |
+| `--auto-devices`                           | Automatically split the model across the available GPU(s) and CPU. |
+| `--gpu-memory GPU_MEMORY [GPU_MEMORY ...]` | Maximum GPU memory in GiB to be allocated per GPU. Example: `--gpu-memory 10` for a single GPU, `--gpu-memory 10 5` for two GPUs. You can also set values in MiB like `--gpu-memory 3500MiB`. |
+| `--cpu-memory CPU_MEMORY`                  | Maximum CPU memory in GiB to allocate for offloaded weights. Same as above. |
+| `--disk`                                   | If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk. |
+| `--disk-cache-dir DISK_CACHE_DIR`          | Directory to save the disk cache to. Defaults to `cache/`. |
+| `--load-in-8bit`                           | Load the model with 8-bit precision. |
+| `--bf16`                                   | Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU. |
+| `--no-cache`                               | Set `use_cache` to False while generating text. This reduces the VRAM usage a bit with a performance cost. |
 
 #### llama.cpp
 
-| Flag | Description |
-|------------------|-------------|
-| `--threads` | Number of threads to use in llama.cpp. |
+| Flag        | Description |
+|-------------|-------------|
+| `--threads` | Number of threads to use in llama.cpp. |
 
 #### GPTQ
 
-| Flag | Description |
-|------------------|-------------|
-| `--wbits WBITS` | GPTQ: Load a pre-quantized model with specified precision in bits. 2, 3, 4 and 8 are supported. |
-| `--model_type MODEL_TYPE` | GPTQ: Model type of pre-quantized model. Currently LLaMA, OPT, and GPT-J are supported. |
-| `--groupsize GROUPSIZE` | GPTQ: Group size. |
-| `--pre_layer PRE_LAYER` | GPTQ: The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. |
+| Flag                      | Description |
+|---------------------------|-------------|
+| `--wbits WBITS`           | GPTQ: Load a pre-quantized model with specified precision in bits. 2, 3, 4 and 8 are supported. |
+| `--model_type MODEL_TYPE` | GPTQ: Model type of pre-quantized model. Currently LLaMA, OPT, and GPT-J are supported. |
+| `--groupsize GROUPSIZE`   | GPTQ: Group size. |
+| `--pre_layer PRE_LAYER`   | GPTQ: The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. |
 
 #### FlexGen
 
 | Flag | Description |
 |------------------|-------------|
-| `--flexgen` | Enable the use of FlexGen offloading. |
-| `--percent PERCENT [PERCENT ...]` | FlexGen: allocation percentages. Must be 6 numbers separated by spaces (default: 0, 100, 100, 0, 100, 0). |
-| `--compress-weight` | FlexGen: Whether to compress weight (default: False).|
-| `--pin-weight [PIN_WEIGHT]` | FlexGen: whether to pin weights (setting this to False reduces CPU memory by 20%). |
+| `--flexgen`                       | Enable the use of FlexGen offloading. |
+| `--percent PERCENT [PERCENT ...]` | FlexGen: Allocation percentages. Must be 6 numbers separated by spaces (default: 0, 100, 100, 0, 100, 0). |
+| `--compress-weight`               | FlexGen: Whether to compress weight (default: False). |
+| `--pin-weight [PIN_WEIGHT]`       | FlexGen: Whether to pin weights (setting this to False reduces CPU memory by 20%). |
 
 #### DeepSpeed
 
-| Flag | Description |
-|------------------|-------------|
-| `--deepspeed` | Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration. |
+| Flag                                  | Description |
+|---------------------------------------|-------------|
+| `--deepspeed`                         | Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration. |
 | `--nvme-offload-dir NVME_OFFLOAD_DIR` | DeepSpeed: Directory to use for ZeRO-3 NVME offloading. |
-| `--local_rank LOCAL_RANK` | DeepSpeed: Optional argument for distributed setups. |
+| `--local_rank LOCAL_RANK`             | DeepSpeed: Optional argument for distributed setups. |
 
 #### RWKV
 
-| Flag | Description |
-|------------------|-------------|
-| `--rwkv-strategy RWKV_STRATEGY` | RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8". |
-| `--rwkv-cuda-on` | RWKV: Compile the CUDA kernel for better performance. |
+| Flag                            | Description |
+|---------------------------------|-------------|
+| `--rwkv-strategy RWKV_STRATEGY` | RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8". |
+| `--rwkv-cuda-on`                | RWKV: Compile the CUDA kernel for better performance. |
 
 #### Gradio
 
-| Flag | Description |
-|------------------|-------------|
-| `--listen` | Make the web UI reachable from your local network. |
-| `--listen-port LISTEN_PORT` | The listening port that the server will use. |
-| `--share` | Create a public URL. This is useful for running the web UI on Google Colab or similar. |
-| `--auto-launch` | Open the web UI in the default browser upon launch. |
-| `--gradio-auth-path GRADIO_AUTH_PATH` | Set the gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3" |
+| Flag                                  | Description |
+|----------------------------------------|-------------|
+| `--listen`                            | Make the web UI reachable from your local network. |
+| `--listen-port LISTEN_PORT`           | The listening port that the server will use. |
+| `--share`                             | Create a public URL. This is useful for running the web UI on Google Colab or similar. |
+| `--auto-launch`                       | Open the web UI in the default browser upon launch. |
+| `--gradio-auth-path GRADIO_AUTH_PATH` | Set the Gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3". |
 
 Out of memory errors? [Check the low VRAM guide](https://github.com/oobabooga/text-generation-webui/wiki/Low-VRAM-guide).
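The flags documented in the tables above combine freely on the command line. A minimal sketch, assuming the `python server.py` entry point used elsewhere in this README and a placeholder model name (`llama-7b` stands in for whatever folder sits in your models directory):

```
# Chat mode, splitting a placeholder "llama-7b" model across two GPUs (10 GiB and 5 GiB)
# plus up to 20 GiB of CPU RAM for offloaded weights
python server.py --chat --model llama-7b --auto-devices --gpu-memory 10 5 --cpu-memory 20

# The same model, reachable from other machines on the local network on port 7861
python server.py --chat --model llama-7b --listen --listen-port 7861
```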
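Likewise, `--gradio-auth-path` expects a plain-text file of user:password pairs in the format shown in the Gradio table. A hedged example, where `gradio_auth.txt` is an arbitrary filename and the credentials are throwaway placeholders:

```
# Create the credentials file (comma-separated "u1:p1,u2:p2" style)
echo "admin:changeme,guest:guest" > gradio_auth.txt

# Require a login before the public Gradio URL can be used
python server.py --chat --share --gradio-auth-path gradio_auth.txt
```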