Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2024-11-22 16:17:57 +01:00
Update README.md

This commit is contained in: commit df18ae7d6c (parent e52b43c934)

README.md: 12 changed lines
@@ -25,7 +25,6 @@ Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.
 * [4-bit GPTQ mode](docs/GPTQ-models-(4-bit-mode).md)
 * [LoRA (loading and training)](docs/Using-LoRAs.md)
 * [llama.cpp](docs/llama.cpp-models.md)
-* [RWKV model](docs/RWKV-model.md)
 * 8-bit and 4-bit through bitsandbytes
 * Layers splitting across GPU(s), CPU, and disk
 * CPU mode
@@ -53,8 +52,6 @@ Just download the zip above, extract it, and double-click on "start". The web UI
 
 Recommended if you have some experience with the command line.
 
-On Windows, I additionally recommend carrying out the installation on WSL instead of the base system: [WSL installation guide](https://github.com/oobabooga/text-generation-webui/blob/main/docs/WSL-installation-guide.md).
-
 #### 0. Install Conda
 
 https://docs.conda.io/en/latest/miniconda.html
@@ -81,6 +78,7 @@ conda activate textgen
 | Linux/WSL | NVIDIA | `pip3 install torch torchvision torchaudio` |
 | Linux | AMD | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2` |
 | MacOS + MPS (untested) | Any | `pip3 install torch torchvision torchaudio` |
+| Windows | NVIDIA | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117` |
 
 The up-to-date commands can be found here: https://pytorch.org/get-started/locally/.
 
@@ -112,10 +110,6 @@ bitsandbytes >= 0.39 may not work on older NVIDIA GPUs. In that case, to use `--
 * Linux: `pip install bitsandbytes==0.38.1`
 * Windows: `pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl`
 
-### Alternative: manual Windows installation
-
-As an alternative to the recommended WSL method, you can install the web UI natively on Windows using this guide. It will be a lot harder and the performance may be slower: [Windows installation guide](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Windows-installation-guide.md).
-
 ### Alternative: Docker
 
 ```
@@ -158,7 +152,7 @@ For example:
 
 * If you want to download a model manually, note that all you need are the json, txt, and pytorch\*.bin (or model*.safetensors) files. The remaining files are not necessary.
 
-* If you want to download a protected model (one gated behind accepting a license or otherwise private, like `bigcode/starcoder`) you can set the environment variables `HF_USER` to your huggingface username and `HF_PASS` to your password or (_as a better option_) to a [User Access Token](https://huggingface.co/settings/tokens). Note that you will need to accept the model terms on the Hugging Face website before starting the download.
+* If you want to download a protected model (one gated behind accepting a license or otherwise private, like `bigcode/starcoder`) you can set the environment variables `HF_USER` to your huggingface username and `HF_PASS` to your password -- or, as a better option, to a [User Access Token](https://huggingface.co/settings/tokens). Note that you will need to accept the model terms on the Hugging Face website before starting the download.
 
 #### GGML models
 
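As a concrete sketch of the protected-model note above: set the two environment variables before running the repository's download script. This assumes the `download-model.py` entry point from the repo root; the username and token values below are placeholders, and `HF_PASS` is set to a User Access Token rather than a raw password, as the README recommends.

```shell
# Placeholders -- substitute your own Hugging Face username and token.
export HF_USER="your-username"
export HF_PASS="hf_xxxxxxxxxxxx"   # a User Access Token from https://huggingface.co/settings/tokens

# With the variables set, the download script can fetch a gated model
# (you must first accept the model terms on the Hugging Face website):
# python download-model.py bigcode/starcoder
echo "Downloading as $HF_USER"
```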
@@ -330,7 +324,7 @@ Out of memory errors? [Check the low VRAM guide](docs/Low-VRAM-guide.md).
 
 ## Presets
 
-Inference settings presets can be created under `presets/` as text files. These files are detected automatically at startup.
+Inference settings presets can be created under `presets/` as yaml files. These files are detected automatically at startup.
 
 By default, 10 presets based on NovelAI and KoboldAI presets are included. These were selected out of a sample of 43 presets after applying a K-Means clustering algorithm and selecting the elements closest to the average of each cluster.
 
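A minimal sketch of what such a yaml preset under `presets/` might contain. The field names below are common sampling parameters and are illustrative only; the authoritative set of supported keys is whatever the web UI's preset loader reads.

```yaml
# presets/my-preset.yaml (illustrative field names, not an exhaustive list)
temperature: 0.7
top_p: 0.9
top_k: 40
repetition_penalty: 1.15
```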