Update README

parent 0d5ca05ab9, commit 634518a412

README.md
<details>

<summary>
Setup details and information about installing manually
</summary>

### One-click-installer

#### How it works

The script creates a folder called `installer_files` where it sets up a Conda environment using Miniconda. The installation is self-contained: if you want to reinstall, just delete `installer_files` and run the start script again.
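
For example, a clean reinstall on Linux might look like this (a minimal sketch, assuming the default `start_linux.sh` script in the repository root; use the script for your OS):

```
# Remove the self-contained Conda environment created by the installer
rm -rf installer_files
# Re-run the start script to set everything up again
./start_linux.sh
```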

To launch the web UI again in the future after it is already installed, run the same `start_` script.

#### Getting updates

Run `update_linux.sh`, `update_windows.bat`, `update_macos.sh`, or `update_wsl.bat`.

#### Running commands

If you ever need to install something manually in the `installer_files` environment, you can launch an interactive shell using the cmd script: `cmd_linux.sh`, `cmd_windows.bat`, `cmd_macos.sh`, or `cmd_wsl.bat`.
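
For instance, on Linux you might open the environment shell and install an extra package like this (a rough sketch; the package name is just a placeholder):

```
# Open an interactive shell inside the installer_files environment
./cmd_linux.sh
# Then, inside that shell, install whatever you need, e.g.:
pip install some-extra-package
```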

#### Defining command-line flags

To define persistent command-line flags like `--listen` or `--api`, edit the `CMD_FLAGS.txt` file with a text editor and add them there. Flags can also be provided directly to the start scripts, for instance, `./start_linux.sh --listen`.
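
For example, a `CMD_FLAGS.txt` that always exposes the UI on the local network and enables the API could contain (an illustrative sketch, not a required configuration):

```
--listen --api
```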

#### Other info

* There is no need to run any of those scripts as admin/root.
* For additional instructions about AMD setup and WSL setup, consult [the documentation](https://github.com/oobabooga/text-generation-webui/wiki).
* The installer has been tested mostly on NVIDIA GPUs. If you can find a way to improve it for your AMD/Intel Arc/Mac Metal GPU, you are highly encouraged to submit a PR to this repository. The main file to be edited is `one_click.py`.
* For automated installation, you can use the `GPU_CHOICE`, `USE_CUDA118`, `LAUNCH_AFTER_INSTALL`, and `INSTALL_EXTENSIONS` environment variables. For instance: `GPU_CHOICE=A USE_CUDA118=FALSE LAUNCH_AFTER_INSTALL=FALSE INSTALL_EXTENSIONS=FALSE ./start_linux.sh`.

### Manual installation using Conda

Recommended if you have some experience with the command line.

#### 0. Install Conda

https://docs.conda.io/en/latest/miniconda.html

On Linux or WSL, it can be automatically installed with these two commands ([source](https://educe-ubc.github.io/conda.html)):

```
curl -sL "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" > "Miniconda3.sh"
bash Miniconda3.sh
```
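
After the installer finishes, opening a new terminal (or re-sourcing your shell profile) and checking the version is a quick way to confirm Conda is on your PATH (an optional sanity check, not part of the original instructions):

```
conda --version
```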

#### 1. Create a new conda environment

```
conda create -n textgen python=3.11
conda activate textgen
```

#### 2. Install PyTorch

| System | GPU | Command |
|--------|---------|---------|
| Linux/WSL | NVIDIA | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121` |
| Linux/WSL | CPU only | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu` |
| Linux | AMD | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6` |
| macOS + MPS | Any | `pip3 install torch torchvision torchaudio` |
| Windows | NVIDIA | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121` |
| Windows | CPU only | `pip3 install torch torchvision torchaudio` |

The up-to-date commands can be found here: https://pytorch.org/get-started/locally/.
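
Once PyTorch is installed, you can optionally confirm the install and, on NVIDIA systems, that CUDA is visible (a quick sanity check, not part of the original instructions):

```
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```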

For NVIDIA, you also need to install the CUDA runtime libraries:

```
conda install -y -c "nvidia/label/cuda-12.1.1" cuda-runtime
```

If you need `nvcc` to compile some library manually, replace the command above with

```
conda install -y -c "nvidia/label/cuda-12.1.1" cuda
```

#### 3. Install the web UI

```
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r <requirements file according to table below>
```

Requirements file to use:

| GPU | CPU | requirements file to use |
|--------|---------|---------|
| NVIDIA | has AVX2 | `requirements.txt` |
| NVIDIA | no AVX2 | `requirements_noavx2.txt` |
| AMD | has AVX2 | `requirements_amd.txt` |
| AMD | no AVX2 | `requirements_amd_noavx2.txt` |
| CPU only | has AVX2 | `requirements_cpu_only.txt` |
| CPU only | no AVX2 | `requirements_cpu_only_noavx2.txt` |
| Apple | Intel | `requirements_apple_intel.txt` |
| Apple | Apple Silicon | `requirements_apple_silicon.txt` |
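
For example, on a Linux machine with an NVIDIA GPU and a CPU that supports AVX2, the last command above would be (just a worked instance of the table, not an extra step):

```
pip install -r requirements.txt
```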

### Start the web UI

```
conda activate textgen
cd text-generation-webui
python server.py
```

Then browse to `http://localhost:7860/?__theme=dark`.
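
To see every available flag from the command line (assuming the standard argparse help switch), you can run:

```
python server.py --help
```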

##### AMD GPU on Windows

1) Use `requirements_cpu_only.txt` or `requirements_cpu_only_noavx2.txt` in the command above.

2) Manually install llama-cpp-python using the appropriate command for your hardware: [Installation from PyPI](https://github.com/abetlen/llama-cpp-python#installation-with-hardware-acceleration).
   * Use the `LLAMA_HIPBLAS=on` toggle (see the sketch after this list).
   * Note the [Windows remarks](https://github.com/abetlen/llama-cpp-python#windows-remarks).

3) Manually install AutoGPTQ: [Installation](https://github.com/PanQiWei/AutoGPTQ#install-from-source).
   * Perform the from-source installation, as there are no prebuilt ROCm packages for Windows.

4) Manually install [ExLlama](https://github.com/turboderp/exllama) by simply cloning it into the `repositories` folder (it will be automatically compiled at runtime after that):

```sh
cd text-generation-webui
git clone https://github.com/turboderp/exllama repositories/exllama
```
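
For step 2, the llama-cpp-python build flag is passed through `CMAKE_ARGS`. A rough sketch of what this can look like in a Windows command prompt is shown below; treat it as an assumption and check the linked installation page for the current syntax:

```
rem Build llama-cpp-python with ROCm/HIPBLAS support (hypothetical invocation)
set CMAKE_ARGS=-DLLAMA_HIPBLAS=on
pip install llama-cpp-python --no-cache-dir
```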

##### Older NVIDIA GPUs

1) For Kepler GPUs and older, you will need to install CUDA 11.8 instead of 12:

```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
conda install -y -c "nvidia/label/cuda-11.8.0" cuda-runtime
```

2) bitsandbytes >= 0.39 may not work. In that case, to use `--load-in-8bit`, you may have to downgrade like this:
   * Linux: `pip install bitsandbytes==0.38.1`
   * Windows: `pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl`

##### Manual install

The `requirements*.txt` files above contain various precompiled wheels. If you wish to compile things manually, or if you need to because no suitable wheels are available for your hardware, you can use `requirements_nowheels.txt` and then install your desired loaders manually.

### Alternative: Docker

```
ln -s docker/{nvidia/Dockerfile,docker-compose.yml,.dockerignore} .
cp docker/.env.example .env
# Edit .env and set:
# TORCH_CUDA_ARCH_LIST based on your GPU model
# APP_RUNTIME_GID your host user's group id (run `id -g` in a terminal)
# BUILD_EXTENSIONS optionally add a comma-separated list of extensions to build
docker compose up --build
```
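
As an illustration only (the values depend entirely on your hardware and setup), the edited lines in `.env` might end up looking like this for a single RTX 3090 host:

```
TORCH_CUDA_ARCH_LIST=8.6
APP_RUNTIME_GID=1000
BUILD_EXTENSIONS=openai
```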

* You need to have Docker Compose v2.17 or higher installed. See [this guide](https://github.com/oobabooga/text-generation-webui/wiki/09-%E2%80%90-Docker) for instructions.
* For additional docker files, check out [this repository](https://github.com/Atinoda/text-generation-webui-docker).

### Updating the requirements

From time to time, the `requirements*.txt` files change. To update, use these commands:

```
conda activate textgen
cd text-generation-webui
pip install -r <requirements file that you have used> --upgrade
```

</details>

Command-line flags can be passed to the `start_` script. Alternatively, you can open the file `CMD_FLAGS.txt` with a text editor and add your flags there.

<details>

<summary>
Command-line flags list
</summary>

#### Basic settings

| Flag | Description |
|--------|---------|
| `--multimodal-pipeline PIPELINE` | The multimodal pipeline to use. Examples: `llava-7b`, `llava-13b`. |

</details>