Mirror of https://github.com/oobabooga/text-generation-webui.git (synced 2024-11-25 17:29:22 +01:00)

Update installation documentation

commit d6765bebc4 (parent d7ee4c2386)
@@ -70,53 +70,13 @@ Not supported yet.

GPTQ-for-LLaMa is the original adaptation of GPTQ for the LLaMA model. It was made possible by [@qwopqwop200](https://github.com/qwopqwop200/GPTQ-for-LLaMa): https://github.com/qwopqwop200/GPTQ-for-LLaMa

-Different branches of GPTQ-for-LLaMa are currently available, including:
+A Python package containing both major CUDA versions of GPTQ-for-LLaMa is used to simplify installation and compatibility: https://github.com/jllllll/GPTQ-for-LLaMa-CUDA

-| Branch | Comment |
-|----|----|
-| [Old CUDA branch (recommended)](https://github.com/oobabooga/GPTQ-for-LLaMa/) | The fastest branch, works on Windows and Linux. |
-| [Up-to-date triton branch](https://github.com/qwopqwop200/GPTQ-for-LLaMa) | Slightly more precise than the old CUDA branch from 13b upwards, significantly more precise for 7b. 2x slower for small context size and only works on Linux. |
-| [Up-to-date CUDA branch](https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/cuda) | As precise as the up-to-date triton branch, 10x slower than the old cuda branch for small context size. |

-Overall, I recommend using the old CUDA branch. It is included by default in the one-click-installer for this web UI.

-### Installation

-Start by cloning GPTQ-for-LLaMa into your `text-generation-webui/repositories` folder:

-```
-mkdir repositories
-cd repositories
-git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git -b cuda
-```

-If you want to use the up-to-date CUDA or triton branches instead of the old CUDA branch, use these commands:

-```
-git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa.git -b cuda
-```

-```
-git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa.git -b triton
-```

-Next you need to install the CUDA extensions. You can do that either by installing the precompiled wheels, or by compiling the wheels yourself.

### Precompiled wheels

-Kindly provided by our friend jllllll: https://github.com/jllllll/GPTQ-for-LLaMa-Wheels
+Kindly provided by our friend jllllll: https://github.com/jllllll/GPTQ-for-LLaMa-CUDA/releases

-Windows:
+Wheels are included in requirements.txt and are installed with the webui on supported systems.

-```
-pip install https://github.com/jllllll/GPTQ-for-LLaMa-Wheels/raw/main/quant_cuda-0.0.0-cp310-cp310-win_amd64.whl
-```

-Linux:

-```
-pip install https://github.com/jllllll/GPTQ-for-LLaMa-Wheels/raw/Linux-x64/quant_cuda-0.0.0-cp310-cp310-linux_x86_64.whl
-```

### Manual installation
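
The updated "Precompiled wheels" text above says the wheels are now pinned in requirements.txt and installed together with the webui on supported systems. A minimal sketch of how you might confirm that on an existing install, assuming you are inside the text-generation-webui folder (the grep pattern is an illustration, not part of the original docs):

```
# List the GPTQ-related wheel URLs pinned by the webui (pattern is an assumption)
grep -i gptq requirements.txt

# Reinstall the pinned requirements if the wheel is missing or was built
# against a different CUDA version
pip install -r requirements.txt --upgrade
```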

@@ -124,20 +84,19 @@ pip install https://github.com/jllllll/GPTQ-for-LLaMa-Wheels/raw/Linux-x64/quant

```
conda activate textgen
-conda install -c conda-forge cudatoolkit-dev
+conda install cuda -c nvidia/label/cuda-11.7.1
```

The command above takes some 10 minutes to run and shows no progress bar or updates along the way.

-You are also going to need to have a C++ compiler installed. On Linux, `sudo apt install build-essential` or equivalent is enough.
+You are also going to need to have a C++ compiler installed. On Linux, `sudo apt install build-essential` or equivalent is enough. On Windows, Visual Studio or Visual Studio Build Tools is required.

-If you're using an older version of CUDA toolkit (e.g. 11.7) but the latest version of `gcc` and `g++` (12.0+), you should downgrade with: `conda install -c conda-forge gxx==11.3.0`. Kernel compilation will fail otherwise.
+If you're using an older version of CUDA toolkit (e.g. 11.7) but the latest version of `gcc` and `g++` (12.0+) on Linux, you should downgrade with: `conda install -c conda-forge gxx==11.3.0`. Kernel compilation will fail otherwise.

#### Step 2: compile the CUDA extensions

```
-cd repositories/GPTQ-for-LLaMa
-python setup_cuda.py install
+python -m pip install git+https://github.com/jllllll/GPTQ-for-LLaMa-CUDA -v
```

### Getting pre-converted LLaMA weights
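
Step 2 above now installs the CUDA extension as a pip package built from the GPTQ-for-LLaMa-CUDA repository instead of running setup_cuda.py from a cloned repo. A rough sanity-check sketch for the manual route, under stated assumptions (the extension module name quant_cuda is inferred from the old wheel filenames earlier in this document and may differ in the current package):

```
# Verify that the CUDA toolkit and C++ compiler installed in Step 1 are visible
nvcc --version
g++ --version   # should report 11.x if you applied the gxx downgrade described above

# After Step 2, confirm that the compiled kernel can be imported
# (module name is an assumption based on the quant_cuda-0.0.0 wheel names above)
python -c "import quant_cuda; print('quant_cuda extension loaded')"
```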

@@ -110,7 +110,7 @@ def create_ui():

shared.gradio['mlock'] = gr.Checkbox(label="mlock", value=shared.args.mlock)
shared.gradio['llama_cpp_seed'] = gr.Number(label='Seed (0 for random)', value=shared.args.llama_cpp_seed)
shared.gradio['trust_remote_code'] = gr.Checkbox(label="trust-remote-code", value=shared.args.trust_remote_code, info='Make sure to inspect the .py files inside the model folder before loading it with this option enabled.')
-shared.gradio['gptq_for_llama_info'] = gr.Markdown('GPTQ-for-LLaMa support is currently only kept for compatibility with older GPUs. AutoGPTQ or ExLlama is preferred when compatible. GPTQ-for-LLaMa is installed by default with the one-click installers. Otherwise, it has to be installed manually following the instructions here: [instructions](https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md#installation-1).')
+shared.gradio['gptq_for_llama_info'] = gr.Markdown('GPTQ-for-LLaMa support is currently only kept for compatibility with older GPUs. AutoGPTQ or ExLlama is preferred when compatible. GPTQ-for-LLaMa is installed by default with the webui on supported systems. Otherwise, it has to be installed manually following the instructions here: [instructions](https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md#installation-1).')
shared.gradio['exllama_info'] = gr.Markdown('For more information, consult the [docs](https://github.com/oobabooga/text-generation-webui/blob/main/docs/ExLlama.md).')
shared.gradio['exllama_HF_info'] = gr.Markdown('ExLlama_HF is a wrapper that lets you use ExLlama like a Transformers model, which means it can use the Transformers samplers. It\'s a bit slower than the regular ExLlama.')
shared.gradio['llamacpp_HF_info'] = gr.Markdown('llamacpp_HF is a wrapper that lets you use llama.cpp like a Transformers model, which means it can use the Transformers samplers. To use it, make sure to first download oobabooga/llama-tokenizer under "Download custom model or LoRA".')
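
The loader options shown in this hunk (mlock, the llama.cpp seed, trust-remote-code) correspond to command-line switches of the webui. A hedged example of launching the server with the equivalent flags; the exact flag spellings are assumptions and should be checked against `python server.py --help` for your version:

```
# Flag names are assumptions; confirm with `python server.py --help`
python server.py --mlock --trust-remote-code
```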