mirror of https://github.com/oobabooga/text-generation-webui.git
Improved instructions for AMD/Metal/Intel Arc/CPUs without AVX2
commit bc4023230b
parent b2f7ca0d18
README.md (26 changed lines)
@@ -75,11 +75,6 @@ conda activate textgen
 
 The up-to-date commands can be found here: https://pytorch.org/get-started/locally/.
 
-#### 2.1 Additional information
-
-* MacOS users: https://github.com/oobabooga/text-generation-webui/pull/393
-* AMD users: https://rentry.org/eq3hg
-
 #### 3. Install the web UI
 
 ```
@@ -88,17 +83,26 @@ cd text-generation-webui
 pip install -r requirements.txt
 ```
 
-#### llama.cpp on AMD, Metal, and some specific CPUs
+#### AMD, Metal, Intel Arc, and CPUs without AVX2
 
-Precompiled wheels are included for CPU-only and NVIDIA GPUs (cuBLAS). For AMD, Metal, and some specific CPUs, you need to uninstall those wheels and compile llama-cpp-python yourself.
-
-To uninstall:
+1) Replace the last command above with
 
 ```
-pip uninstall -y llama-cpp-python llama-cpp-python-cuda
+pip install -r requirements_nocuda.txt
 ```
 
-To compile: https://github.com/abetlen/llama-cpp-python#installation-with-openblas--cublas--clblast--metal
+2) Manually install llama-cpp-python using the appropriate command for your hardware: [Installation from PyPI](https://github.com/abetlen/llama-cpp-python#installation-from-pypi).
+
+3) AMD: Manually install AutoGPTQ: [Installation](https://github.com/PanQiWei/AutoGPTQ#installation).
+
+4) AMD: Manually install ExLlama by simply cloning it into the `repositories` folder (it will be automatically compiled at runtime after that):
+
+```
+cd text-generation-webui
+mkdir repositories
+cd repositories
+git clone https://github.com/turboderp/exllama
+```
 
 #### bitsandbytes on older NVIDIA GPUs
 
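A note on step 2 of the new section: llama-cpp-python is built with hardware-specific CMake flags. The commands below are a sketch based on the llama-cpp-python installation docs of this period, not part of the commit; the flag names and the `FORCE_CMAKE` variable should be verified against the linked "Installation from PyPI" section.

```
# Sketch of hardware-specific llama-cpp-python builds (assumptions, not
# part of this commit; verify flags in the llama-cpp-python README).

# Metal (macOS / Apple Silicon):
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

# AMD ROCm (hipBLAS):
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

# CLBlast over OpenCL (one plausible route for Intel Arc):
CMAKE_ARGS="-DLLAMA_CLBLAST=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

# CPUs without AVX2: a plain source build autodetects the host instruction set.
pip install llama-cpp-python --no-cache-dir
```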
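Step 3 points AMD users at AutoGPTQ's installation docs. As a hedged sketch of what that typically involved at the time, a source build against ROCm looked roughly like the following; the `ROCM_VERSION` variable is an assumption to be checked against the linked Installation section.

```
# Hypothetical AutoGPTQ source build for ROCm; verify the variable name
# and supported ROCm versions in the AutoGPTQ README before running.
git clone https://github.com/PanQiWei/AutoGPTQ.git
cd AutoGPTQ
ROCM_VERSION=5.6 pip install -v .
```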
requirements_nocuda.txt (new file, 27 lines)
@@ -0,0 +1,27 @@
+aiofiles==23.1.0
+fastapi==0.95.2
+gradio_client==0.2.5
+gradio==3.33.1
+
+accelerate==0.22.*
+colorama
+datasets
+einops
+markdown
+numpy==1.24
+pandas
+peft==0.5.*
+Pillow>=9.5.0
+pyyaml
+requests
+safetensors==0.3.2
+transformers==4.32.*
+scipy
+sentencepiece
+tensorboard
+tqdm
+wandb
+
+# bitsandbytes
+bitsandbytes==0.41.1; platform_system != "Windows"
+https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.1-py3-none-win_amd64.whl; platform_system == "Windows"
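The environment markers on the last two lines make pip select the Linux/macOS bitsandbytes package or the prebuilt Windows wheel automatically. A minimal sanity check after step 1, assuming the install succeeded (the check itself is not part of the commit):

```
# Confirm the pinned packages from requirements_nocuda.txt resolved;
# bitsandbytes 0.41.1 imports on CPU-only machines, with a warning.
python -c "import bitsandbytes, transformers, accelerate; print(transformers.__version__, accelerate.__version__)"
```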