mirror of https://github.com/oobabooga/text-generation-webui.git
synced 2024-11-25 17:29:22 +01:00
Add note about AMD/Metal to README
This commit is contained in:
parent c4733000d7
commit f50f534b0f
14 README.md
@@ -88,7 +88,19 @@ cd text-generation-webui
 pip install -r requirements.txt
 ```
 
-#### Note about older NVIDIA GPUs
+#### llama.cpp on AMD, Metal, and some specific CPUs
+
+Precompiled wheels are included for CPU-only and NVIDIA GPUs (cuBLAS). For AMD, Metal, and some specific CPUs, you need to uninstall those wheels and compile llama-cpp-python yourself.
+
+To uninstall:
+
+```
+pip uninstall -y llama-cpp-python llama-cpp-python-cuda
+```
+
+To compile: https://github.com/abetlen/llama-cpp-python#installation-with-openblas--cublas--clblast--metal
+
+#### bitsandbytes on older NVIDIA GPUs
 
 bitsandbytes >= 0.39 may not work. In that case, to use `--load-in-8bit`, you may have to downgrade like this:
 
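For reference, a minimal sketch of the compile step described at that link. The CMAKE_ARGS flags below follow the llama-cpp-python README as of this commit and may have changed since, so treat the exact flag names as assumptions:

```
# Metal (Apple Silicon); FORCE_CMAKE=1 forces a build from source
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

# AMD and other GPUs via CLBlast (OpenCL); assumes CLBlast is installed
CMAKE_ARGS="-DLLAMA_CLBLAST=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
```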
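The downgrade command itself is truncated in the diff above. A minimal sketch, assuming 0.38.1 (the last release before 0.39) is the intended pin:

```
# version pin is an assumption; any pre-0.39 release should behave the same
pip install bitsandbytes==0.38.1
```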