Add note about AMD/Metal to README

This commit is contained in:
oobabooga 2023-08-18 09:37:20 -07:00
parent c4733000d7
commit f50f534b0f


@@ -88,7 +88,19 @@ cd text-generation-webui
pip install -r requirements.txt
```
#### llama.cpp on AMD, Metal, and some specific CPUs
Precompiled wheels are included for CPU-only and NVIDIA GPUs (cuBLAS). For AMD, Metal, and some specific CPUs, you need to uninstall those wheels and compile llama-cpp-python yourself.
To uninstall:
```
pip uninstall -y llama-cpp-python llama-cpp-python-cuda
```
To compile: https://github.com/abetlen/llama-cpp-python#installation-with-openblas--cublas--clblast--metal
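As a sketch of what the compile step looks like, here is a Metal build on macOS using llama-cpp-python's documented `CMAKE_ARGS` mechanism; the exact flag differs per backend (e.g. CLBlast for AMD), so consult the linked guide for your hardware:

```shell
# Build llama-cpp-python from source with Metal support (macOS example;
# the CMAKE_ARGS flag varies by backend — see the installation guide above)
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
```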
#### bitsandbytes on older NVIDIA GPUs
bitsandbytes >= 0.39 may not work. In that case, to use `--load-in-8bit`, you may have to downgrade like this:
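The downgrade command itself is truncated in this view; a plausible form pins an earlier release (the `0.38.1` version below is an assumption, not confirmed by this excerpt):

```shell
# Pin a pre-0.39 release of bitsandbytes (version number is an assumption)
pip install bitsandbytes==0.38.1
```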