Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2024-11-22 08:07:56 +01:00

Update README.md

Commit 3347395944 (parent 74bf2f05b1)
@ -103,7 +103,11 @@ To use GPTQ models, the additional installation steps below are necessary:
[GPTQ models (4 bit mode)](https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md)
#### Note about bitsandbytes
#### llama.cpp with GPU acceleration
Requires the additional compilation step described here: [GPU offloading](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md#gpu-offloading).
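The linked document describes the compilation step in full; as a rough sketch of what such a GPU-enabled install can look like (the environment variables and CMake flag below are assumptions, not the project's confirmed command — follow the linked doc for the exact invocation):

```shell
# Reinstall llama-cpp-python with GPU (cuBLAS) acceleration enabled.
# The CMAKE_ARGS / FORCE_CMAKE values are assumed for illustration;
# consult the GPU offloading doc linked above for the current flags.
pip uninstall -y llama-cpp-python
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
```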
#### bitsandbytes
bitsandbytes >= 0.39 may not work on older NVIDIA GPUs. In that case, to use `--load-in-8bit`, you may have to downgrade like this:
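A downgrade pin along these lines should work (the specific version below is an assumed example; any release older than 0.39 is the idea):

```shell
# Pin bitsandbytes below 0.39 so --load-in-8bit works on older NVIDIA GPUs.
# 0.38.1 is an assumed example version, not necessarily the one the README specifies.
pip install bitsandbytes==0.38.1
```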