Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2024-11-22 08:07:56 +01:00
Remove obsolete information from README
This commit is contained in:
parent 4b19b74e6c
commit 6415cc68a2
@@ -90,10 +90,6 @@ cd text-generation-webui
pip install -r requirements.txt
```

#### llama.cpp with GPU acceleration

Requires the additional compilation step described here: [GPU acceleration](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md#gpu-acceleration).

#### bitsandbytes

bitsandbytes >= 0.39 may not work on older NVIDIA GPUs. In that case, to use `--load-in-8bit`, you may have to downgrade like this:
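The downgrade command itself is not shown in this hunk. As a sketch of what such a pin typically looks like with pip, assuming a hypothetical pre-0.39 version (the exact version number is an assumption, not taken from the diff):

```shell
# Sketch only: pin bitsandbytes below 0.39 for older NVIDIA GPUs.
# The version 0.38.1 is an assumed example, not stated in this diff.
pip install bitsandbytes==0.38.1
```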