Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2024-11-28 18:48:04 +01:00
Update Using-LoRAs.md

commit c9ac45d4cf, parent 9aad6d07de
@@ -17,7 +17,7 @@ python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-4bit
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --cpu
 ```
 
-* For using LoRAs with GPTQ quantized models, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).
+* For using LoRAs with GPTQ quantized models, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-with-gptq-for-llama).
 
 * Instead of using the `--lora` command-line flag, you can also select the LoRA in the "Parameters" tab of the interface.
 
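The `--lora` flag shown in the diff points the web UI at a PEFT adapter directory. As a rough illustration of what that loading step amounts to, here is a minimal sketch using the Hugging Face `transformers` and `peft` libraries directly. The model and adapter paths are taken from the command above but are assumptions about local layout, and the web UI's actual loading code may differ.

```python
# Minimal sketch of applying a LoRA adapter with the PEFT library.
# This approximates what `--lora` does; it is not the web UI's own code.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "models/llama-7b-hf"        # base model path (assumption)
LORA_DIR = "loras/tloen_alpaca-lora-7b"  # adapter path (assumption)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Attach the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, LORA_DIR)

prompt = "Write a haiku about quantization."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```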