Update Using-LoRAs.md

Committed by oobabooga on 2023-06-01 11:34:04 -03:00 (via GitHub)
parent 9aad6d07de
commit c9ac45d4cf

@@ -17,7 +17,7 @@ python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-4bit
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --cpu
 ```
-* For using LoRAs with GPTQ quantized models, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).
+* For using LoRAs with GPTQ quantized models, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-with-gptq-for-llama).
 * Instead of using the `--lora` command-line flag, you can also select the LoRA in the "Parameters" tab of the interface.
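
For reference, the `--lora` flag roughly amounts to applying the adapter with the Hugging Face PEFT library. Below is a minimal sketch, not the web UI's actual internals; the model and adapter paths are assumed to mirror the example above:

```python
# Minimal sketch, assuming the Hugging Face PEFT API; the paths are placeholders
# mirroring the example above, not the web UI's real directory layout.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "models/llama-7b-hf"          # base model directory (assumed)
lora_path = "loras/tloen_alpaca-lora-7b"  # LoRA adapter directory (assumed)

tokenizer = AutoTokenizer.from_pretrained(base_path)
model = AutoModelForCausalLM.from_pretrained(base_path, device_map="auto")
# Apply the LoRA adapter on top of the base model's weights
model = PeftModel.from_pretrained(model, lora_path)

prompt = "### Instruction:\nWhat is a LoRA?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```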