From c9ac45d4cf68ba036c82949ef26bdf6b5fb1a14b Mon Sep 17 00:00:00 2001
From: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date: Thu, 1 Jun 2023 11:34:04 -0300
Subject: [PATCH] Update Using-LoRAs.md

---
 docs/Using-LoRAs.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/Using-LoRAs.md b/docs/Using-LoRAs.md
index 39ec0b89..ec060cac 100644
--- a/docs/Using-LoRAs.md
+++ b/docs/Using-LoRAs.md
@@ -17,7 +17,7 @@ python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-4bit
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --cpu
 ```
 
-* For using LoRAs with GPTQ quantized models, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).
+* For using LoRAs with GPTQ quantized models, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-with-gptq-for-llama).
 
 * Instead of using the `--lora` command-line flag, you can also select the LoRA in the "Parameters" tab of the interface.
 