diff --git a/docs/Using-LoRAs.md b/docs/Using-LoRAs.md
index 39ec0b89..ec060cac 100644
--- a/docs/Using-LoRAs.md
+++ b/docs/Using-LoRAs.md
@@ -17,7 +17,7 @@
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-4bit
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --cpu
 ```
 
-* For using LoRAs with GPTQ quantized models, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).
+* For using LoRAs with GPTQ quantized models, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-with-gptq-for-llama).
 
 * Instead of using the `--lora` command-line flag, you can also select the LoRA in the "Parameters" tab of the interface.
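
For reference, a minimal sketch of the workflow the patched doc describes, using the same model and LoRA names that appear in the hunk above. The `download-model.py` step is an assumption here: that script ships with the same repo but is not shown in this diff.

```
# Fetch the LoRA adapter into the loras/ folder
# (download-model.py is assumed; it does not appear in this diff)
python download-model.py tloen/alpaca-lora-7b

# Load the base model with the LoRA applied in 4-bit mode,
# exactly as in the context lines of the hunk above
python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-4bit
```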