diff --git a/docs/Using-LoRAs.md b/docs/Using-LoRAs.md
index fafd6cde..39ec0b89 100644
--- a/docs/Using-LoRAs.md
+++ b/docs/Using-LoRAs.md
@@ -8,15 +8,16 @@ Based on https://github.com/tloen/alpaca-lora
 python download-model.py tloen/alpaca-lora-7b
 ```
 
-2. Load the LoRA. 16-bit, 8-bit, and CPU modes work:
+2. Load the LoRA. 16-bit, `--load-in-8bit`, `--load-in-4bit`, and CPU modes work:
 
 ```
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-8bit
+python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-4bit
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --cpu
 ```
 
-* For using LoRAs in 4-bit mode, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).
+* For using LoRAs with GPTQ quantized models, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).
 
 * Instead of using the `--lora` command-line flag, you can also select the LoRA in the "Parameters" tab of the interface.
 
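
For context on what the commands in this patch do, below is a minimal standalone sketch of applying a LoRA adapter to a base model with the `transformers` and `peft` libraries. The local paths (`models/llama-7b-hf`, `loras/tloen_alpaca-lora-7b`) and the prompt are illustrative assumptions mirroring the commands above, not part of this patch; 8-bit loading additionally requires `bitsandbytes`.

```python
# Sketch only: roughly what loading a base model plus a LoRA amounts to.
# Paths below are assumptions matching the web UI's models/ and loras/ layout.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model; load_in_8bit corresponds to --load-in-8bit above.
base = AutoModelForCausalLM.from_pretrained(
    "models/llama-7b-hf",
    load_in_8bit=True,   # omit for 16-bit, or load on CPU for --cpu
    device_map="auto",
)

# Apply the LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(base, "loras/tloen_alpaca-lora-7b")

tokenizer = AutoTokenizer.from_pretrained("models/llama-7b-hf")
prompt = "### Instruction:\nWrite a haiku about LoRAs.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```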