From 9aad6d07de0dc6bb9f78e7c59bbd14176dc89251 Mon Sep 17 00:00:00 2001
From: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date: Thu, 1 Jun 2023 11:32:41 -0300
Subject: [PATCH] Update Using-LoRAs.md

---
 docs/Using-LoRAs.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/docs/Using-LoRAs.md b/docs/Using-LoRAs.md
index fafd6cde..39ec0b89 100644
--- a/docs/Using-LoRAs.md
+++ b/docs/Using-LoRAs.md
@@ -8,15 +8,16 @@ Based on https://github.com/tloen/alpaca-lora
 python download-model.py tloen/alpaca-lora-7b
 ```
 
-2. Load the LoRA. 16-bit, 8-bit, and CPU modes work:
+2. Load the LoRA. 16-bit, `--load-in-8bit`, `--load-in-4bit`, and CPU modes work:
 
 ```
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-8bit
+python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-4bit
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --cpu
 ```
 
-* For using LoRAs in 4-bit mode, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).
+* For using LoRAs with GPTQ quantized models, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).
 * Instead of using the `--lora` command-line flag, you can also select the LoRA in the "Parameters" tab of the interface.