Update Using-LoRAs.md

oobabooga 2023-06-01 11:32:41 -03:00 committed by GitHub
parent df18ae7d6c
commit 9aad6d07de

@@ -8,15 +8,16 @@ Based on https://github.com/tloen/alpaca-lora
 python download-model.py tloen/alpaca-lora-7b
 ```
-2. Load the LoRA. 16-bit, 8-bit, and CPU modes work:
+2. Load the LoRA. 16-bit, `--load-in-8bit`, `--load-in-4bit`, and CPU modes work:
 ```
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-8bit
+python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-4bit
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --cpu
 ```
-* For using LoRAs in 4-bit mode, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).
+* For using LoRAs with GPTQ quantized models, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).
 * Instead of using the `--lora` command-line flag, you can also select the LoRA in the "Parameters" tab of the interface.
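Conceptually, what the `--lora` flag loads is a pair of small low-rank matrices whose product is added to a frozen base weight. A minimal numpy sketch of that idea (toy sizes and names, not the webui's actual loading code):

```python
import numpy as np

# Toy illustration of the LoRA update: the adapter stores two small
# matrices A and B, and inference uses W + scale * (B @ A).
rng = np.random.default_rng(0)
d, r = 8, 2                            # hidden size and LoRA rank (toy values)
W = rng.standard_normal((d, d))        # frozen base weight
A = rng.standard_normal((r, d))        # LoRA down-projection
B = np.zeros((d, r))                   # LoRA up-projection (zero-initialized)
alpha = 16                             # conventional scaling hyperparameter

W_merged = W + (alpha / r) * (B @ A)   # effective weight used at inference
```

With `B` still at its zero initialization the merged weight equals the base weight, which is why an untrained adapter leaves the model's behavior unchanged.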