Update Using-LoRAs.md

Repository: https://github.com/oobabooga/text-generation-webui.git
Commit: 9aad6d07de (parent: df18ae7d6c)
@@ -8,15 +8,16 @@ Based on https://github.com/tloen/alpaca-lora
 python download-model.py tloen/alpaca-lora-7b
 ```

-2. Load the LoRA. 16-bit, 8-bit, and CPU modes work:
+2. Load the LoRA. 16-bit, `--load-in-8bit`, `--load-in-4bit`, and CPU modes work:

 ```
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-8bit
+python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-4bit
 python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --cpu
 ```

-* For using LoRAs in 4-bit mode, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).
+* For using LoRAs with GPTQ quantized models, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).

 * Instead of using the `--lora` command-line flag, you can also select the LoRA in the "Parameters" tab of the interface.
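Context for the change: for Transformers models, the webui applies LoRAs through the PEFT library, which is roughly what the `--lora` flag does behind the scenes. Below is a minimal standalone sketch of the equivalent steps; the local paths are assumptions, and the `load_in_8bit` kwarg mirrors the `--load-in-8bit` example in the diff above. Treat it as an illustration, not the webui's exact code path.

```python
# Sketch: load llama-7b-hf in 8-bit and apply the alpaca-lora-7b adapter.
# Assumes transformers, peft, and bitsandbytes are installed; paths are
# hypothetical and should point at your local model/LoRA directories.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "models/llama-7b-hf"             # assumed local path to the base model
adapter_path = "loras/tloen_alpaca-lora-7b"  # assumed local path to the downloaded LoRA

tokenizer = AutoTokenizer.from_pretrained(base_path)
model = AutoModelForCausalLM.from_pretrained(
    base_path,
    load_in_8bit=True,   # analogous to --load-in-8bit
    device_map="auto",
)

# PeftModel wraps the frozen base weights and applies the adapter's
# low-rank matrices on top; the base weights themselves are unchanged.
model = PeftModel.from_pretrained(model, adapter_path)
```

For unquantized 16-bit models, PEFT also offers `merge_and_unload()` to fold the adapter into the base weights and remove the per-forward adapter overhead.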