# export-lora

Apply LoRA adapters to a base model and export the resulting model.

```
usage: llama-export-lora [options]

options:
  -m, --model            model path from which to load base model (default '')
  --lora FNAME           path to LoRA adapter (can be repeated to use multiple adapters)
  --lora-scaled FNAME S  path to LoRA adapter with user defined scaling S (can be repeated to use multiple adapters)
  -t, --threads N        number of threads to use during computation (default: 4)
  -o, --output FNAME     output file (default: 'ggml-lora-merged-f16.gguf')
```

For example:

```bash
./bin/llama-export-lora \
    -m open-llama-3b-v2-q8_0.gguf \
    -o open-llama-3b-v2-q8_0-english2tokipona-chat.gguf \
    --lora lora-open-llama-3b-v2-q8_0-english2tokipona-chat-LATEST.bin
```

Multiple LoRA adapters can be applied by passing multiple `--lora FNAME` or `--lora-scaled FNAME S` command-line parameters.
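As a sketch of stacking adapters, each `--lora` or `--lora-scaled` flag adds one adapter to the merge, and the `S` argument of `--lora-scaled` scales that adapter's contribution. The model and adapter filenames below are placeholders, not files shipped with the project:

```bash
# Hypothetical example: merge two adapters into one exported model.
# The second adapter's weights are scaled by 0.5 before merging.
./bin/llama-export-lora \
    -m base-model-q8_0.gguf \
    -o base-model-q8_0-merged.gguf \
    --lora adapter-one.gguf \
    --lora-scaled adapter-two.gguf 0.5
```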