From 1f5ec2c0b42095127296aa3597753541f9d8955d Mon Sep 17 00:00:00 2001
From: HanClinto
Date: Mon, 10 Jun 2024 16:12:50 -0700
Subject: [PATCH] Updating two small `main` references missed earlier in the
 finetune docs.

---
 examples/finetune/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/examples/finetune/README.md b/examples/finetune/README.md
index ea72f5d95..a6ae64983 100644
--- a/examples/finetune/README.md
+++ b/examples/finetune/README.md
@@ -38,9 +38,9 @@ After 10 more iterations:
 Checkpoint files (`--checkpoint-in FN`, `--checkpoint-out FN`) store the training process. When the input checkpoint file does not exist, it will begin finetuning a new randomly initialized adapter.

 llama.cpp compatible LORA adapters will be saved with filename specified by `--lora-out FN`.
-These LORA adapters can then be used by `main` together with the base model, like in the 'predict' example command above.
+These LORA adapters can then be used by `llama-cli` together with the base model, like in the 'predict' example command above.

-In `main` you can also load multiple LORA adapters, which will then be mixed together.
+In `llama-cli` you can also load multiple LORA adapters, which will then be mixed together.

 For example if you have two LORA adapters `lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin` and `lora-open-llama-3b-v2-q8_0-bible-LATEST.bin`, you can mix them together like this:
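
For reference, a minimal sketch of the kind of adapter-mixing invocation the README goes on to show after this hunk, assuming a base model file named `open-llama-3b-v2-q8_0.gguf` (the adapter filenames are the ones mentioned in the hunk above; the exact command and prompt in the README may differ):

```sh
# Sketch: run llama-cli with two LORA adapters applied on top of the base model.
# The base model filename and prompt are assumptions for illustration;
# the adapter filenames come from the finetune README excerpt above.
./llama-cli -m open-llama-3b-v2-q8_0.gguf \
  --lora lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin \
  --lora lora-open-llama-3b-v2-q8_0-bible-LATEST.bin \
  -p "To be, or not to be"
```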