mirror of https://github.com/ggerganov/llama.cpp.git
synced 2025-01-10 20:40:24 +01:00

Commit 1c641e6aac: `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew

* server: update refs -> llama-server; gitignore llama-server
* server: simplify nix package
* main: update refs -> llama; fix examples/main ref
* main/server: fix targets
* update more names
* Update build.yml
* rm accidentally checked in bins
* update straggling refs
* Update .gitignore
* Update server-llm.sh
* main: target name -> llama-cli
* Prefix all example bins w/ llama-
* fix main refs
* rename {main->llama}-cmake-pkg binary
* prefix more cmake targets w/ llama-
* add/fix gbnf-validator subfolder to cmake
* sort cmake example subdirs
* rm bin files
* fix llama-lookup-* Makefile rules
* gitignore /llama-*
* rename Dockerfiles
* rename llama|main -> llama-cli; consistent RPM bin prefixes
* fix some missing -cli suffixes
* rename dockerfile w/ llama-cli
* rename(make): llama-baby-llama
* update dockerfile refs
* more llama-cli(.exe)
* fix test-eval-callback
* rename: llama-cli-cmake-pkg(.exe)
* address gbnf-validator unused fread warning (switched to C++ / ifstream)
* add two missing llama- prefixes
* Updating docs for eval-callback binary to use new `llama-` prefix.
* Updating a few lingering doc references for rename of main to llama-cli
* Updating `run-with-preset.py` to use new binary names. Updating docs around `perplexity` binary rename.
* Updating documentation references for lookup-merge and export-lora
* Updating two small `main` references missed earlier in the finetune docs.
* Update apps.nix
* update grammar/README.md w/ new llama-* names
* update llama-rpc-server bin name + doc
* Revert "update llama-rpc-server bin name + doc" (reverts commit e474ef1df481fd8936cd7d098e3065d7de378930)
* add hot topic notice to README.md
* Update README.md
* Update README.md
* rename gguf-split & quantize bins refs in **/tests.sh

Co-authored-by: HanClinto <hanclinto@gmail.com>
35 lines · 1.1 KiB · Bash
#!/bin/bash
cd "$(dirname "$0")"  # run relative to this script's location...
cd ../..              # ...then from the repository root

EXE="./llama-finetune"
|
|
|
|
if [[ ! $LLAMA_MODEL_DIR ]]; then LLAMA_MODEL_DIR="./models"; fi
|
|
if [[ ! $LLAMA_TRAINING_DIR ]]; then LLAMA_TRAINING_DIR="."; fi
|
|
|
|
# MODEL="$LLAMA_MODEL_DIR/openllama-3b-v2-q8_0.gguf" # This is the model the readme uses.
MODEL="$LLAMA_MODEL_DIR/openllama-3b-v2.gguf" # An f16 model. Note: with "-g", finetune writes an f32-format .bin LoRA file, which "llama-cli --lora" does not yet support for GPU inferencing.
while getopts "dg" opt; do
  case $opt in
    d)
      # -d: run the binary under gdb.
      DEBUGGER="gdb --args"
      ;;
    g)
      # -g: use the Release build with GPU offload
      # (assumes the cmake binary also carries the llama- prefix after the rename).
      EXE="./build/bin/Release/llama-finetune"
      GPUARG="--gpu-layers 25"
      ;;
  esac
done

$DEBUGGER $EXE \
        --model-base "$MODEL" \
        $GPUARG \
        --checkpoint-in chk-ol3b-shakespeare-LATEST.gguf \
        --checkpoint-out chk-ol3b-shakespeare-ITERATION.gguf \
        --lora-out lora-ol3b-shakespeare-ITERATION.bin \
        --train-data "$LLAMA_TRAINING_DIR/shakespeare.txt" \
        --save-every 10 \
        --threads 10 --adam-iter 30 --batch 4 --ctx 64 \
        --use-checkpointing
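
# Example usage (paths and model locations are assumptions; adjust for your checkout):
#   ./examples/finetune/finetune.sh          # CPU build in the repo root
#   ./examples/finetune/finetune.sh -g       # Release build with GPU offload
#   ./examples/finetune/finetune.sh -d       # run under gdb
#   LLAMA_MODEL_DIR=/path/to/models LLAMA_TRAINING_DIR=/path/to/data ./examples/finetune/finetune.sh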