Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-11-01 15:40:21 +01:00)

Commit 1c641e6aac
* `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew
* server: update refs -> llama-server
gitignore llama-server
* server: simplify nix package
* main: update refs -> llama
fix examples/main ref
* main/server: fix targets
* update more names
* Update build.yml
* rm accidentally checked in bins
* update straggling refs
* Update .gitignore
* Update server-llm.sh
* main: target name -> llama-cli
* Prefix all example bins w/ llama-
* fix main refs
* rename {main->llama}-cmake-pkg binary
* prefix more cmake targets w/ llama-
* add/fix gbnf-validator subfolder to cmake
* sort cmake example subdirs
* rm bin files
* fix llama-lookup-* Makefile rules
* gitignore /llama-*
* rename Dockerfiles
* rename llama|main -> llama-cli; consistent RPM bin prefixes
* fix some missing -cli suffixes
* rename dockerfile w/ llama-cli
* rename(make): llama-baby-llama
* update dockerfile refs
* more llama-cli(.exe)
* fix test-eval-callback
* rename: llama-cli-cmake-pkg(.exe)
* address gbnf-validator unused fread warning (switched to C++ / ifstream)
* add two missing llama- prefixes
* Updating docs for eval-callback binary to use new `llama-` prefix.
* Updating a few lingering doc references for rename of main to llama-cli
* Updating `run-with-preset.py` to use new binary names.
* Updating docs around `perplexity` binary rename.
* Updating documentation references for lookup-merge and export-lora
* Updating two small `main` references missed earlier in the finetune docs.
* Update apps.nix
* update grammar/README.md w/ new llama-* names
* update llama-rpc-server bin name + doc
* Revert "update llama-rpc-server bin name + doc"
This reverts commit e474ef1df4.
* add hot topic notice to README.md
* Update README.md
* Update README.md
* rename gguf-split & quantize bins refs in **/tests.sh
---------
Co-authored-by: HanClinto <hanclinto@gmail.com>
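For downstream scripts and docs affected by this change, a minimal before/after sketch of the rename (the model path and prompt are placeholders, not taken from this commit):

```sh
# Before: binaries built as `main` and `server`
./main   -m models/7B/model.gguf -p "Hello"
./server -m models/7B/model.gguf

# After: consistent llama- prefix
./llama-cli    -m models/7B/model.gguf -p "Hello"
./llama-server -m models/7B/model.gguf
```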
42 lines · 1.3 KiB · Bash · Executable File
#!/bin/bash

set -e

cd "$(dirname "$0")/.." || exit

MODEL="${MODEL:-./models/ggml-vic13b-uncensored-q5_0.bin}"
PROMPT_TEMPLATE=${PROMPT_TEMPLATE:-./prompts/chat.txt}
USER_NAME="### Human"
AI_NAME="### Assistant"

# Adjust to the number of CPU cores you want to use.
N_THREAD="${N_THREAD:-8}"
# Number of tokens to predict (made it larger than default because we want a long interaction)
N_PREDICTS="${N_PREDICTS:-2048}"

# Note: you can also override the generation options by specifying them on the command line:
# For example, override the context size by doing: ./chatLLaMa --ctx_size 1024
GEN_OPTIONS="${GEN_OPTIONS:---ctx_size 2048 --temp 0.7 --top_k 40 --top_p 0.5 --repeat_last_n 256 --batch_size 1024 --repeat_penalty 1.17647}"

DATE_TIME=$(date +%H:%M)
DATE_YEAR=$(date +%Y)

PROMPT_FILE=$(mktemp -t llamacpp_prompt.XXXXXXX.txt)

sed -e "s/\[\[USER_NAME\]\]/$USER_NAME/g" \
    -e "s/\[\[AI_NAME\]\]/$AI_NAME/g" \
    -e "s/\[\[DATE_TIME\]\]/$DATE_TIME/g" \
    -e "s/\[\[DATE_YEAR\]\]/$DATE_YEAR/g" \
    $PROMPT_TEMPLATE > $PROMPT_FILE

# shellcheck disable=SC2086 # Intended splitting of GEN_OPTIONS
./bin/llama-cli $GEN_OPTIONS \
    --model "$MODEL" \
    --threads "$N_THREAD" \
    --n_predict "$N_PREDICTS" \
    --color --interactive \
    --file ${PROMPT_FILE} \
    --reverse-prompt "### Human:" \
    --in-prefix ' ' \
    "$@"
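The script fills in the `[[USER_NAME]]`, `[[AI_NAME]]`, `[[DATE_TIME]]` and `[[DATE_YEAR]]` markers in the prompt template via sed, then launches the renamed `llama-cli` binary; paths, thread count and generation options can all be overridden from the environment, and any extra flags are forwarded through `"$@"`. A minimal usage sketch, assuming the script lives at `examples/chat-vicuna.sh` (name assumed) and that `bin/llama-cli` exists relative to the script's parent directory; the model path and option values are placeholders:

```sh
# Hypothetical invocation; adjust the script path and model to your setup.
MODEL=./models/my-model-q5_0.gguf \
N_THREAD=12 \
GEN_OPTIONS="--ctx_size 1024 --temp 0.8" \
./examples/chat-vicuna.sh --seed 1234
```

Whatever file `PROMPT_TEMPLATE` points at is expected to contain the literal placeholder markers listed above, since the sed pipeline rewrites them before the result is passed to `--file`.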