Update docs for the eval-callback binary to use the new llama- prefix.

HanClinto 2024-06-10 14:41:56 -07:00
parent 0be5f399c4
commit e7e03733b2
2 changed files with 2 additions and 2 deletions


@@ -100,7 +100,7 @@ Have a look at existing implementation like `build_llama`, `build_dbrx` or `buil
 When implementing a new graph, please note that the underlying `ggml` backends might not support them all, support for missing backend operations can be added in another PR.
-Note: to debug the inference graph: you can use [eval-callback](../examples/eval-callback).
+Note: to debug the inference graph: you can use [llama-eval-callback](../examples/eval-callback).
 ## GGUF specification


@@ -6,7 +6,7 @@ It simply prints to the console all operations and tensor data.
 Usage:
 ```shell
-eval-callback \
+llama-eval-callback \
 --hf-repo ggml-org/models \
 --hf-file phi-2/ggml-model-q4_0.gguf \
 --model phi-2-q4_0.gguf \