Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-10-31 15:10:16 +01:00)

Commit b804b1ef77
* gguf-debug: Example how to use ggml callback for debugging
* gguf-debug: no mutex, verify type, fix stride.
* llama: cv eval: move cb eval field in common gpt_params
* ggml_debug: use common gpt_params to pass cb eval. Fix get tensor SIGV random.
* ggml_debug: ci: add tests
* ggml_debug: EOL in CMakeLists.txt
* ggml_debug: Remove unused param n_batch, no batching here
* ggml_debug: fix trailing spaces
* ggml_debug: fix trailing spaces
* common: fix cb_eval and user data not initialized
* ci: build revert label
* ggml_debug: add main test label
* doc: add a model: add a link to ggml-debug
* ggml-debug: add to make toolchain
* ggml-debug: tests add the main label
* ggml-debug: ci add test curl label
* common: allow the warmup to be disabled in llama_init_from_gpt_params
* ci: add curl test
* ggml-debug: better tensor type support
* gitignore : ggml-debug
* ggml-debug: printing also the sum of each tensor
* ggml-debug: remove block size
* eval-callback: renamed from ggml-debug
* eval-callback: fix make toolchain

Co-authored-by: slaren <slarengh@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Workflow files:

* bench.yml
* build.yml
* close-issue.yml
* code-coverage.yml
* docker.yml
* editorconfig.yml
* gguf-publish.yml
* nix-ci-aarch64.yml
* nix-ci.yml
* nix-flake-update.yml
* nix-publish-flake.yml
* python-check-requirements.yml
* python-lint.yml
* server.yml
* zig-build.yml