Mirror of https://github.com/ggerganov/llama.cpp.git
synced 2024-12-26 14:20:31 +01:00
Commit 75cd4c7729
* ci: bench: support SSE and fix prompt processing time; server: add token usage in stream mode
* ci: bench: README.md EOL
* ci: bench: remove total pp and tg, as they are not accurate
* ci: bench: fix the case when no tokens are generated
* ci: bench: switch to the 95th percentile for pp and tg, as it is closer to what the server exports in metrics
* ci: bench: fix finish-reason rate
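The switch to the 95th percentile for prompt processing (pp) and token generation (tg) times can be sketched as follows. This is a hypothetical helper, not the actual llama.cpp bench script, which may compute the percentile differently:

```python
import statistics

def p95(samples):
    """Return the 95th-percentile value of a list of timing samples.

    Hypothetical illustration of the percentile reported by the bench CI
    for pp and tg times; the real bench tooling may use another method.
    """
    # statistics.quantiles with n=100 yields the 1st..99th percentile
    # cut points; index 94 is the 95th percentile.
    return statistics.quantiles(samples, n=100)[94]

# Example: 100 evenly spaced timings in milliseconds.
timings_ms = list(range(1, 101))
print(p95(timings_ms))  # → 95.95 (exclusive-method interpolation)
```

Unlike a mean or a total, a high percentile is robust to a few slow outlier requests, which is why it tracks the server's exported metrics more closely than the removed "total pp and tg" numbers.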
bench.yml
build.yml
close-issue.yml
code-coverage.yml
docker.yml
editorconfig.yml
gguf-publish.yml
nix-ci-aarch64.yml
nix-ci.yml
nix-flake-update.yml
nix-publish-flake.yml
python-check-requirements.yml
python-lint.yml
server.yml
zig-build.yml