Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-25 13:58:46 +01:00)
Commit d01bccde9f:

* ci : run ctest
* ci : add open llama 3B-v2 tests
* ci : disable wget progress output
* ci : add open llama 3B-v2 tg tests for q4 and q5 quantizations
* tests : try to fix tail free sampling test
* ci : add K-quants
* ci : add short perplexity tests
* ci : add README.md
* ppl : add --chunks argument to limit max number of chunks
* ci : update README
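The `--chunks` argument mentioned above limits how many context-sized chunks of the evaluation file a perplexity run processes, which keeps the CI perplexity check short. A minimal sketch of how an invocation might look; the binary path, model filename, and dataset path here are assumptions for illustration, not taken from the commit:

```shell
# Hypothetical invocation of the perplexity tool with the new --chunks flag.
# Paths and model name are placeholders, not from the source.
MODEL=open-llama-3b-v2-q4_0.gguf
CMD="./bin/perplexity -m ${MODEL} -f wikitext-2-raw/wiki.test.raw --chunks 4"
# --chunks 4 stops evaluation after the first 4 chunks instead of the
# whole file, so the CI job finishes quickly.
echo "${CMD}"
```

Without `--chunks`, the tool would evaluate every chunk of the input file, which is too slow for a short CI test.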
Files:

* CMakeLists.txt
* test-double-float.c
* test-grad0.c
* test-opt.c
* test-quantize-fns.cpp
* test-quantize-perf.cpp
* test-sampling.cpp
* test-tokenizer-0.cpp