llama.cpp/tests (latest commit: 2024-03-04 20:42:48 +02:00)
File                         | Last commit                                                           | Date
-----------------------------|-----------------------------------------------------------------------|---------------------------
.gitignore                   | tests : .gitignore obj files                                          | 2024-02-08 09:46:47 +02:00
CMakeLists.txt               | llama : add llama_chat_apply_template() (#5538)                       | 2024-02-19 10:23:37 +02:00
get-model.cpp                | ci : add model tests + script wrapper (#4586)                         | 2024-01-26 14:18:00 +02:00
get-model.h                  | ci : add model tests + script wrapper (#4586)                         | 2024-01-26 14:18:00 +02:00
test-autorelease.cpp         | ggml : add numa options (#5377)                                       | 2024-02-16 11:31:07 +02:00
test-backend-ops.cpp         | Merge branch 'master' into gg/flash-attn                              | 2024-03-04 20:42:48 +02:00
test-c.c                     | Nomic Vulkan backend (#4456)                                          | 2024-01-29 15:50:50 -05:00
test-chat-template.cpp       | Add Gemma chat template (#5665)                                       | 2024-02-22 19:10:21 +01:00
test-double-float.cpp        | ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861)                 | 2023-10-30 19:19:15 +02:00
test-grad0.cpp               | cuda : improve cuda pool efficiency using virtual memory (#4606)      | 2023-12-24 14:34:22 +01:00
test-grammar-parser.cpp      | ggml, common, examples, tests : fixed type arguments in printf (#5528)| 2024-02-18 18:20:12 +02:00
test-llama-grammar.cpp       | ggml, common, examples, tests : fixed type arguments in printf (#5528)| 2024-02-18 18:20:12 +02:00
test-model-load-cancel.cpp   | ggml : add numa options (#5377)                                       | 2024-02-16 11:31:07 +02:00
test-opt.cpp                 | code : normalize enum names (#5697)                                   | 2024-02-25 12:09:09 +02:00
test-quantize-fns.cpp        | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range (#5721) | 2024-02-26 18:28:38 +02:00
test-quantize-perf.cpp       | ggml : add mmla kernels for quantized GEMM (#4966)                    | 2024-02-11 15:22:33 +02:00
test-rope.cpp                | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2023-09-28 19:04:36 +03:00
test-sampling.cpp            | sampling: fix top_k <= 0 (#5388)                                      | 2024-02-08 09:46:30 +01:00
test-tokenizer-0-falcon.cpp  | ggml : add numa options (#5377)                                       | 2024-02-16 11:31:07 +02:00
test-tokenizer-0-falcon.py   | ci : add flake8 to github actions (python linting) (#4129)            | 2023-11-20 11:35:47 +01:00
test-tokenizer-0-llama.cpp   | ggml : add numa options (#5377)                                       | 2024-02-16 11:31:07 +02:00
test-tokenizer-0-llama.py    | ci : add flake8 to github actions (python linting) (#4129)            | 2023-11-20 11:35:47 +01:00
test-tokenizer-1-bpe.cpp     | ggml : add numa options (#5377)                                       | 2024-02-16 11:31:07 +02:00
test-tokenizer-1-llama.cpp   | ggml : add numa options (#5377)                                       | 2024-02-16 11:31:07 +02:00