| Name | Last commit message | Last commit date |
| --- | --- | --- |
| build-info.cmake | cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970) | 2023-11-27 21:25:42 +02:00 |
| build-info.sh | build : link against build info instead of compiling against it (#3879) | 2023-11-02 08:50:16 +02:00 |
| check-requirements.sh | llama : fix BPE pre-tokenization (#6920) | 2024-04-29 16:58:41 +03:00 |
| ci-run.sh | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| compare-commits.sh | ggml : group all experts in a single ggml_mul_mat_id (#6505) | 2024-04-18 15:18:48 +02:00 |
| compare-llama-bench.py | llama-bench : add pp+tg test type (#7199) | 2024-05-10 18:03:54 +02:00 |
| convert-gg.sh | scripts : helper convert script | 2023-08-27 15:24:58 +03:00 |
| debug-test.sh | Added a single test function script and fix debug-test.sh to be more robust (#7279) | 2024-05-17 22:40:14 +10:00 |
| gen-authors.sh | license : update copyright notice + add AUTHORS (#6405) | 2024-04-09 09:23:19 +03:00 |
| gen-build-info-cpp.cmake | cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970) | 2023-11-27 21:25:42 +02:00 |
| gen-unicode-data.py | Unicode codepoint flags for custom regexs (#7245) | 2024-05-18 01:09:13 +02:00 |
| get-flags.mk | build : pass all warning flags to nvcc via -Xcompiler (#5570) | 2024-02-18 16:21:52 -05:00 |
| get-hellaswag.sh | scripts : add get-winogrande.sh | 2024-01-18 20:45:39 +02:00 |
| get-pg.sh | scripts : improve get-pg.sh (#4838) | 2024-01-09 19:21:13 +02:00 |
| get-wikitext-2.sh | model: support arch DbrxForCausalLM (#6515) | 2024-04-13 11:33:52 +02:00 |
| get-wikitext-103.sh | lookup: complement data from context with general text statistics (#5479) | 2024-03-23 01:24:36 +01:00 |
| get-winogrande.sh | scripts : add get-winogrande.sh | 2024-01-18 20:45:39 +02:00 |
| hf.sh | scripts : add --outdir option to hf.sh (#6600) | 2024-04-11 16:22:47 +03:00 |
| install-oneapi.bat | support SYCL backend windows build (#5208) | 2024-01-31 08:08:07 +05:30 |
| LlamaConfig.cmake.in | llama : remove MPI backend (#7395) | 2024-05-20 01:17:03 +02:00 |
| pod-llama.sh | cuda : rename build flag to LLAMA_CUDA (#6299) | 2024-03-26 01:16:01 +01:00 |
| qnt-all.sh | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| run-all-perf.sh | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| run-all-ppl.sh | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| run-with-preset.py | convert.py : add python logging instead of print() (#6511) | 2024-05-03 22:36:41 +03:00 |
| server-llm.sh | cuda : rename build flag to LLAMA_CUDA (#6299) | 2024-03-26 01:16:01 +01:00 |
| sync-ggml-am.sh | script : sync ggml-rpc | 2024-05-14 19:14:38 +03:00 |
| sync-ggml.last | sync : ggml | 2024-05-29 14:29:52 +03:00 |
| sync-ggml.sh | script : sync ggml-rpc | 2024-05-14 19:14:38 +03:00 |
| verify-checksum-models.py | convert.py : add python logging instead of print() (#6511) | 2024-05-03 22:36:41 +03:00 |
| xxd.cmake | build : generate hex dump of server assets during build (#6661) | 2024-04-21 18:48:53 +01:00 |