llama.cpp/scripts
Latest commit: adc9ff3841 (slaren, 2024-06-04 14:32:42 +02:00): llama-bench : allow using a different printer for stderr with -oe (#7722); compare-commits.sh : hide stdout, use -oe to print markdown
| File | Last commit message | Date |
| --- | --- | --- |
| build-info.cmake | cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970) | 2023-11-27 21:25:42 +02:00 |
| build-info.sh | build : link against build info instead of compiling against it (#3879) | 2023-11-02 08:50:16 +02:00 |
| check-requirements.sh | Move convert.py to examples/convert-legacy-llama.py (#7430) | 2024-05-30 21:40:00 +10:00 |
| ci-run.sh | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| compare-commits.sh | llama-bench : allow using a different printer for stderr with -oe (#7722) | 2024-06-04 14:32:42 +02:00 |
| compare-llama-bench.py | scripts: update compare_llama_bench.py [no ci] (#7673) | 2024-05-31 16:26:21 +02:00 |
| convert-gg.sh | Move convert.py to examples/convert-legacy-llama.py (#7430) | 2024-05-30 21:40:00 +10:00 |
| debug-test.sh | Added a single test function script and fix debug-test.sh to be more robust (#7279) | 2024-05-17 22:40:14 +10:00 |
| gen-authors.sh | license : update copyright notice + add AUTHORS (#6405) | 2024-04-09 09:23:19 +03:00 |
| gen-build-info-cpp.cmake | cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970) | 2023-11-27 21:25:42 +02:00 |
| gen-unicode-data.py | Unicode codepoint flags for custom regexs (#7245) | 2024-05-18 01:09:13 +02:00 |
| get-flags.mk | build : pass all warning flags to nvcc via -Xcompiler (#5570) | 2024-02-18 16:21:52 -05:00 |
| get-hellaswag.sh | scripts : add get-winogrande.sh | 2024-01-18 20:45:39 +02:00 |
| get-pg.sh | scripts : improve get-pg.sh (#4838) | 2024-01-09 19:21:13 +02:00 |
| get-wikitext-2.sh | model: support arch DbrxForCausalLM (#6515) | 2024-04-13 11:33:52 +02:00 |
| get-wikitext-103.sh | lookup: complement data from context with general text statistics (#5479) | 2024-03-23 01:24:36 +01:00 |
| get-winogrande.sh | scripts : add get-winogrande.sh | 2024-01-18 20:45:39 +02:00 |
| hf.sh | scripts : add --outdir option to hf.sh (#6600) | 2024-04-11 16:22:47 +03:00 |
| install-oneapi.bat | support SYCL backend windows build (#5208) | 2024-01-31 08:08:07 +05:30 |
| LlamaConfig.cmake.in | llama : remove MPI backend (#7395) | 2024-05-20 01:17:03 +02:00 |
| pod-llama.sh | Move convert.py to examples/convert-legacy-llama.py (#7430) | 2024-05-30 21:40:00 +10:00 |
| qnt-all.sh | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| run-all-perf.sh | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| run-all-ppl.sh | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| run-with-preset.py | convert.py : add python logging instead of print() (#6511) | 2024-05-03 22:36:41 +03:00 |
| server-llm.sh | cuda : rename build flag to LLAMA_CUDA (#6299) | 2024-03-26 01:16:01 +01:00 |
| sync-ggml-am.sh | scripts : remove mpi remnants | 2024-05-29 14:31:18 +03:00 |
| sync-ggml.last | sync : ggml | 2024-05-29 14:29:52 +03:00 |
| sync-ggml.sh | scripts : remove mpi remnants | 2024-05-29 14:31:18 +03:00 |
| verify-checksum-models.py | convert.py : add python logging instead of print() (#6511) | 2024-05-03 22:36:41 +03:00 |
| xxd.cmake | build: generate hex dump of server assets during build (#6661) | 2024-04-21 18:48:53 +01:00 |