| File | Last commit message | Last commit date |
|---|---|---|
| build-info.cmake | cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970) | 2023-11-27 21:25:42 +02:00 |
| build-info.sh | build : link against build info instead of compiling against it (#3879) | 2023-11-02 08:50:16 +02:00 |
| check-requirements.sh | Move convert.py to examples/convert-legacy-llama.py (#7430) | 2024-05-30 21:40:00 +10:00 |
| ci-run.sh | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| compare-commits.sh | llama-bench : allow using a different printer for stderr with -oe (#7722) | 2024-06-04 14:32:42 +02:00 |
| compare-llama-bench.py | ggml : remove OpenCL (#7735) | 2024-06-04 21:23:20 +03:00 |
| convert-gg.sh | Move convert.py to examples/convert-legacy-llama.py (#7430) | 2024-05-30 21:40:00 +10:00 |
| debug-test.sh | Added a single test function script and fix debug-test.sh to be more robust (#7279) | 2024-05-17 22:40:14 +10:00 |
| gen-authors.sh | license : update copyright notice + add AUTHORS (#6405) | 2024-04-09 09:23:19 +03:00 |
| gen-build-info-cpp.cmake | cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970) | 2023-11-27 21:25:42 +02:00 |
| gen-unicode-data.py | tokenizer : BPE fixes (#7530) | 2024-06-18 18:40:52 +02:00 |
| get-flags.mk | build : pass all warning flags to nvcc via -Xcompiler (#5570) | 2024-02-18 16:21:52 -05:00 |
| get-hellaswag.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| get-pg.sh | scripts : improve get-pg.sh (#4838) | 2024-01-09 19:21:13 +02:00 |
| get-wikitext-2.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| get-wikitext-103.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| get-winogrande.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| hf.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| install-oneapi.bat | support SYCL backend windows build (#5208) | 2024-01-31 08:08:07 +05:30 |
| LlamaConfig.cmake.in | ggml : remove OpenCL (#7735) | 2024-06-04 21:23:20 +03:00 |
| pod-llama.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| qnt-all.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| run-all-perf.sh | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| run-all-ppl.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| run-with-preset.py | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| server-llm.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| sync-ggml-am.sh | ggml : remove OpenCL (#7735) | 2024-06-04 21:23:20 +03:00 |
| sync-ggml.last | ggml : sync | 2024-06-18 09:50:45 +03:00 |
| sync-ggml.sh | ggml : remove OpenCL (#7735) | 2024-06-04 21:23:20 +03:00 |
| verify-checksum-models.py | convert.py : add python logging instead of print() (#6511) | 2024-05-03 22:36:41 +03:00 |
| xxd.cmake | build : generate hex dump of server assets during build (#6661) | 2024-04-21 18:48:53 +01:00 |