llama.cpp/scripts
Jon Haus f77ea24a1a AMD: parse the architecture as supplied by gcnArchName
The value provided by minor is truncated for AMD; parse the value returned by gcnArchName instead to retrieve an accurate ID.

We can also use the common value for GCN4, gfx800, to avoid missing compatible devices.
2025-01-18 15:34:10 -05:00
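
For illustration, a minimal sketch of the idea behind that commit, assuming HIP's hipDeviceProp_t::gcnArchName field (strings such as "gfx906:sramecc+:xnack-"). The helper name parse_gcn_arch, the hexadecimal parsing, and the exact GCN4 range check are illustrative assumptions, not the actual llama.cpp implementation.

```cpp
// Sketch: derive the AMD architecture ID from gcnArchName instead of the
// truncated `minor` property. Assumes a HIP toolchain; compile with hipcc.
#include <hip/hip_runtime.h>

#include <cstdio>
#include <cstdlib>
#include <cstring>

// Parse the numeric part of e.g. "gfx906:sramecc+:xnack-" -> 0x906.
// IDs such as gfx90a contain letters, so the digits are read as hexadecimal
// here (an assumption about how such IDs are usually compared).
static int parse_gcn_arch(const char * gcn_arch_name) {
    const char * p = strstr(gcn_arch_name, "gfx");
    if (p == NULL) {
        return 0; // unknown/unsupported format
    }
    int arch = (int) strtol(p + 3, NULL, 16);
    // Per the commit description, GCN4 variants can be grouped under the
    // common gfx800 value so compatible devices are not missed; the range
    // used below is illustrative.
    if (arch >= 0x800 && arch < 0x900) {
        arch = 0x800;
    }
    return arch;
}

int main(void) {
    int count = 0;
    if (hipGetDeviceCount(&count) != hipSuccess) {
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t prop;
        if (hipGetDeviceProperties(&prop, i) != hipSuccess) {
            continue;
        }
        // prop.minor can be truncated on AMD; gcnArchName carries the full ID.
        printf("device %d: %s -> arch 0x%x\n",
               i, prop.gcnArchName, parse_gcn_arch(prop.gcnArchName));
    }
    return 0;
}
```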
| File | Last commit | Date |
| --- | --- | --- |
| build-info.sh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| check-requirements.sh | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| ci-run.sh | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| compare-commits.sh | scripts : change build path to "build-bench" for compare-commits.sh (#10836) | 2024-12-15 18:44:47 +02:00 |
| compare-llama-bench.py | ggml : more perfo with llamafile tinyblas on x86_64 (#10714) | 2024-12-24 18:54:49 +01:00 |
| debug-test.sh | scripts : fix spelling typo in messages and comments (#9782) | 2024-10-08 09:19:53 +03:00 |
| fetch-amd-ids.py | AMD: parse the architecture as supplied by gcnArchName | 2025-01-18 15:34:10 -05:00 |
| gen-authors.sh | license : update copyright notice + add AUTHORS (#6405) | 2024-04-09 09:23:19 +03:00 |
| gen-unicode-data.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| get-flags.mk | build : pass all warning flags to nvcc via -Xcompiler (#5570) | 2024-02-18 16:21:52 -05:00 |
| get-hellaswag.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| get-pg.sh | scripts : improve get-pg.sh (#4838) | 2024-01-09 19:21:13 +02:00 |
| get-wikitext-2.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| get-wikitext-103.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| get-winogrande.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| install-oneapi.bat | support SYCL backend windows build (#5208) | 2024-01-31 08:08:07 +05:30 |
| qnt-all.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| run-all-perf.sh | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| run-all-ppl.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| sync-ggml-am.sh | scripts : sync gguf (cont) | 2025-01-14 09:40:52 +02:00 |
| sync-ggml.last | sync : ggml | 2025-01-14 10:39:42 +02:00 |
| sync-ggml.sh | scripts : sync gguf | 2025-01-14 09:36:58 +02:00 |
| verify-checksum-models.py | convert.py : add python logging instead of print() (#6511) | 2024-05-03 22:36:41 +03:00 |
| xxd.cmake | build: generate hex dump of server assets during build (#6661) | 2024-04-21 18:48:53 +01:00 |