Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-01-10 04:20:24 +01:00.
llama.cpp / scripts
Latest commit: efd8533ef8 "sync : ggml" (ggml-ci), Georgi Gerganov, 2024-03-04 20:54:23 +02:00

| File | Last commit | Date |
|------|-------------|------|
| build-info.cmake | … | |
| build-info.sh | … | |
| check-requirements.sh | … | |
| ci-run.sh | … | |
| compare-commits.sh | scripts : add helpers script for bench comparing commits (#5521) | 2024-02-16 15:14:40 +02:00 |
| compare-llama-bench.py | llama : cleanup unused mmq flags (#5772) | 2024-03-01 13:39:06 +02:00 |
| convert-gg.sh | … | |
| gen-build-info-cpp.cmake | … | |
| get-flags.mk | build : pass all warning flags to nvcc via -Xcompiler (#5570) | 2024-02-18 16:21:52 -05:00 |
| get-hellaswag.sh | … | |
| get-pg.sh | … | |
| get-wikitext-2.sh | ci : fix wikitext url + compile warnings (#5569) | 2024-02-18 22:39:30 +02:00 |
| get-winogrande.sh | … | |
| hf.sh | … | |
| install-oneapi.bat | … | |
| LlamaConfig.cmake.in | … | |
| pod-llama.sh | scripts : add pod-llama.sh | 2024-03-02 16:54:20 +02:00 |
| qnt-all.sh | … | |
| run-all-perf.sh | … | |
| run-all-ppl.sh | … | |
| run-with-preset.py | … | |
| server-llm.sh | … | |
| sync-ggml-am.sh | … | |
| sync-ggml.last | sync : ggml | 2024-03-04 20:54:23 +02:00 |
| sync-ggml.sh | … | |
| verify-checksum-models.py | … | |
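
The only descriptive commit notes in this listing concern the benchmark helpers: compare-commits.sh was added as a "helpers script for bench comparing commits" (#5521), with compare-llama-bench.py post-processing llama-bench results. A minimal usage sketch follows; the argument handling is an assumption based on those commit messages, not taken from the scripts themselves, so check each script's usage line before relying on it:

```sh
# Hedged sketch (arguments assumed, not verified against the script):
# benchmark two refs of the repository and compare the llama-bench results,
# which compare-llama-bench.py is expected to summarize.
./scripts/compare-commits.sh <commit-or-branch-1> <commit-or-branch-2>
```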