Mirror of https://github.com/ggerganov/llama.cpp.git, last synced 2025-01-10 12:30:50 +01:00.
llama.cpp / tests
Latest commit: 20d7740a9b "ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183)" by Georgi Gerganov, 2023-07-11 22:53:34 +03:00
File                    Last updated                  Last commit
CMakeLists.txt          2023-07-07 19:24:01 +03:00    ggml : change ggml_graph_compute() API to not require context (#1999)
test-double-float.c     2023-03-28 19:48:20 +03:00    all : be more strict about converting float to double (#458)
test-grad0.c            2023-07-11 22:53:34 +03:00    ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183)
test-opt.c              2023-07-11 22:53:34 +03:00    ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183)
test-quantize-fns.cpp   2023-07-05 19:13:06 +03:00    ggml : generalize quantize_fns for simpler FP16 handling (#1237)
test-quantize-perf.cpp  2023-07-05 19:13:06 +03:00    ggml : generalize quantize_fns for simpler FP16 handling (#1237)
test-sampling.cpp       2023-06-24 13:15:01 +03:00    llama : fix top-p sampling to match the canonical definition (#1953)
test-tokenizer-0.cpp    2023-07-10 18:49:56 +03:00    mpi : add support for distributed inference via MPI (#2099)