Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2025-01-10 12:30:50 +01:00)

llama.cpp / tests

Latest commit: b1f4290953 by wzy (2023-07-19 10:01:11 +03:00)
cmake : install targets (#2256); fix #2252

File                   | Last commit                                                          | Date
CMakeLists.txt         | cmake : install targets (#2256)                                      | 2023-07-19 10:01:11 +03:00
test-double-float.c    | all : be more strict about converting float to double (#458)        | 2023-03-28 19:48:20 +03:00
test-grad0.c           | ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183) | 2023-07-11 22:53:34 +03:00
test-opt.c             | ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183) | 2023-07-11 22:53:34 +03:00
test-quantize-fns.cpp  | ggml : generalize quantize_fns for simpler FP16 handling (#1237)     | 2023-07-05 19:13:06 +03:00
test-quantize-perf.cpp | ggml : generalize quantize_fns for simpler FP16 handling (#1237)     | 2023-07-05 19:13:06 +03:00
test-sampling.cpp      | ci : integrate with ggml-org/ci (#2250)                              | 2023-07-18 14:24:43 +03:00
test-tokenizer-0.cpp   | mpi : add support for distributed inference via MPI (#2099)          | 2023-07-10 18:49:56 +03:00
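
The files above are small standalone checks covering the quantization, gradient/optimization, sampling, and tokenizer code paths. Purely as an illustration of the style of check involved, and not the actual contents of test-double-float.c or any other file listed here (the function names and tolerance below are invented for this sketch), a minimal float-vs-double consistency test might look like the following C program:

/*
 * Hypothetical sketch only: compares a single-precision computation
 * against a double-precision reference within a tolerance. Not taken
 * from the llama.cpp test sources.
 */
#include <assert.h>
#include <math.h>
#include <stdio.h>

/* Reference computed in double precision. */
static double silu_f64(double x) {
    return x / (1.0 + exp(-x));
}

/* Candidate computed in single precision, as a kernel might do. */
static float silu_f32(float x) {
    return x / (1.0f + expf(-x));
}

int main(void) {
    /* Sweep a range of inputs and require the float path to stay
     * within a small tolerance of the double reference. */
    const double tol = 1e-5;
    for (double x = -8.0; x <= 8.0; x += 1.0 / 64.0) {
        double ref = silu_f64(x);
        double got = (double) silu_f32((float) x);
        assert(fabs(got - ref) <= tol * (1.0 + fabs(ref)));
    }
    printf("float/double consistency check passed\n");
    return 0;
}

Such a program builds with any C compiler (link against the math library, e.g. -lm) and fails loudly via assert if the single-precision path drifts from the double-precision reference.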