Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-01-10 12:30:50 +01:00
llama.cpp / src

History: latest commit 8841ce3f43 by Georgi Gerganov, llama : switch KQ multiplication to F32 precision by default (#10015) [ggml-ci], 2024-10-27 20:59:58 +02:00
File                Last commit                                                            Date
CMakeLists.txt      llama : move vocab, grammar and sampling into separate files (#8508)   2024-07-23 13:10:17 +03:00
llama-grammar.cpp   llama : refactor sampling v2 (#9294)                                   2024-09-07 15:16:19 +03:00
llama-grammar.h     llama : refactor sampling v2 (#9294)                                   2024-09-07 15:16:19 +03:00
llama-impl.h        log : add CONT level for continuing previous log entry (#9610)        2024-09-24 10:15:35 +03:00
llama-sampling.cpp  llama : add DRY sampler (#9702)                                        2024-10-25 19:07:34 +03:00
llama-sampling.h    llama : add DRY sampler (#9702)                                        2024-10-25 19:07:34 +03:00
llama-vocab.cpp     llama : add DRY sampler (#9702)                                        2024-10-25 19:07:34 +03:00
llama-vocab.h       llama : add DRY sampler (#9702)                                        2024-10-25 19:07:34 +03:00
llama.cpp           llama : switch KQ multiplication to F32 precision by default (#10015)  2024-10-27 20:59:58 +02:00
unicode-data.cpp    server : better security control for public deployments (#9776)        2024-10-08 13:27:04 +02:00
unicode-data.h      llama : reduce compile time and binary size (#9712)                    2024-10-02 15:49:55 +02:00
unicode.cpp         llama : reduce compile time and binary size (#9712)                    2024-10-02 15:49:55 +02:00
unicode.h           llama : move vocab, grammar and sampling into separate files (#8508)   2024-07-23 13:10:17 +03:00