llama.cpp/examples (GitHub directory listing; latest commit 2023-09-05 15:10:27 -04:00)

Name                       Last commit message                                                           Last commit date
baby-llama/
beam-search/
benchmark/
convert-llama2c-to-ggml/
embd-input/
embedding/
gguf/                      examples : replace fprintf to stdout with printf (#3017)                      2023-09-05
gptneox-wip/               examples : replace fprintf to stdout with printf (#3017)                      2023-09-05
jeopardy/
llama-bench/               examples : replace fprintf to stdout with printf (#3017)                      2023-09-05
main/                      build : on Mac OS enable Metal by default (#2901)                             2023-09-04
metal/
perplexity/                build : on Mac OS enable Metal by default (#2901)                             2023-09-04
quantize/                  Allow quantize to only copy tensors, some other improvements (#2931)          2023-09-01
quantize-stats/
save-load-state/
server/                    examples : replace fprintf to stdout with printf (#3017)                      2023-09-05
simple/
speculative/               speculative : add grammar support (#2991)                                     2023-09-05
train-text-from-scratch/
alpaca.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt             speculative : PoC for speeding-up inference via speculative sampling (#2926)  2023-09-03
gpt4all.sh
json-schema-to-grammar.py
llama2-13b.sh
llama2.sh
llama.vim
llm.vim
make-ggml.py
Miku.sh
reason-act.sh
server-llama2-13B.sh