llama.cpp/examples
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| baby-llama | ggml : refactor rope norm/neox (#7634) | 2024-06-05 11:29:20 +03:00 |
| batched | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| batched-bench | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| batched.swift | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| benchmark | ggml : remove old quantization functions (#5942) | 2024-03-09 15:53:59 +02:00 |
| convert-llama2c-to-ggml | train : change default FA argument (#7528) | 2024-05-25 15:22:35 +03:00 |
| embedding | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| eval-callback | common : refactor cli arg parsing (#7675) | 2024-06-04 21:23:39 +03:00 |
| export-lora | ci : add an option to fail on compile warning (#3952) | 2024-02-17 23:03:14 +02:00 |
| finetune | ggml : refactor rope norm/neox (#7634) | 2024-06-05 11:29:20 +03:00 |
| gbnf-validator | grammars: 1.5x faster inference w/ complex grammars (vector reserves / reuses) (#6609) | 2024-04-11 19:47:34 +01:00 |
| gguf | gguf : add option to not check tensor data (#6582) | 2024-04-10 21:16:48 +03:00 |
| gguf-split | gguf-split : change binary multi-byte units to decimal (#7803) | 2024-06-07 15:56:01 +03:00 |
| gritlm | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| imatrix | Merge branch 'master' into compilade/refactor-kv-cache | 2024-06-12 12:10:29 -04:00 |
| infill | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| jeopardy | parallel : add option to load external prompt file (#3416) | 2023-10-06 16:16:38 +03:00 |
| llama-bench | Merge branch 'master' into compilade/refactor-kv-cache | 2024-06-12 12:10:29 -04:00 |
| llama.android | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| llama.swiftui | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| llava | common : refactor cli arg parsing (#7675) | 2024-06-04 21:23:39 +03:00 |
| lookahead | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| lookup | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| main | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| main-cmake-pkg | ggml : remove OpenCL (#7735) | 2024-06-04 21:23:20 +03:00 |
| parallel | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| passkey | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| perplexity | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| quantize | common : refactor cli arg parsing (#7675) | 2024-06-04 21:23:39 +03:00 |
| quantize-stats | Improve usability of --model-url & related flags (#6930) | 2024-04-30 00:52:50 +01:00 |
| retrieval | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| rpc | Revert "[SYCL] Update rpc-server.cpp to include SYCL backend (#7682)" (#7808) | 2024-06-09 01:43:39 +02:00 |
| save-load-state | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| server | Merge branch 'master' into compilade/refactor-kv-cache | 2024-06-12 12:10:29 -04:00 |
| simple | common : refactor cli arg parsing (#7675) | 2024-06-04 21:23:39 +03:00 |
| speculative | examples : replace llama_kv_cache_seq_* with llama_past_seq_* | 2024-06-11 23:27:04 -04:00 |
| sycl | add build shared lib in win release package (#7438) | 2024-05-24 10:06:56 +08:00 |
| tokenize | Make tokenize CLI tool have nicer command line arguments. (#6188) | 2024-05-25 11:14:42 +10:00 |
| train-text-from-scratch | ggml : refactor rope norm/neox (#7634) | 2024-06-05 11:29:20 +03:00 |
| base-translate.sh | examples : improve base-translate.sh script (#4783) | 2024-01-06 11:40:24 +02:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | examples : read chat prompts from a template file (#1196) | 2023-05-03 20:58:11 +03:00 |
| chat-persistent.sh | llama : fix session saving/loading (#3400) | 2023-10-03 21:04:01 +03:00 |
| chat-vicuna.sh | examples : add chat-vicuna.sh (#1854) | 2023-06-15 21:05:53 +03:00 |
| chat.sh | main : log file (#2748) | 2023-08-30 09:29:32 +03:00 |
| CMakeLists.txt | llama : remove beam search (#7736) | 2024-06-04 21:23:05 +03:00 |
| convert-legacy-llama.py | ggml : refactor rope norm/neox (#7634) | 2024-06-05 11:29:20 +03:00 |
| json_schema_to_grammar.py | json: refine constraint for whitespace to avoid runaways yet allow pretty print (#7866) | 2024-06-11 02:22:57 +01:00 |
| json-schema-pydantic-example.py | json-schema-to-grammar improvements (+ added to server) (#5978) | 2024-03-21 11:50:43 +00:00 |
| llama.vim | llama.vim : added api key support (#5090) | 2024-01-23 08:51:27 +02:00 |
| llm.vim | llm.vim : stop generation at multiple linebreaks, bind to `<F2>` (#2879) | 2023-08-30 09:50:55 +03:00 |
| Miku.sh | MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287) | 2023-07-21 11:13:18 +03:00 |
| pydantic_models_to_grammar.py | grammars: x{min,max} repetition operator (#6640) | 2024-06-06 10:07:06 +01:00 |
| pydantic-models-to-grammar-examples.py | examples : make pydantic scripts pass mypy and support py3.8 (#5099) | 2024-01-25 14:51:24 -05:00 |
| reason-act.sh | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |
| regex-to-grammar.py | JSON schema conversion: faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |
| server-embd.py | server : refactor (#5882) | 2024-03-07 11:41:53 +02:00 |
| server-llama2-13B.sh | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |
| ts-type-to-grammar.sh | JSON schema conversion: faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |