llama.cpp/examples
Latest commit: 6e08281e58 by Kerfuffle, 2023-10-29 11:31:40 -06:00
Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)

* Extend llama_kv_cache_seq_rm to allow matching any sequence

* Replace llama_kv_cache_tokens_rm with llama_kv_cache_clear

  Use llama_kv_cache_clear for cache clearing.

  Change calls to llama_kv_cache_tokens_rm that delete by position to use llama_kv_cache_seq_rm instead.
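A minimal sketch of the API change this commit describes, assuming the llama.h declarations at this revision; the function `prune_kv_cache` and the position range are illustrative, and model/context setup is elided:

```c
#include "llama.h"

// Hypothetical helper showing the KV-cache calls after #3843.
// Assumes ctx is an already-initialized llama_context.
void prune_kv_cache(struct llama_context * ctx) {
    // Remove cells in the position range [10, 20) belonging to sequence 0.
    llama_kv_cache_seq_rm(ctx, 0, 10, 20);

    // New in this commit: seq_id == -1 matches any sequence, so this
    // removes positions [10, 20) across all sequences -- the
    // delete-by-position behavior previously served by
    // llama_kv_cache_tokens_rm.
    llama_kv_cache_seq_rm(ctx, -1, 10, 20);

    // llama_kv_cache_clear (which replaces llama_kv_cache_tokens_rm)
    // wipes the entire cache.
    llama_kv_cache_clear(ctx);
}
```

Passing p0 or p1 as -1 similarly widens the position range, matching from the start of the cache or through to its end, respectively.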
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| baby-llama | build : enable more non-default compiler warnings (#3200) | 2023-09-28 17:41:44 -04:00 |
| batched | cuda : add batched cuBLAS GEMM for faster attention (#3749) | 2023-10-24 16:48:37 +03:00 |
| batched-bench | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2023-10-29 11:31:40 -06:00 |
| batched.swift | speculative : add tree-based sampling example (#3624) | 2023-10-18 16:21:57 +03:00 |
| beam-search | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00 |
| benchmark | | |
| convert-llama2c-to-ggml | gguf : support big endian platform (#3552) | 2023-10-20 14:19:40 +03:00 |
| embedding | llama.cpp : split llama_context_params into model and context params (#3301) | 2023-09-28 22:42:38 +03:00 |
| export-lora | train : finetune LORA (#2632) | 2023-09-28 21:40:11 +03:00 |
| finetune | ggml : add context enumeration functions (#3605) | 2023-10-13 12:23:10 +02:00 |
| gguf | | |
| infill | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00 |
| jeopardy | parallel : add option to load external prompt file (#3416) | 2023-10-06 16:16:38 +03:00 |
| llama-bench | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2023-10-29 11:31:40 -06:00 |
| llava | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00 |
| main | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2023-10-29 11:31:40 -06:00 |
| main-cmake-pkg | cmake : add missed dependencies (#3763) | 2023-10-24 20:48:45 +03:00 |
| metal | | |
| parallel | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00 |
| perplexity | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2023-10-29 11:31:40 -06:00 |
| quantize | ggml : quantization refactoring (#3833) | 2023-10-29 18:32:28 +02:00 |
| quantize-stats | llama.cpp : split llama_context_params into model and context params (#3301) | 2023-09-28 22:42:38 +03:00 |
| save-load-state | save-load-state : fix example + add ci test (#3655) | 2023-10-17 19:12:46 +03:00 |
| server | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2023-10-29 11:31:40 -06:00 |
| simple | simple : fix batch handling (#3803) | 2023-10-27 08:37:41 -06:00 |
| speculative | llama : add option for greedy sampling with probs (#3813) | 2023-10-28 14:23:11 +03:00 |
| train-text-from-scratch | train-text-from-scratch : fix assert failure in ggml-alloc (#3618) | 2023-10-17 20:00:58 +03:00 |
| alpaca.sh | | |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | llama : fix session saving/loading (#3400) | 2023-10-03 21:04:01 +03:00 |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | sampling : refactor init to use llama_sampling_params (#3696) | 2023-10-20 21:07:23 +03:00 |
| gpt4all.sh | | |
| json-schema-to-grammar.py | | |
| llama2-13b.sh | | |
| llama2.sh | | |
| llama.vim | | |
| llm.vim | | |
| make-ggml.py | | |
| Miku.sh | | |
| reason-act.sh | | |
| server-llama2-13B.sh | | |