llama.cpp/examples

Latest commit: Add Jinja template support (#11016)
Author: Olivier Chafik (commit 6171c9d258)
* Copy minja from 58f0ca6dd7

* Add --jinja and --chat-template-file flags (see the usage sketch after this commit summary)

* Add missing <optional> include

* Avoid print in get_hf_chat_template.py

* No designated initializers yet

* Try to work around the MSVC++ non-macro max resolution quirk

* Update test_chat_completion.py

* Wire LLM_KV_TOKENIZER_CHAT_TEMPLATE_N in llama_model_chat_template

* Refactor test-chat-template

* Test templates w/ minja

* Fix deprecation

* Add --jinja to llama-run

* Update common_chat_format_example to use minja template wrapper

* Test chat_template in e2e test

* Update utils.py

* Update test_chat_completion.py

* Update run.cpp

* Update arg.cpp

* Refactor common_chat_* functions to accept minja template + use_jinja option

* Attempt to fix linkage of LLAMA_CHATML_TEMPLATE

* Revert LLAMA_CHATML_TEMPLATE refactor

* Normalize newlines in test-chat-templates for windows tests

* Forward decl minja::chat_template to avoid eager json dep

* Flush stdout in chat template before potential crash

* Fix copy elision warning

* Rm unused optional include

* Add missing optional include to server.cpp

* Disable jinja test that has a cryptic windows failure

* minja: fix vigogne (https://github.com/google/minja/pull/22)

* Apply suggestions from code review

Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Finish suggested renamings

* Move chat_templates inside server_context + remove mutex

* Update --chat-template-file w/ recent change to --chat-template

* Refactor chat template validation

* Guard against missing eos/bos tokens (null token otherwise throws in llama_vocab::impl::token_get_attr)

* Warn against missing eos / bos tokens when jinja template references them

* rename: common_chat_template[s]

* reinstate assert on chat_templates.template_default

* Update minja to b8437df626

* Update minja to https://github.com/google/minja/pull/25

* Update minja from https://github.com/google/minja/pull/27

* rm unused optional header

---------

Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Committed: 2025-01-21 13:18:51 +00:00
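For context on the new flags listed in the commit summary above, here is a minimal usage sketch. The --jinja and --chat-template-file flags are the ones this PR adds; the binaries shown are the standard llama-server / llama-cli tools, while the model path and template filename are placeholders, not part of the commit.

```sh
# Usage sketch only; model path and template filename are illustrative placeholders.
# --jinja renders chats with the (minja-based) Jinja template engine;
# --chat-template-file overrides the template embedded in the model's GGUF metadata.

# Use the model's embedded chat template, rendered via the Jinja engine:
./llama-server -m models/my-model.gguf --jinja

# Same, but load the chat template from a local Jinja file instead:
./llama-cli -m models/my-model.gguf --jinja --chat-template-file my-template.jinja
```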
Name | Last commit message | Last commit date
batched | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
batched-bench | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
batched.swift | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
convert-llama2c-to-ggml | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
cvector-generator | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
deprecation-warning | Update deprecation-warning.cpp (#10619) | 2024-12-04 23:19:20 +01:00
embedding | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
eval-callback | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
export-lora | export-lora : fix tok_embd tensor (#11330) | 2025-01-21 14:07:12 +01:00
gbnf-validator | llama : minor grammar refactor (#10897) | 2024-12-19 17:42:13 +02:00
gen-docs | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00
gguf | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00
gguf-hash | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00
gguf-split | ci : use -no-cnv in gguf-split tests (#11254) | 2025-01-15 18:28:35 +02:00
gritlm | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
imatrix | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
infill | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
jeopardy | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
llama-bench | rpc : early register backend devices (#11262) | 2025-01-17 10:57:09 +02:00
llama.android | llama.android: add field formatChat to control whether to parse special tokens when send message (#11270) | 2025-01-17 14:57:56 +02:00
llama.swiftui | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
llava | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
lookahead | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
lookup | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
main | Add Jinja template support (#11016) | 2025-01-21 13:18:51 +00:00
main-cmake-pkg | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00
parallel | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
passkey | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
perplexity | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
quantize | ci : use -no-cnv in gguf-split tests (#11254) | 2025-01-15 18:28:35 +02:00
quantize-stats | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
retrieval | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
rpc | rpc-server : add support for the SYCL backend (#10934) | 2024-12-23 10:39:30 +02:00
run | Add Jinja template support (#11016) | 2025-01-21 13:18:51 +00:00
save-load-state | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
server | Add Jinja template support (#11016) | 2025-01-21 13:18:51 +00:00
simple | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
simple-chat | Add Jinja template support (#11016) | 2025-01-21 13:18:51 +00:00
speculative | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
speculative-simple | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
sycl | [SYCL] set context default value to avoid memory issue, update guide (#9476) | 2024-09-18 08:30:31 +08:00
tokenize | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
tts | tts : add guide tokens support (#11186) | 2025-01-18 12:20:57 +02:00
chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00
chat-13B.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
chat-persistent.sh | scripts : fix pattern and get n_tokens in one go (#10221) | 2024-11-09 09:06:54 +02:00
chat-vicuna.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
chat.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
CMakeLists.txt | tts : add OuteTTS support (#10784) | 2024-12-18 19:27:21 +02:00
convert_legacy_llama.py | metadata: Detailed Dataset Authorship Metadata (#8875) | 2024-11-13 21:10:38 +11:00
json_schema_pydantic_example.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00
json_schema_to_grammar.py | grammar : fix JSON Schema for string regex with top-level alt. (#9903) | 2024-10-16 19:03:24 +03:00
llama.vim | llama.vim : bump generation time limit to 3s [no ci] | 2024-10-23 17:16:56 +03:00
llm.vim | llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) | 2023-08-30 09:50:55 +03:00
Miku.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
pydantic_models_to_grammar_examples.py | examples : Rewrite pydantic_models_to_grammar_examples.py (#8493) | 2024-07-20 22:09:17 -04:00
pydantic_models_to_grammar.py | pydantic : replace uses of __annotations__ with get_type_hints (#8474) | 2024-07-14 19:51:21 -04:00
reason-act.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
regex_to_grammar.py | py : switch to snake_case (#8305) | 2024-07-05 07:53:33 +03:00
server_embd.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00
server-llama2-13B.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
ts-type-to-grammar.sh | JSON schema conversion: faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00