llama.cpp/common
Latest commit ff227703d6 by Michał Moskal: sampling : support for llguidance grammars (#10224)
* initial porting of previous LLG patch
* update for new APIs
* build: integrate llguidance as an external project
* use '%llguidance' as marker to enable llg lark syntax
* add some docs
* clarify docs
* code style fixes
* remove llguidance.h from .gitignore
* fix tests when llg is enabled
* pass vocab not model to llama_sampler_init_llg() (see the sketch after this commit log)
* copy test-grammar-integration.cpp to test-llguidance.cpp
* clang fmt
* fix ref-count bug
* build and run test
* gbnf -> lark syntax
* conditionally include llguidance test based on LLAMA_LLGUIDANCE flag
* rename llguidance test file to test-grammar-llguidance.cpp
* add gh action for llg test
* align tests with LLG grammar syntax and JSON Schema spec
* llama_tokenizer() in fact requires valid utf8
* update llg
* format file
* add $LLGUIDANCE_LOG_LEVEL support
* fix whitespace
* fix warning
* include <cmath> for INFINITY
* add final newline
* fail llama_sampler_init_llg() at runtime
* Link gbnf_to_lark.py script; fix links; refer to llg docs for lexemes
* simplify #includes
* improve doc string for LLAMA_LLGUIDANCE
* typo in merge
* bump llguidance to 0.6.12
2025-02-02 09:55:32 +02:00
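
The change wires llguidance into the common sampling layer: a grammar string that starts with the `%llguidance` marker is handed to llguidance's Lark-style engine instead of the built-in GBNF parser, and the sampler is constructed from a `llama_vocab` rather than a full model. The C++ sketch below illustrates that dispatch under stated assumptions: the three-argument form of `llama_sampler_init_llg()` (vocab, grammar kind, grammar data), the `"lark"` kind string, and the declaring header are illustrative guesses rather than the verified upstream API; `common/llguidance.cpp` and the `LLAMA_LLGUIDANCE` CMake option hold the actual definitions.

```cpp
// Sketch only: routing a "%llguidance"-prefixed grammar to the llguidance
// sampler while plain GBNF keeps the existing path.
// Assumed (not verified) details: the (vocab, grammar_kind, grammar_data)
// signature of llama_sampler_init_llg(), the "lark" kind string, and the
// header that declares it. Requires building with -DLLAMA_LLGUIDANCE=ON.
#include <string>

#include "llama.h"
#include "sampling.h"   // assumed to declare llama_sampler_init_llg()

static llama_sampler * init_grammar_sampler(const llama_vocab * vocab, const std::string & grammar) {
    if (grammar.rfind("%llguidance", 0) == 0) {
        // Lark-style grammar handled by llguidance (note: vocab, not model).
        return llama_sampler_init_llg(vocab, "lark", grammar.c_str());
    }
    // Fallback: the regular GBNF grammar sampler from llama.h.
    return llama_sampler_init_grammar(vocab, grammar.c_str(), "root");
}
```

In this sketch, a grammar file would begin with the `%llguidance` marker followed by Lark rules such as `start: "yes" | "no"`. Per the commit notes, existing GBNF grammars can be converted with the linked gbnf_to_lark.py script, and the `$LLGUIDANCE_LOG_LEVEL` environment variable was added to control llguidance logging at runtime.
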
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| cmake | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| arg.cpp | embedding : enable --no-warmup option (#11475) | 2025-01-29 10:38:54 +02:00 |
| arg.h | arg : option to exclude arguments from specific examples (#11136) | 2025-01-08 12:55:36 +02:00 |
| base64.hpp | llava : expose as a shared library for downstream projects (#3613) | 2023-11-07 00:36:23 +03:00 |
| build-info.cpp.in | build : link against build info instead of compiling against it (#3879) | 2023-11-02 08:50:16 +02:00 |
| chat-template.hpp | tool-call: fix llama 3.x and functionary 3.2, play nice w/ pydantic_ai package, update readme (#11539) | 2025-01-31 14:15:25 +00:00 |
| chat.cpp | tool-call: fix llama 3.x and functionary 3.2, play nice w/ pydantic_ai package, update readme (#11539) | 2025-01-31 14:15:25 +00:00 |
| chat.hpp | Tool call support (generic + native for Llama, Functionary, Hermes, Mistral, Firefunction, DeepSeek) w/ lazy grammars (#9639) | 2025-01-30 19:13:58 +00:00 |
| CMakeLists.txt | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00 |
| common.cpp | Tool call support (generic + native for Llama, Functionary, Hermes, Mistral, Firefunction, DeepSeek) w/ lazy grammars (#9639) | 2025-01-30 19:13:58 +00:00 |
| common.h | Tool call support (generic + native for Llama, Functionary, Hermes, Mistral, Firefunction, DeepSeek) w/ lazy grammars (#9639) | 2025-01-30 19:13:58 +00:00 |
| console.cpp | console : utf-8 fix for windows stdin (#9690) | 2024-09-30 11:23:42 +03:00 |
| console.h | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| json-schema-to-grammar.cpp | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00 |
| json-schema-to-grammar.h | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00 |
| json.hpp | json-schema-to-grammar improvements (+ added to server) (#5978) | 2024-03-21 11:50:43 +00:00 |
| llguidance.cpp | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00 |
| log.cpp | common: Add missing va_end (#11529) | 2025-01-31 07:58:55 +02:00 |
| log.h | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00 |
| minja.hpp | sync: minja (418a2364b5) (#11574) | 2025-02-01 12:24:51 +00:00 |
| ngram-cache.cpp | llama : use LLAMA_TOKEN_NULL (#11062) | 2025-01-06 10:52:15 +02:00 |
| ngram-cache.h | llama : use LLAMA_TOKEN_NULL (#11062) | 2025-01-06 10:52:15 +02:00 |
| sampling.cpp | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00 |
| sampling.h | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00 |
| speculative.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| speculative.h | speculative : refactor and add a simpler example (#10362) | 2024-11-25 09:58:41 +02:00 |
| stb_image.h | common : Update stb_image.h to latest version (#9161) | 2024-08-27 08:58:50 +03:00 |