Mirror of https://github.com/ggerganov/llama.cpp.git
Synced 2024-12-25 05:48:47 +01:00
Commit f4ab2a4147
* merged the changes from deepseek models to main branch
* Moved regex patterns to unicode.cpp and updated unicode.h
* Moved header files
* Resolved issues
* added and refactored unicode_regex_split and related functions
* Updated/merged the deepseek coder pr
* Refactored code
* Adding unicode regex mappings
* Adding unicode regex function
* Added needed functionality, testing remains
* Fixed issues
* Fixed issue with gpt2 regex custom preprocessor
* unicode : fix? unicode_wstring_to_utf8
* lint : fix whitespaces
* tests : add tokenizer tests for numbers
* unicode : remove redundant headers
* tests : remove and rename tokenizer test scripts
* tests : add sample usage
* gguf-py : reader prints warnings on duplicate keys
* llama : towards llama3 tokenization support (wip)
* unicode : shot in the dark to fix tests on Windows
* unicode : first try custom implementations
* convert : add "tokenizer.ggml.pre" GGUF KV (wip)
* llama : use new pre-tokenizer type
* convert : fix pre-tokenizer type writing
* lint : fix
* make : add test-tokenizer-0-llama-v3
* wip
* models : add llama v3 vocab file
* llama : adapt punctuation regex + add llama 3 regex
* minor
* unicode : set bomb
* unicode : set bomb
* unicode : always use std::wregex
* unicode : support \p{N}, \p{L} and \p{P} natively
* unicode : try fix windows
* unicode : category support via std::regex
* unicode : clean-up
* unicode : simplify
* convert : add convert-hf-to-gguf-update.py ggml-ci
* lint : update
* convert : add falcon ggml-ci
* unicode : normalize signatures
* lint : fix
* lint : fix
* convert : remove unused functions
* convert : add comments
* convert : exercise contractions ggml-ci
* lint : fix
* cmake : refactor test targets
* tests : refactor vocab tests ggml-ci
* tests : add more vocabs and tests ggml-ci
* unicode : cleanup
* scripts : ignore new update script in check-requirements.sh
* models : add phi-3, mpt, gpt-2, starcoder
* tests : disable obsolete ggml-ci
* tests : use faster bpe test ggml-ci
* llama : more prominent warning for old BPE models
* tests : disable test-tokenizer-1-bpe due to slowness ggml-ci

---------

Co-authored-by: Jaggzh <jaggz.h@gmail.com>
Co-authored-by: Kazim Abrar Mahi <kazimabrarmahi135@gmail.com>
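Several entries above concern matching Unicode property classes (`\p{L}`, `\p{N}`, `\p{P}`) without depending on regex-engine support, since `std::wregex` does not understand `\p{...}` escapes. As a rough illustration of the idea only (not the actual `unicode.cpp` / `unicode_regex_split` code, which is C++), one can classify codepoints by their Unicode category and split text at class boundaries; this sketch uses Python's standard `unicodedata` module purely for demonstration:

```python
# Illustrative sketch, NOT the llama.cpp implementation: split text into
# runs of letters (\p{L}), numbers (\p{N}), punctuation (\p{P}) and other,
# using per-codepoint category lookups instead of regex property escapes.
import unicodedata

def codepoint_class(ch: str) -> str:
    """Map a character to a coarse class: L, N, P, or O (other)."""
    cat = unicodedata.category(ch)  # e.g. 'Lu', 'Nd', 'Po'
    if cat.startswith("L"):
        return "L"
    if cat.startswith("N"):
        return "N"
    if cat.startswith("P"):
        return "P"
    return "O"

def split_by_category(text: str) -> list[str]:
    """Split text into maximal runs of the same coarse class."""
    out: list[str] = []
    for ch in text:
        cls = codepoint_class(ch)
        if out and codepoint_class(out[-1][0]) == cls:
            out[-1] += ch  # extend the current run
        else:
            out.append(ch)  # start a new run
    return out

print(split_by_category("abc123,def"))  # ['abc', '123', ',', 'def']
```

The same category-lookup approach works for non-ASCII input (accented letters classify as `L`, Arabic-Indic digits as `N`), which is exactly what a naive `[a-zA-Z]`-style character class would miss.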
Files changed:

- .editorconfig
- ggml-vocab-aquila.gguf
- ggml-vocab-baichuan.gguf
- ggml-vocab-bert-bge.gguf
- ggml-vocab-bert-bge.gguf.inp
- ggml-vocab-bert-bge.gguf.out
- ggml-vocab-deepseek-coder.gguf
- ggml-vocab-deepseek-coder.gguf.inp
- ggml-vocab-deepseek-coder.gguf.out
- ggml-vocab-deepseek-llm.gguf
- ggml-vocab-deepseek-llm.gguf.inp
- ggml-vocab-deepseek-llm.gguf.out
- ggml-vocab-falcon.gguf
- ggml-vocab-falcon.gguf.inp
- ggml-vocab-falcon.gguf.out
- ggml-vocab-gpt2.gguf
- ggml-vocab-gpt-2.gguf
- ggml-vocab-gpt-2.gguf.inp
- ggml-vocab-gpt-2.gguf.out
- ggml-vocab-gpt-neox.gguf
- ggml-vocab-llama-bpe.gguf
- ggml-vocab-llama-bpe.gguf.inp
- ggml-vocab-llama-bpe.gguf.out
- ggml-vocab-llama-spm.gguf
- ggml-vocab-llama-spm.gguf.inp
- ggml-vocab-llama-spm.gguf.out
- ggml-vocab-mpt.gguf
- ggml-vocab-mpt.gguf.inp
- ggml-vocab-mpt.gguf.out
- ggml-vocab-phi-3.gguf
- ggml-vocab-phi-3.gguf.inp
- ggml-vocab-phi-3.gguf.out
- ggml-vocab-refact.gguf
- ggml-vocab-stablelm.gguf
- ggml-vocab-starcoder.gguf
- ggml-vocab-starcoder.gguf.inp
- ggml-vocab-starcoder.gguf.out
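The paired `.gguf.inp` / `.gguf.out` files are tokenizer test fixtures: inputs and the token IDs a correct tokenizer should produce for each vocab. The exact on-disk format is defined by the llama.cpp test harness; as a hypothetical sketch only, assuming each input string has one matching line of space-separated expected token IDs, a checker could look like this:

```python
# Hypothetical fixture checker, for illustration only. Assumes one line of
# space-separated expected token IDs per input string; the real llama.cpp
# harness defines its own fixture format.
from typing import Callable

def check_vocab_fixture(
    inputs: list[str],
    expected_lines: list[str],
    tokenize: Callable[[str], list[int]],
) -> list[int]:
    """Return indices of inputs whose tokenization does not match."""
    failures: list[int] = []
    for i, (text, line) in enumerate(zip(inputs, expected_lines)):
        expected = [int(tok) for tok in line.split()]
        if tokenize(text) != expected:
            failures.append(i)
    return failures

# Toy tokenizer (one token per character, ID = codepoint) just to
# exercise the checker; real tests would load a tokenizer from the .gguf.
toy = lambda s: [ord(c) for c in s]
print(check_vocab_fixture(["ab", "c"], ["97 98", "99"], toy))  # []
```

Keeping expectations in flat text files like this lets the same fixtures verify both the reference (HF) tokenizer and the C++ implementation, which is what ties the `.inp`/`.out` pairs to the pre-tokenizer work in the commit message above.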