Mirror of https://github.com/ggerganov/llama.cpp.git
synced 2024-10-30 14:40:16 +01:00
f4ab2a4147
* merged the changes from deepseeker models to main branch
* Moved regex patterns to unicode.cpp and updated unicode.h
* Moved header files
* Resolved issues
* added and refactored unicode_regex_split and related functions
* Updated/merged the deepseek coder pr
* Refactored code
* Adding unicode regex mappings
* Adding unicode regex function
* Added needed functionality, testing remains
* Fixed issues
* Fixed issue with gpt2 regex custom preprocessor
* unicode : fix? unicode_wstring_to_utf8
* lint : fix whitespaces
* tests : add tokenizer tests for numbers
* unicode : remove redundant headers
* tests : remove and rename tokenizer test scripts
* tests : add sample usage
* gguf-py : reader prints warnings on duplicate keys
* llama : towards llama3 tokenization support (wip)
* unicode : shot in the dark to fix tests on Windows
* unicode : first try custom implementations
* convert : add "tokenizer.ggml.pre" GGUF KV (wip)
* llama : use new pre-tokenizer type
* convert : fix pre-tokenizer type writing
* lint : fix
* make : add test-tokenizer-0-llama-v3
* wip
* models : add llama v3 vocab file
* llama : adapt punctuation regex + add llama 3 regex
* minor
* unicode : set bomb
* unicode : set bomb
* unicode : always use std::wregex
* unicode : support \p{N}, \p{L} and \p{P} natively
* unicode : try fix windows
* unicode : category support via std::regex
* unicode : clean-up
* unicode : simplify
* convert : add convert-hf-to-gguf-update.py ggml-ci
* lint : update
* convert : add falcon ggml-ci
* unicode : normalize signatures
* lint : fix
* lint : fix
* convert : remove unused functions
* convert : add comments
* convert : exercise contractions ggml-ci
* lint : fix
* cmake : refactor test targets
* tests : refactor vocab tests ggml-ci
* tests : add more vocabs and tests ggml-ci
* unicode : cleanup
* scripts : ignore new update script in check-requirements.sh
* models : add phi-3, mpt, gpt-2, starcoder
* tests : disable obsolete ggml-ci
* tests : use faster bpe test ggml-ci
* llama : more prominent warning for old BPE models
* tests : disable test-tokenizer-1-bpe due to slowness ggml-ci

---------

Co-authored-by: Jaggzh <jaggz.h@gmail.com>
Co-authored-by: Kazim Abrar Mahi <kazimabrarmahi135@gmail.com>
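The run of `unicode :` commits above revolves around pre-tokenizer regexes that split text by Unicode character category (`\p{L}` letters, `\p{N}` numbers, `\p{P}` punctuation) before BPE merging. As a rough, hypothetical illustration only — not the actual llama.cpp implementation, which lives in `unicode.cpp` — a minimal category-based splitter might look like this, using the C standard library's wide-character classification functions in place of real Unicode property support:

```cpp
#include <cwctype>
#include <string>
#include <vector>

// Coarse character categories, a stand-in for Unicode \p{L}, \p{N}, \p{P}.
enum class Cat { Letter, Number, Punct, Other };

static Cat categorize(wchar_t c) {
    if (std::iswalpha(c)) return Cat::Letter;
    if (std::iswdigit(c)) return Cat::Number;
    if (std::iswpunct(c)) return Cat::Punct;
    return Cat::Other;
}

// Split text into maximal runs of characters sharing one category,
// e.g. L"abc123!?" -> { L"abc", L"123", L"!?" }.
std::vector<std::wstring> split_by_category(const std::wstring & text) {
    std::vector<std::wstring> out;
    for (size_t i = 0; i < text.size(); ) {
        const Cat c = categorize(text[i]);
        size_t j = i + 1;
        while (j < text.size() && categorize(text[j]) == c) {
            j++;
        }
        out.push_back(text.substr(i, j - i));
        i = j;
    }
    return out;
}
```

The real work in this merge is harder than the sketch suggests: `std::wregex` has no portable `\p{...}` support (hence the "category support via std::regex" and "first try custom implementations" commits), so category classes are mapped onto custom tables or equivalent regex character classes per platform.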
* build-info.cmake
* build-info.sh
* check-requirements.sh
* ci-run.sh
* compare-commits.sh
* compare-llama-bench.py
* convert-gg.sh
* gen-authors.sh
* gen-build-info-cpp.cmake
* get-flags.mk
* get-hellaswag.sh
* get-pg.sh
* get-wikitext-2.sh
* get-wikitext-103.sh
* get-winogrande.sh
* hf.sh
* install-oneapi.bat
* LlamaConfig.cmake.in
* pod-llama.sh
* qnt-all.sh
* run-all-perf.sh
* run-all-ppl.sh
* run-with-preset.py
* server-llm.sh
* sync-ggml-am.sh
* sync-ggml.last
* sync-ggml.sh
* verify-checksum-models.py
* xxd.cmake