* `build`: generate hex dumps of server assets on the fly
* build: work around the lack of -n in GNU xxd
* build: don't use xxd in cmake
* build: don't call xxd from build.zig
* build: more idiomatic hexing
* build: don't use xxd in Makefile (od hackery instead)
* build: avoid exceeding max cmd line limit in makefile hex dump
* build: hex dump assets at cmake build time (not config time)
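For context, the commits above are all about embedding the server's web assets as C byte arrays at build time without relying on `xxd`. The real builds do this with `od`, CMake, and build.zig; the Python sketch below only illustrates the transformation being performed, and the path/symbol names in the comment are illustrative:

```python
from pathlib import Path

def asset_to_c_array(path: str, symbol: str) -> str:
    """Emit a C array literal for a binary asset, roughly what `xxd -i` produces."""
    data = Path(path).read_bytes()
    body = ", ".join(f"0x{b:02x}" for b in data)
    return (f"unsigned char {symbol}[] = {{{body}}};\n"
            f"unsigned int {symbol}_len = {len(data)};\n")

# e.g. print(asset_to_c_array("examples/server/public/index.html", "index_html"))
```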
* added Fedora to the list of distros that may need the package (the packages have the same names on Fedora)
* document how to add CLBlast, which is available in the Fedora repos
* Support Llama 3 conversion
The tokenizer is BPE.
* style
* Accept suggestion
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
* llama : add llama_token_is_eog()
ggml-ci
* llama : auto-detect more EOT tokens when missing in KV data
* convert : replacing EOS token is a hack
* llama : fix codegemma EOT token + add TODOs
* llama : fix model type string for 8B model
---------
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
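A rough sketch of the idea behind `llama_token_is_eog()` and the EOT auto-detection, written in Python rather than the actual C API; the candidate token strings below are common end-of-turn markers and are illustrative, not the exhaustive set used by the detection:

```python
# Candidate end-of-turn token strings to try when the GGUF KV data
# does not declare an EOT token (illustrative set).
EOT_CANDIDATES = ["<|eot_id|>", "<|im_end|>", "<|end|>", "<end_of_turn>"]

def detect_eot_id(vocab: dict[str, int]) -> int | None:
    """Return the id of the first known EOT-style token present in the vocab."""
    for tok in EOT_CANDIDATES:
        if tok in vocab:
            return vocab[tok]
    return None

def token_is_eog(token_id: int, eos_id: int, eot_id: int | None) -> bool:
    """End of generation = EOS or EOT; this is the question llama_token_is_eog() answers."""
    return token_id == eos_id or (eot_id is not None and token_id == eot_id)
```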
* ggml : group all experts in a single ggml_mul_mat_id
cuda : improve mmid row copy
* cuda : fix bin bcast with non-cont src0
* test-backend-ops : only run all mul mat tests for base types
* llama : disable moe offloading with SYCL
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
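For readers unfamiliar with `ggml_mul_mat_id`: rather than launching one matrix multiplication per expert, the expert matrices selected for each token are gathered and multiplied in a single batched operation. A toy numpy sketch of the equivalence (this is just the math, not the ggml/CUDA implementation):

```python
import numpy as np

n_expert, d_in, d_out = 4, 8, 16
n_tok, n_used = 5, 2                                   # tokens, experts used per token

W = np.random.randn(n_expert, d_out, d_in)             # all expert weights
x = np.random.randn(n_tok, d_in)                       # token activations
ids = np.random.randint(0, n_expert, (n_tok, n_used))  # routing decisions

# one matrix multiplication per (token, expert) pair
out_loop = np.stack([
    np.stack([W[ids[t, k]] @ x[t] for k in range(n_used)])
    for t in range(n_tok)
])

# single batched multiply over the gathered expert weights
out_batched = np.einsum("tkoi,ti->tko", W[ids], x)

assert np.allclose(out_loop, out_batched)
```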
* Support converting models with multiple chat templates
Adds the following metadata:
* tokenizer.chat_templates
* tokenizer.chat_template.<name1>
* tokenizer.chat_template.<name2>
* tokenizer.chat_template.<...>
Here `tokenizer.chat_templates` is an array of the template names (excluding `default`); the `default` template is still written to the regular `tokenizer.chat_template` key.
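For example, a model shipping a default template plus `tool_use` and `rag` templates (the names here are only illustrative) would end up with metadata laid out roughly like this:

```python
metadata = {
    "tokenizer.chat_template":          "{{ default Jinja template }}",
    "tokenizer.chat_templates":         ["tool_use", "rag"],
    "tokenizer.chat_template.tool_use": "{{ tool_use Jinja template }}",
    "tokenizer.chat_template.rag":      "{{ rag Jinja template }}",
}
```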
* replace filtered characters with underscore
* New script to add/modify/remove metadata
This script creates a copy of a GGUF file and allows you to add/modify/remove metadata in the process.
Most importantly, this allows you to update chat templates, either as a string or directly from an updated tokenizer_config.json file (see the sketch below).
* Add files via upload
add new script to project/readme
* flake--
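A minimal sketch of the "update chat templates directly from an updated tokenizer_config.json" part of the metadata script described above, assuming the Hugging Face layout where `chat_template` is either a string or a list of `{"name": ..., "template": ...}` entries; this is not the script itself:

```python
import json

def load_chat_templates(path: str = "tokenizer_config.json") -> dict:
    """Turn the chat_template field of tokenizer_config.json into GGUF-style metadata."""
    with open(path) as f:
        cfg = json.load(f)
    tmpl = cfg.get("chat_template")
    if isinstance(tmpl, str):                        # single template
        return {"tokenizer.chat_template": tmpl}
    meta, names = {}, []
    for entry in tmpl or []:                         # list of named templates
        name, template = entry["name"], entry["template"]
        if name == "default":
            meta["tokenizer.chat_template"] = template
        else:
            names.append(name)
            meta[f"tokenizer.chat_template.{name}"] = template
    if names:
        meta["tokenizer.chat_templates"] = names
    return meta
```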
* fix autoawq quantized gemma model convert error
Quantizing a Gemma model with AutoAWQ produces an lm_head.weight tensor in model-00001-of-00002.safetensors. As a result, convert-hf-to-gguf.py cannot map lm_head.weight. Skipping this tensor prevents the error.
* change the check to a full string match and print an informative message
Change the check to a full string match and print a short message informing users that lm_head.weight has been skipped.
---------
Co-authored-by: Zheng.Deng <32841220+CUGfred@users.noreply.github.com>
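A sketch of the check described above, with the surrounding conversion loop omitted; the printed message is illustrative:

```python
def should_skip_tensor(name: str) -> bool:
    """Skip the lm_head emitted by AutoAWQ-quantized Gemma checkpoints.
    Full string match, so names like 'foo_lm_head.weight' are not caught."""
    if name == "lm_head.weight":
        print("Skipping tensor 'lm_head.weight' (not needed for the Gemma GGUF conversion)")
        return True
    return False
```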
This change upstreams llamafile's cpu matrix multiplication kernels
which improve image and prompt evaluation speed. For starters, Q4_0
and Q8_0 weights should go ~40% faster on CPU. The biggest benefits
are with data types like f16 / f32, which process prompts 2x faster
thus making them faster than quantized data types for prompt evals.
This change also introduces bona fide AVX512 support since tinyBLAS
is able to exploit the larger register file. For example, on my CPU
llama.cpp llava-cli processes an image prompt at 305 tokens/second,
using the Q4_K and Q4_0 types, which have always been faster than
f16 LLaVA weights, which at HEAD go 188 tokens/second. With this
change, f16 LLaVA performance leapfrogs to 464 tokens/second.
On Intel Core i9-14900K this change improves F16 prompt perf by 5x.
For example, using llama.cpp at HEAD with Mistral 7b f16 to process
a 215 token prompt will go 13 tok/sec. With this change it goes
52 tok/sec. That's mostly thanks to my vectorized outer product
kernels, but also because I added support for correctly counting the
number of cores on Alderlake, so the default thread count discounts
Intel's new efficiency cores. Core counting is currently implemented
only on Linux.
This work was sponsored by Mozilla, which has given permission to change
the license of this code from Apache 2.0 to MIT. To read more about
what's improved, and how it works, see: https://justine.lol/matmul/
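The "outer product" formulation mentioned above is matrix multiplication computed as a sum of rank-1 updates, which maps well onto wide vector registers. A toy numpy illustration of the math only; the real kernels are hand-vectorized C++:

```python
import numpy as np

m, n, k = 4, 5, 3
A = np.random.randn(m, k)
B = np.random.randn(k, n)

C = np.zeros((m, n))
for p in range(k):
    C += np.outer(A[:, p], B[p, :])   # one rank-1 (outer product) update per step

assert np.allclose(C, A @ B)
```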
* StableLM2 12B support for huggingface -> GGUF
* StableLM12 tensormapping and constants
* StableLM-2-12b model support
* fix
* Added 12B support
* Removed autoformatting; resolved bug where model_arch was not selecting StableLM2
* Formatting
* Do QK norm stacking in model conversion step (see the toy sketch below)
* Converge StableLM and StableLM2 code to simplify graph construction
* Fix accidental removal
* Removed warnings
* Revert formatter
* Move QK norm stack to private function so it's easier to read
* refactor the StableLM graph builder to support the 1.6B, 3B and 12B variants more efficiently
* Proper check for None type for new_name to avoid crash; formatting; revert change to base class `write_tensors()`
* Format
* Formatting
* format
Co-authored-by: compilade <git@compilade.net>
* Fix incorrect check for K norm
* add spaces after commas; keep indentation a multiple of 4 spaces
* Flake8 format
* Removed unnecessary conditional branches
* Removed unused comment
* Fixed incorrect tensor passing
* Format
---------
Co-authored-by: compilade <git@compilade.net>
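The "QK norm stacking" above refers to collecting the per-head q/k layernorm weights from the checkpoint into a single tensor per layer during conversion. A toy sketch of the reshaping; the tensor names and shapes below are hypothetical, not the actual StableLM 2 12B checkpoint layout:

```python
import numpy as np

n_head, head_dim = 8, 64
# per-head norm weights as they might appear in the source checkpoint (hypothetical names)
tensors = {f"layers.0.attn.q_norm.{i}.weight": np.ones(head_dim) for i in range(n_head)}

# stacked into one (n_head, head_dim) tensor for the GGUF output
q_norm = np.stack([tensors[f"layers.0.attn.q_norm.{i}.weight"] for i in range(n_head)])
assert q_norm.shape == (n_head, head_dim)
```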
* support qwen2moe
* fix-review
* metal : support unary ops for nelements % 4 != 0
* metal : require contiguousness for float4 unary kernels
* metal : require contiguousness for float4 unary kernels (cont)
* fix-review
* names : for brevity "SHARED_EXP" -> "SHEXP"
* llama : reuse build_moe_ffn()
* llama : add model type name
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
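On the "SHEXP" naming: Qwen2MoE-style blocks add a shared expert whose output is scaled by a sigmoid gate and added to the routed-experts output that `build_moe_ffn()` produces. A toy numpy sketch of that combination; the FFN here is a plain ReLU MLP stand-in rather than the model's gated FFN, and this is not the ggml graph:

```python
import numpy as np

def toy_ffn(x, w_up, w_down):
    return np.maximum(x @ w_up, 0.0) @ w_down    # stand-in for the real gated FFN

d_model, d_ff = 8, 16
x = np.random.randn(d_model)

routed_out = np.zeros(d_model)                   # pretend: result of the routed-experts path
w_up, w_down = np.random.randn(d_model, d_ff), np.random.randn(d_ff, d_model)
w_gate = np.random.randn(d_model)                # shared-expert gate projection (to a scalar)

shexp_out = toy_ffn(x, w_up, w_down)             # the shared expert ("SHEXP")
gate = 1.0 / (1.0 + np.exp(-(x @ w_gate)))       # sigmoid gate
y = routed_out + gate * shexp_out                # routed experts + gated shared expert
```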
This commit updates the hf.sh script usage to include the --outdir option
and specifies the models directory as the output directory.
The motivation for this is to avoid cluttering the root directory with
model files.
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
This commit adds special token metadata for Fill-In-the-Middle
(FIM)/Infill to the GGUF model.
The motivation for this is that there is currently support for CodeLlama,
but other models such as CodeGemma now exist, and the different models use
different token ids for their special tokens; this commit makes it possible
to support multiple models.
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
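A sketch of the kind of metadata this produces, assuming GGUF keys of the form `tokenizer.ggml.*_token_id` and looking the ids up from the vocab rather than hard-coding them; the token strings shown are CodeGemma-style and only illustrative:

```python
# FIM/infill special-token strings for a CodeGemma-style vocab (illustrative).
FIM_TOKENS = {
    "tokenizer.ggml.prefix_token_id": "<|fim_prefix|>",
    "tokenizer.ggml.suffix_token_id": "<|fim_suffix|>",
    "tokenizer.ggml.middle_token_id": "<|fim_middle|>",
}

def fim_metadata(vocab: dict[str, int]) -> dict[str, int]:
    """Map GGUF key -> token id for whichever FIM tokens the vocab actually has."""
    return {key: vocab[tok] for key, tok in FIM_TOKENS.items() if tok in vocab}
```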
- Package.swift now supports conditional compilation based on OS
- Allows the package to be used by SPM on non-Apple platforms
Co-authored-by: Steven Prichard <steven.prichard@justeattakeaway.com>
* Add chat template for command-r model series
* Fix indentation
* Add chat template test for command-r models and update the implementation to trim whitespaces
* Remove debug print
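For reference, the turn format this template produces looks roughly as follows; a Python sketch rather than the Jinja/C++ implementation, with the whitespace trimming mentioned above, and the special token strings are Cohere's:

```python
ROLE_TOKENS = {
    "system":    "<|SYSTEM_TOKEN|>",
    "user":      "<|USER_TOKEN|>",
    "assistant": "<|CHATBOT_TOKEN|>",
}

def command_r_prompt(messages: list[dict[str, str]]) -> str:
    """Rough rendering of the Command-R turn format."""
    out = ""
    for m in messages:
        out += ("<|START_OF_TURN_TOKEN|>" + ROLE_TOKENS[m["role"]]
                + m["content"].strip()                 # trim surrounding whitespace
                + "<|END_OF_TURN_TOKEN|>")
    return out + "<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"   # open the assistant turn
```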
* Fix --split-max-size
The byte size calculation was done with `int` and overflowed (see the sketch below).
* add tests.sh
* add examples test scripts to ci run
Will autodiscover examples/*/tests.sh scripts and run them.
* move WORK_PATH to a subdirectory
* clean up before and after test
* explicitly define which scripts to run
* add --split-max-size to readme
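To see why the old `int`-based byte size calculation overflowed: any `--split-max-size` of 2 GiB or more no longer fits in a signed 32-bit integer. A quick Python illustration (the accepted suffix set here is assumed):

```python
def parse_split_max_size(arg: str) -> int:
    """Parse values like '250M' or '50G' into a byte count (sketch)."""
    units = {"M": 1024**2, "G": 1024**3}
    if arg[-1].upper() in units:
        return int(arg[:-1]) * units[arg[-1].upper()]
    return int(arg)

n = parse_split_max_size("50G")
print(n, n > 2**31 - 1)   # 53687091200 True -> too large for a signed 32-bit int
```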
* disable mmap to fix a memcpy crash, add a missing cmd to the guide, fix softmax
* refactor to disable mmap for SYCL backend
* fix compile error on other OSes
* refactor the solution: use a host buffer to fix it instead of disabling mmap
* keep support for mmap()
* use a host buffer to reduce the number of mallocs
* revert to the malloc/free solution, for thread safety
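The "host buffer" experiments above amount to keeping one reusable staging buffer instead of doing a malloc/free per copy. A toy sketch of that idea; note that the final commit goes back to per-call malloc/free precisely because a single cached buffer like this is not thread-safe:

```python
class HostStagingBuffer:
    """Reusable host-side staging buffer: grow on demand instead of allocating per copy.
    Sharing one instance across threads is the thread-safety problem that led back
    to the plain malloc/free approach."""

    def __init__(self) -> None:
        self._buf = bytearray()

    def get(self, size: int) -> memoryview:
        if len(self._buf) < size:
            self._buf = bytearray(size)    # grow once instead of per-copy allocation
        return memoryview(self._buf)[:size]
```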