llama.cpp/examples/imatrix

Computes an importance matrix for a model over a given text dataset. The matrix can be used during quantization to enhance the quality of the quantized models. More information is available here: https://github.com/ggerganov/llama.cpp/pull/4861

Usage

./imatrix -m <some_fp_model> -f <some_training_data> [-o <output_file>] [--verbosity <verbosity_level>]
        [-ofreq num_chunks] [-ow <0 or 1>] [other common params]

Here -m with a model name and -f with a file containing training data (e.g., wiki.train.raw) are mandatory. The parameters in square brackets are optional and have the following meaning (a combined invocation is sketched after the list):

  • -o (or --output-file) specifies the name of the file where the computed data will be stored. If missing, imatrix.dat is used.
  • --verbosity specifies the verbosity level. If set to 0, no output other than the perplexity of the processed chunks will be generated. If set to 1, a message is written to stderr each time the results are saved. If >=2, a message is output each time data is collected for any tensor. The default verbosity level is 1.
  • -ofreq (or --output-frequency) specifies how often the results computed so far are saved to disk. The default is 10 (i.e., every 10 chunks).
  • -ow (or --output-weight) specifies whether data will be collected for the output.weight tensor. In my experience it is better not to utilize the importance matrix when quantizing output.weight, so this is set to false by default.
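
For example, a minimal sketch combining these optional parameters (the model and dataset file names are placeholders) might look like this:

./imatrix -m ggml-model-f16.gguf -f wiki.train.raw -o my-imatrix.dat -ofreq 5 -ow 1 --verbosity 2

This saves the accumulated data to my-imatrix.dat every 5 chunks, also collects data for the output.weight tensor, and logs a message for every tensor being collected.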

For faster computation, make sure to use GPU offloading via the -ngl argument.

Example

# build with CUDA support enabled
LLAMA_CUDA=1 make -j

# generate importance matrix (imatrix.dat)
./imatrix -m ggml-model-f16.gguf -f train-data.txt -ngl 99

# use the imatrix to perform a Q4_K_M quantization
./quantize --imatrix imatrix.dat ggml-model-f16.gguf ./ggml-model-q4_k_m.gguf q4_k_m
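
To sanity-check the result, one option (a sketch assuming the perplexity example from this repository is also built) is to evaluate the quantized model on a held-out text file:

# optional: measure perplexity of the quantized model
./perplexity -m ./ggml-model-q4_k_m.gguf -f wiki.test.raw -ngl 99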