# quantize
TODO
## Llama 2 7B

| Quantization | Bits per Weight (BPW) |
|--------------|-----------------------|
| Q2_K         | 3.35                  |
| Q3_K_S       | 3.50                  |
| Q3_K_M       | 3.91                  |
| Q3_K_L       | 4.27                  |
| Q4_K_S       | 4.58                  |
| Q4_K_M       | 4.84                  |
| Q5_K_S       | 5.52                  |
| Q5_K_M       | 5.68                  |
| Q6_K         | 6.56                  |
## Llama 2 13B

| Quantization | Bits per Weight (BPW) |
|--------------|-----------------------|
| Q2_K         | 3.34                  |
| Q3_K_S       | 3.48                  |
| Q3_K_M       | 3.89                  |
| Q3_K_L       | 4.26                  |
| Q4_K_S       | 4.56                  |
| Q4_K_M       | 4.83                  |
| Q5_K_S       | 5.51                  |
| Q5_K_M       | 5.67                  |
| Q6_K         | 6.56                  |
## Llama 2 70B

| Quantization | Bits per Weight (BPW) |
|--------------|-----------------------|
| Q2_K         | 3.40                  |
| Q3_K_S       | 3.47                  |
| Q3_K_M       | 3.85                  |
| Q3_K_L       | 4.19                  |
| Q4_K_S       | 4.53                  |
| Q4_K_M       | 4.80                  |
| Q5_K_S       | 5.50                  |
| Q5_K_M       | 5.65                  |
| Q6_K         | 6.56                  |
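
The BPW figures above translate into approximate file sizes as parameter count × BPW / 8, plus a small amount of metadata and vocabulary overhead. The snippet below is a minimal sketch of that arithmetic, assuming a nominal 7e9 parameter count for the 7B model and ignoring file overhead, so treat the result as a rough estimate only.

```python
# Rough size estimate for a quantized model from parameter count and BPW.
# The nominal parameter count and the omission of metadata/vocabulary
# overhead are simplifying assumptions, so this is a ballpark figure.

GIB = 1024 ** 3

def quantized_size_gib(n_params: float, bpw: float) -> float:
    """Approximate file size in GiB for n_params weights at bpw bits per weight."""
    return n_params * bpw / 8 / GIB

# Example: Llama 2 7B at Q4_K_M (4.84 BPW from the table above).
print(f"{quantized_size_gib(7e9, 4.84):.2f} GiB")  # ~3.94 GiB
```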