Kawrakow 66d575c45c
llama : add Q3_K_XS (#5060)
* Add Q3_K_XS - intermediate size between Q2_K and Q3_K_S

* Q3_K_XS: quantize first 1/8 of ffn_down layers with Q4_K

Together with an importance matrix, this brings perplexity
for LLaMA-v2-70B below the perplexity of the former Q2_K
with an 800 MB smaller quantized model size.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-01-22 12:43:33 +02:00