Kerfuffle 4f0154b0ba
llama : support requantizing models instead of only allowing quantization from 16/32-bit (#1691)
* Add support for quantizing already quantized models (see the requantize-gate sketch below)

* Threaded dequantizing and f16-to-f32 conversion

* Clean up the per-thread block split and its spares (remainder) calculation a bit (see the partitioning sketch below)

* Use std::runtime_error exceptions for error reporting
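
A minimal sketch of the requantize gate, not the actual llama.cpp code: the names tensor_type, needs_f32_conversion, and allow_requantize are hypothetical. The idea is that an already-quantized tensor is only accepted when requantizing is explicitly enabled, and is routed through a dequantize-to-f32 step before being quantized to the new type; error paths throw std::runtime_error, matching the commit's switch to exceptions.

    #include <stdexcept>

    // Simplified type tags; the real code inspects the ggml tensor type.
    enum class tensor_type { F32, F16, QUANTIZED };

    // Returns true when the tensor must be expanded to f32 before the
    // quantizer sees it. Throws (hypothetical policy) when the tensor is
    // already quantized and requantizing was not requested.
    static bool needs_f32_conversion(tensor_type t, bool allow_requantize) {
        switch (t) {
            case tensor_type::F32:
                return false; // already in the quantizer's input format
            case tensor_type::F16:
                return true;  // widen f16 -> f32 first
            case tensor_type::QUANTIZED:
                if (!allow_requantize) {
                    throw std::runtime_error(
                        "tensor is already quantized; requantizing is not enabled");
                }
                return true;  // dequantize back to f32, then requantize
        }
        throw std::runtime_error("unknown tensor type");
    }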
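
And a sketch of the threaded work split with the spares calculation, again with illustrative names (parallel_for and the conversion stub are not llama.cpp's API): every thread gets nelements / nthread items, and the remaining nelements % nthread "spare" items are handed out one each to the first threads, so workloads differ by at most one item.

    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <stdexcept>
    #include <thread>
    #include <vector>

    // Run fn(first, count) on nthread threads covering [0, nelements).
    static void parallel_for(size_t nelements, unsigned nthread,
                             const std::function<void(size_t, size_t)> & fn) {
        if (nthread == 0) {
            throw std::runtime_error("nthread must be > 0");
        }
        const size_t base  = nelements / nthread; // even share per thread
        const size_t spare = nelements % nthread; // leftover items to spread

        std::vector<std::thread> workers;
        size_t first = 0;
        for (unsigned t = 0; t < nthread; ++t) {
            // the first `spare` threads each take one extra item
            const size_t count = base + (t < spare ? 1 : 0);
            if (count > 0) {
                workers.emplace_back(fn, first, count);
            }
            first += count;
        }
        for (auto & w : workers) {
            w.join();
        }
    }

    int main() {
        // usage: convert a dummy f16 buffer to f32 across 4 threads; the
        // scalar conversion is stubbed since the split is the point here
        std::vector<uint16_t> src(1000, 0);
        std::vector<float>    dst(src.size());
        parallel_for(src.size(), 4, [&](size_t first, size_t count) {
            for (size_t i = first; i < first + count; ++i) {
                dst[i] = (float) src[i]; // stand-in for a real f16 -> f32 convert
            }
        });
        return 0;
    }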
2023-06-10 10:59:17 +03:00