Mirror of https://github.com/ggerganov/llama.cpp.git
Commit 4f0154b0ba:

* Add support for quantizing already quantized models
* Threaded dequantizing and f16 to f32 conversion
* Clean up thread blocks with spares calculation a bit
* Use std::runtime_error exceptions
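The commit message above names two mechanisms without showing them: splitting the f16 to f32 dequantization across threads (with the "spare" elements that don't divide evenly handled by the last block), and signalling failures via `std::runtime_error`. The sketch below is not the code from quantize.cpp; it is a minimal illustration under assumed names (`fp16_to_fp32`, `fp16_to_fp32_threaded` are hypothetical; llama.cpp has its own ggml conversion helpers).

```cpp
// Minimal sketch (NOT the actual llama.cpp implementation) of a threaded
// f16 -> f32 conversion with spare-element handling and runtime_error
// reporting, as described in the commit message.
#include <cstdint>
#include <cstring>
#include <stdexcept>
#include <thread>
#include <vector>

// Hypothetical scalar f16 -> f32 conversion; llama.cpp uses its own
// ggml helper for this step.
static float fp16_to_fp32(uint16_t h) {
    uint32_t sign = (uint32_t)(h >> 15) << 31;
    uint32_t exp  = (h >> 10) & 0x1F;
    uint32_t man  = h & 0x3FF;
    uint32_t bits;
    if (exp == 0) {
        bits = sign;                                // subnormal: flushed to zero in this sketch
    } else if (exp == 31) {
        bits = sign | 0x7F800000 | (man << 13);     // inf / NaN
    } else {
        bits = sign | ((exp + 112) << 23) | (man << 13); // rebias exponent: -15 + 127
    }
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

// Convert n elements with nthread workers. Each worker gets a block of
// n / nthread elements; the last worker also takes the spare remainder.
static void fp16_to_fp32_threaded(const uint16_t * src, float * dst,
                                  size_t n, unsigned nthread) {
    if (nthread == 0) {
        throw std::runtime_error("nthread must be > 0");
    }
    const size_t block = n / nthread;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nthread; ++t) {
        const size_t first = t * block;
        const size_t last  = (t + 1 == nthread) ? n : first + block; // spares go to the last thread
        workers.emplace_back([=] {
            for (size_t i = first; i < last; ++i) {
                dst[i] = fp16_to_fp32(src[i]);
            }
        });
    }
    for (auto & w : workers) {
        w.join();
    }
}
```

Requantizing an already quantized model, the commit's first bullet, follows the same shape: each tensor is first dequantized to f32 (a step that can be threaded exactly as above) and then quantized to the new target type.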
CMakeLists.txt
quantize.cpp
README.md
quantize
TODO