mirror of https://github.com/ggerganov/llama.cpp.git synced 2025-01-24 18:39:19 +01:00
llama.cpp/gguf-py/gguf (last commit: 2025-01-12 12:15:53 +02:00)
| File | Last commit | Date |
|------|-------------|------|
| scripts | gguf-py: fixed local detection of gguf package () | 2025-01-11 11:42:31 +02:00 |
| __init__.py | convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor () | 2024-07-18 20:40:15 +10:00 |
| constants.py | llama : remove notion of CLS token () | 2025-01-12 12:15:53 +02:00 |
| gguf_reader.py | gguf-py : numpy 2 newbyteorder fix () | 2024-12-13 16:48:44 +02:00 |
| gguf_writer.py | llama : remove notion of CLS token () | 2025-01-12 12:15:53 +02:00 |
| gguf.py | gguf-py: Refactor and allow reading/modifying existing GGUF files () | 2023-11-11 08:04:50 +03:00 |
| lazy.py | gguf-py : simplify support for quant types () | 2024-08-08 13:33:09 -04:00 |
| metadata.py | gguf-py: fix conversion error when multiple licenses are configured () | 2024-11-24 01:09:22 +01:00 |
| py.typed | convert : various script cleanups/fixes + merges and special token handling () | 2023-08-30 11:25:50 +03:00 |
| quants.py | ggml-quants : ternary packing for TriLMs and BitNet b1.58 () | 2024-09-05 21:48:47 -04:00 |
| tensor_mapping.py | llama: add support for QRWKV6 model architecture () | 2025-01-10 09:58:08 +08:00 |
| utility.py | gguf-py : fix some metadata name extraction edge cases () | 2024-07-20 21:58:49 -04:00 |
| vocab.py | convert : handle tokenizer merges format from transformers 4.45 () | 2024-10-03 17:22:15 +03:00 |
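For context, `gguf_reader.py` and `gguf_writer.py` in this package read and write files in the GGUF container format. Below is a minimal, stdlib-only sketch of parsing the fixed-size GGUF header (magic `GGUF`, then little-endian uint32 version, uint64 tensor count, uint64 metadata KV count, per the GGUF specification); it is an illustrative example, not the package's actual implementation, and the helper name `parse_gguf_header` is hypothetical.

```python
import struct

GGUF_MAGIC = b"GGUF"

def parse_gguf_header(data: bytes):
    """Parse the fixed GGUF header: magic, version, tensor count, metadata KV count.

    Illustrative sketch only; the real reader lives in gguf_reader.py.
    """
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    # Little-endian: uint32 version, uint64 tensor_count, uint64 metadata_kv_count.
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return version, n_tensors, n_kv

# Build a synthetic header for demonstration (version 3, 2 tensors, 5 KV pairs).
header = GGUF_MAGIC + struct.pack("<IQQ", 3, 2, 5)
print(parse_gguf_header(header))  # → (3, 2, 5)
```

In practice you would use the package's `GGUFReader` class rather than hand-parsing; the sketch only shows why the format is cheap to inspect without loading tensor data.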