llama.cpp/gguf-py/gguf
Latest commit 62e84d9848 by Robert Collins (2024-12-07 23:12:27 +02:00):
llama : add 128k yarn context for Qwen (#10698)
  * add 128k yarn context for Qwen
  * added property for model tensors
  * removing useless line
File                 Last commit                                                                              Date
__init__.py          convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499)    2024-07-18 20:40:15 +10:00
constants.py         llama : add 128k yarn context for Qwen (#10698)                                          2024-12-07 23:12:27 +02:00
gguf_reader.py       py : type-check all Python scripts with Pyright (#8341)                                  2024-07-07 15:04:39 -04:00
gguf_writer.py       metadata: Detailed Dataset Authorship Metadata (#8875)                                   2024-11-13 21:10:38 +11:00
gguf.py              gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)                2023-11-11 08:04:50 +03:00
lazy.py              gguf-py : simplify support for quant types (#8838)                                       2024-08-08 13:33:09 -04:00
metadata.py          fix gguf-py: Conversion error when multiple licenses are configured (#9807)              2024-11-24 01:09:22 +01:00
py.typed             convert : various script cleanups/fixes + merges and special token handling (#2842)      2023-08-30 11:25:50 +03:00
quants.py            ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151)                        2024-09-05 21:48:47 -04:00
tensor_mapping.py    convert : add custom attention mapping                                                    2024-12-06 21:33:49 +02:00
utility.py           gguf-py : fix some metadata name extraction edge cases (#8591)                           2024-07-20 21:58:49 -04:00
vocab.py             convert : handle tokenizer merges format from transformers 4.45 (#9696)                  2024-10-03 17:22:15 +03:00
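For orientation, here is a minimal sketch of how this package is commonly used from Python, based on the reader exposed by gguf_reader.py (GGUFReader with its fields mapping and tensors list); the file path "model.gguf" is a placeholder, not a file shipped with the repository.

```python
# Sketch only: inspecting an existing GGUF file with the gguf-py reader.
# Assumes the GGUFReader API from gguf_reader.py; "model.gguf" is a placeholder path.
from gguf import GGUFReader

reader = GGUFReader("model.gguf")

# Metadata key-value pairs stored in the file header.
for key, field in reader.fields.items():
    print(key, field.types)

# Tensor records: name, shape, and quantization type.
for tensor in reader.tensors:
    print(tensor.name, tensor.shape, tensor.tensor_type.name)
```

The writer side (gguf_writer.py) follows the same pattern in reverse: metadata keys and tensors are appended to a GGUFWriter and then flushed to disk, which is how the convert-*.py scripts in the repository root produce GGUF files.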